PPU-friendly Biomechanical Models for Virtual Medicine

Abstract-The main focus of virtual medicine is to develop and deliver virtual reality-based training and computer-enhanced learning in medicine. Traditionally, medical students learn diagnostic, therapeutic and surgical skills through difficult clinical training on live patients. With changes in health economics, the advances of minimally invasive surgery (MIS) and the shortening of hospitalization times, the source and availability of patients for teaching have become a major problem. Advanced technologies such as virtual reality, visualization and dedicated hardware accelerators for graphics or physics processing can help make the learning process more efficient, engaging and flexible. It is possible to construct immersive environments that provide realistic visualization and haptic feedback for anatomy education and surgical training. In this paper, we share our experiences of using a newly released physics processing unit (PPU) in developing various virtual medicine applications: virtual orthopedic trauma surgery, ultrasound-guided biopsy training, virtual neuro-endoscopy and telemedicine.




I. INTRODUCTION
Virtual medicine refers to the use of virtual reality-based computer systems to assist learning and training in medicine. As MIS techniques advance and the role of digital images in medical research becomes more significant, virtual medicine has become common in clinical practice. MIS, also called endoscopic surgery, relies on small incisions on the patient and makes use of endoscopic devices to carry out surgical procedures that in the past could only be performed through open surgery. The apparent advantages of endoscopic surgery include less trauma, reduced pain and quicker patient convalescence. However, extensive training is required for various MIS procedures so that surgeons can master hand-eye coordination in complex situations. In many cases, training on animals or plastic models may not deliver the realism of real operations. In light of this, virtual reality (VR) based training systems [1,2] provide an attractive alternative training platform. VR surgical simulators are beneficial to both experienced surgeons and medical novices.
In recent decades, a new stage of virtual medicine has been marked by the construction of standardized human body datasets. The Visible Human Project (VHP) was proposed by the National Library of Medicine, US in 1994 and was conducted at the University of Colorado, Denver, Colorado [3]. Complete digitized datasets for a normal male and female, consisting of cryosectional photographic images together with the corresponding computed tomography (CT) and magnetic resonance imaging (MRI) data, were collected. More recently, the Visible Korean Human [4] and Chinese Visible Human [5,6] were acquired in 2001 and 2002, respectively. These datasets provide a thorough understanding of human anatomic structures. They also serve as a platform for the systematic application of computational techniques in clinical medicine and biomedical research [7]. A number of clinical applications utilize visible human datasets in MIS surgical planning and rehearsal. By mapping proper textures based on the original visible human cryosectional images, simulated pre-operative endoscopic views can be generated non-invasively in advance. For instance, VHP datasets have been used in navigating the stomach, colon, spinal column, esophagus, trachea and arch of the aorta near the heart [8].
Recently, a dedicated hardware accelerator, the PPU, has been developed to boost the computational power available for simulating dynamic motion and interaction [33]. The PPU aims to deliver a new level of dynamic motion and interaction and to accelerate physically-based simulations. The PPU-based SDKs provide a user-friendly development environment for real-time physically-based applications. One key highlight is the terabit-per-second internal read/write memory bandwidth. Multiple processing cores are provided for the particular workloads and data types associated with the geometric and linear algebraic calculations of physics. Multiple independent processing elements can support interaction between different types of objects at a relatively large scale, such as smart particles, fluids, cloth, rigid bodies and soft bodies. The memory architecture is able to perform scatter-gather operations at high bandwidth in order to handle sparse data representations efficiently.
In this paper, we discuss how we exploit the relative computational advantage of the PPU in surgical simulation, particularly in simulating the various biomechanical behaviors of soft tissues and body fluids in virtual medicine applications. Our PPU-related surgical simulation projects are discussed in detail; the development of virtual orthopedic trauma surgery, ultrasound-guided biopsy training, virtual neuro-endoscopy and virtual telemedicine has shaped our current and future research and development on PPU-based virtual medicine. Our latest advances in segmentation, imaging, visualization, user-interface design and virtual surgery are also discussed.

II. PPU-FRIENDLY BIOMECHANICAL MODELS
The relative computational advantage of the newly released PPU is exploited for simulating the biomechanical behavior of soft tissue and body fluid in real-time. In this section, we shall briefly describe how we use the PPU in modeling soft tissue, body fluid and their interactions.

Deformable models for soft tissue
A Hookean elastic solid is a solid that obeys Hooke's law, i.e. the stress is linearly proportional to the strain. According to the biomechanics literature, soft tissue is a non-homogeneous, anisotropic, non-linear (non-Hookean) viscoelastic multicomponent material [9]. Both collagen and elastic fibers can be considered linearly elastic (i.e. Hookean); however, due to the non-uniformity of its structure among different soft tissues, the stress-strain relation of soft tissue is non-Hookean. Elastin contributes to the deformation response at low strain levels, while collagen plays a more important role as the strain level increases. Fig. 1(a) depicts a general non-linear stress-strain model of soft tissue. The anisotropy of soft tissue is mainly affected by the orientation of its pre-stress lines, contour lines and wrinkle lines. However, many soft tissues, e.g. skin, can be regarded as statistically homogeneous in many cases. The viscous behavior is mainly contributed by the gelatinous ground substance. The PPU provides built-in linear mass-spring modeling. As in a typical mass-spring system, we can define the mass of each individual node and the elastic stress-strain relationship of each individual spring. The stress-strain curve of the linear mass-spring model is characterized by five parameters: the compression force, the compression length, the rest length, the stretching force and the stretching length. As mentioned, the elastic properties of most soft tissues are not simply Hookean: the tissue has very low stiffness at the beginning and extremely high stiffness after being stretched to a certain extent. A bi-modular stress-strain relation can approximately describe such elastic behavior of soft tissues.

By approximating the curve with a piecewise linear function, we form the bi-modular stress-strain curve. Based on this curve, we can use a non-linear mass-spring system to simulate the tissues and produce a convincing dynamic effect of body tissue.
In order to exploit the PPU to maximize simulation performance, we extend the PPU-accelerated linear mass-spring system to model this bimodular behavior. The key is to combine two Hookean elastic springs so that together they exhibit a bimodular stress-strain constitutive relation. We connect two Hookean elastic springs, each exhibiting different elastic properties, in parallel. In contrast to a single linear spring, a bi-modular spring has two more parameters, namely the turning length and turning force, corresponding to the location (turning point) where the change of stiffness takes place. The first linear spring is in effect in the first phase of the stress-strain curve, while the second linear spring becomes activated when the turning point is reached.
Since the bi-modular springs are constructed from multiple Hookean springs, the decomposition into linear springs can be derived while the deformable model is created. Fig. 1(b) shows how the two linear stress-strain curves are combined to form the bimodular curve. We denote by K1 and K2 the stiffnesses of the two springs that constitute one bimodular spring, where subscripts 1 and 2 correspond to the two different Hookean springs. Suitable values of K1 and K2 have to be learnt through an optimization procedure; the objective function of this process is modeled on the experimental values reported in the biomechanics literature. The resultant stiffness and force parameters are used to construct our bimodular springs for modeling different soft tissues. We shall discuss how we apply these models in various surgical simulations.
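The force law described above can be sketched as follows. This is a minimal illustration of the parallel two-spring construction, not the PPU SDK API; all names and numeric values are our own.

```python
def bimodular_spring_force(length, rest_length, k1, k2, turning_length):
    """Elastic force of a bimodular spring built from two Hookean springs
    connected in parallel. The first spring (stiffness k1) acts over the
    whole range; the second (stiffness k2) engages only once the spring is
    stretched beyond the turning length, giving the low-then-high stiffness
    typical of soft tissue. Illustrative sketch only."""
    force = k1 * (length - rest_length)          # first linear phase
    if length > turning_length:                  # turning point reached:
        force += k2 * (length - turning_length)  # second spring engages
    return force
```

Below the turning point the response is a single Hookean term; beyond it the effective stiffness becomes k1 + k2, reproducing the two linear segments of Fig. 1(b).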

Fluid biomechanics
About 80% of the human body is composed of fluid, so a biomechanical fluid model contributes much to the simulation of various kinds of surgery. In particular, blood modeling is one of the indispensable components in most surgical simulation applications, e.g. in orthopedics training and simulations of other percutaneous or interventional procedures. The study of hemodynamics involves an in-depth investigation of advanced computational fluid dynamics (CFD) as well as the rheological properties of blood. Interactive modeling of hemodynamics is an even bigger challenge, and has been considered one of the most complicated tasks in surgical simulation.
The PPU provides built-in support for fluid modeling through a well-known Lagrangian particle method called smoothed particle hydrodynamics (SPH). It is capable of simulating a Newtonian fluid through proper settings of a set of scalar quantities, including the density, pressure and viscosity. SPH is an efficient particle-based method initially proposed for the simulation of astrophysical problems such as stars [10]. The technique was first introduced to the computer graphics community to depict fire and other gaseous phenomena [11]. Recently, Daenzer proposed a particle-system method for simulating bleeding and smoke in virtual surgery [12]. Muller et al. [13] adopted SPH in bleeding simulation with models of up to 3000 particles. Many body fluids, such as blood, are non-Newtonian fluids, i.e. the viscosity is not constant throughout the range of shear rates undergone in the flow. We have proposed to approximate the non-Newtonian behavior of blood by dynamically adjusting the value of the viscosity at regular time steps. The effects of the simulation can be tuned by adjusting the values of the density and viscosity as well as the other parameters in the basic SPH equations. In later sections, we describe how we make use of the PPU-based SPH models in simulating surgical phenomena such as bleeding.
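One way to realize the dynamic viscosity adjustment mentioned above is to evaluate a shear-thinning constitutive model each time step and feed the result back into the (Newtonian) SPH solver. The sketch below uses the Carreau model; the default parameter values are commonly cited fits for whole blood, but both the choice of model and the numbers are assumptions for illustration, not the values used in our system.

```python
def blood_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau shear-thinning model: apparent viscosity (Pa*s) as a function
    of shear rate (1/s). mu0 / mu_inf are the zero- and infinite-shear
    viscosities; lam and n shape the transition. Parameter values are
    illustrative literature fits for blood, not measured data."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Per time step, estimate the local shear rate from the particle velocities
# and update the fluid's viscosity parameter with blood_viscosity(rate).
```

At zero shear the model returns the high resting viscosity, and it decays monotonically toward the infinite-shear plateau, which is the qualitative non-Newtonian behavior of blood described above.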

Contact models
Realistically simulating interactions between body fluid and soft tissues, such as blood-skin interaction, can greatly enhance the effect of surgical simulation. The PPU provides a so-called continuous collision detection (CCD) mechanism for fast collision detection of moving objects, which is suitable for many surgical phenomena. This mechanism assumes that any fluid involved is modeled through particles (SPH-based). CCD requires a skeleton mesh of triangles and vertices, which is embedded in the objects involved. Soft tissues colliding with fluid can simultaneously carry a deformation model, a collision model and a visualization model; these models may have different data structures, but their geometric topology should be consistent. Two key parameters, the restitution coefficient and the adhesion factor, can be defined and adjusted dynamically in order to obtain different colliding effects. The restitution coefficient controls how much the particles bounce when they collide with soft bodies, while the adhesion factor controls how easily a single particle slides along a surface (mesh).
When a particle collides with a rigid body, either static or dynamic, a collision impulse is applied to the particle. The API of the PPU SDK allows the restitution coefficient to be defined separately for collisions with static and dynamic shapes. These parameters define to what extent the particles bounce back when they collide with other actors. The range of the restitution coefficient is 0 to 1. A value of 0 means that the particles do not bounce and lose their entire momentum in the direction of the contact normal. A value of 1 means that the particles are reflected without any loss of momentum. In addition, an adhesion (friction) factor can be set to control how easily particles slide along a surface (mesh). A value of 1 means that a particle loses its entire momentum tangent to the surface, i.e. it sticks to the surface, while a value of 0 means that the particle loses no momentum tangential to the surface. The adhesion factors can likewise be set separately for dynamic and static collisions. This CCD mechanism has been used extensively in our surgical simulators.
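The effect of the two coefficients can be summarized in a few lines: split the particle velocity into normal and tangential parts, then scale each part independently. This is a sketch of the behavior described above, not the SDK's actual implementation.

```python
def collide_particle(velocity, normal, restitution, adhesion):
    """Resolve a particle-surface collision. The component of the velocity
    along the (unit) contact normal is reflected and scaled by the
    restitution coefficient; the tangential component is damped by the
    adhesion factor (1 = sticks, 0 = slides freely)."""
    vn = sum(v * n for v, n in zip(velocity, normal))       # speed along normal
    v_normal = [vn * n for n in normal]                     # normal part
    v_tangent = [v - c for v, c in zip(velocity, v_normal)] # tangential part
    return [(-restitution) * a + (1.0 - adhesion) * b
            for a, b in zip(v_normal, v_tangent)]
```

With restitution 1 and adhesion 0 the particle is mirrored without momentum loss; with restitution 0 and adhesion 1 it stops dead on the surface, matching the two extreme cases in the text.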

Simple tissue cutting
Soft tissue cutting is one of the indispensable operations in many surgical simulation applications [14]. Favorable capabilities provided by the PPU for modeling cutting are the fragment model and the cloth model, which offer a series of functions for simulating cutting-like effects. For example, a thin membrane can be constructed as a built-in PPU cloth, modeled through a mass-spring-damper system with appropriate stretching and bending properties. By default, the cloth can be set to resist stretching to a certain degree before it is torn into smaller pieces under a breaking force. Several parameters, such as the thickness, density, bending stiffness, stretching stiffness and damping, can be tuned in order to achieve a desirable effect. One can also enable the tearing property by setting the NX_CLF_TEARABLE flag in the cloth descriptor during creation.
One example of using PPU cloth in medical simulation is to model an elastic layer of membrane. The default tearable feature of cloth can mimic a cutting operation applied on the membrane. One is allowed to explicitly specify a set of so-called tearing vertices on the cloth. In this sense, one can easily define a cutting plane (defined by the cutting tool) and identify the set of vertices to be torn by the cutting tool. Currently, either tearing vertices or tearing lines can be set. Tearing vertices are convenient for modeling puncture-like tissue cutting, while tearing lines are suitable for modeling scalpel-based cutting procedures.
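The "stretch until a breaking force is exceeded" rule behind the tearable cloth can be sketched with a plain mass-spring mesh. This is an illustrative stand-in for the SDK's internal behavior; the threshold criterion (maximum stretch ratio) and data layout are our own assumptions.

```python
import math

def tear_overstretched_springs(springs, positions, max_stretch):
    """Return the springs that survive one tearing pass: a spring tears
    once its current length exceeds max_stretch times its rest length,
    analogous to the tearable-cloth behavior described above.
    springs: list of (i, j, rest_length); positions: list of 2D points."""
    survivors = []
    for i, j, rest in springs:
        length = math.dist(positions[i], positions[j])
        if length <= max_stretch * rest:     # below breaking threshold
            survivors.append((i, j, rest))
    return survivors
```

Running this check each frame and dropping (or splitting) the failed springs produces the progressive opening of the membrane along the cutting path.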

III. VIRTUAL ORTHOPEDIC TRAUMA SURGERY
Orthopedic trauma surgery simulation is a challenging task due to the co-existence of various kinds of body tissues with heterogeneous biomechanical properties. We have implemented a novel orthopedic trauma surgery training system, with both real-time and realistic soft tissue deformation and bleeding simulation, by exploiting the computational power of the PPU. The upper limb region of the selected CVH data is segmented into dermis, subcutaneous fat, fascia, deltoid muscle, brachial muscle, triceps brachii muscle, biceps brachii muscle, coracobrachialis muscle, extensor carpi radialis longus muscle, artery, vein, nerve, bone and bone marrow. A multilayered mass-spring deformable model is generated based on the segmented CVH data set. Details regarding CVH data sets can be found in [15,16,17,18,19,20,21]. A bi-modular stress-strain modeling scheme is employed to simulate the nonlinear viscoelastic properties of skin and skeletal muscle. The modeling parameters are optimized through a simulated annealing process based on the experimental stress-strain relations reported in the biomechanics literature. Bleeding is simulated through the use of smoothed particle hydrodynamics (SPH). A GPU-based marching cubes algorithm is also developed to accelerate the rendering process of the bleeding simulation. Experiments show that our system can provide tissue deformation and bleeding simulation with both interactivity and realism in complex surgical environments. Our system is now being deployed at the Department of Orthopaedics and Traumatology at the Prince of Wales Hospital, Hong Kong as a daily teaching application [22].

Tissue Simulation
We use manual segmentation to identify different anatomical structures from the CVH cryosectional slices and then extract surface meshes from the segmented data in stereolithography format. A re-parameterization process is performed on these preliminary meshes to simplify the mesh structure. To begin the model generation, we first compute the centroid of every triangle of the surface mesh (Fig. 3(a)). From every centroid, the bottom masses of the first layer are generated by shooting a ray from the centroid towards the centreline until it hits the next layer. We adopt a Voronoi graph topology to organize the new masses at the bottom of the first layer. Every mass of the surface mesh has a queue storing the masses generated from the triangles that have it as a vertex. After all new masses are generated, every queue is sorted based on the triangle IDs of the surface mesh and the sorted masses are connected one by one. Thus, the bottom of the first layer forms a surface composed of many polygons (Fig. 3(b)). We triangulate every polygon and connect the top and bottom masses of the first layer to create a tetrahedral skin layer, while cross-linked cubic blocks are constructed by cross-connecting the masses to form the fatty and skeletal muscle layers (Fig. 3(c)). The multilayered mass-spring model generated from the CVH upper limb is shown in Fig. 3(d).
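The first step of the layer-generation pipeline above is straightforward to sketch. The mesh representation (vertex list plus index triples) is an assumption for illustration; the ray-shooting and queue-sorting stages are omitted.

```python
def triangle_centroids(vertices, triangles):
    """Compute the centroid of every surface triangle, the seed points of
    the layered model generation: a ray is later shot from each centroid
    toward the centreline to place the bottom masses of the first layer.
    vertices: list of (x, y, z); triangles: list of (i, j, k) indices."""
    return [tuple((vertices[a][k] + vertices[b][k] + vertices[c][k]) / 3.0
                  for k in range(3))
            for a, b, c in triangles]
```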
To ensure the realism of tissue deformation, we implemented a bimodular model to approximate the exponential-like stress-strain curve of human skin and the passive properties of skeletal muscle [23]. The parameters of this model are optimized by a simulated annealing algorithm. The relative computational advantage of the PPU is exploited for simulating bleeding based on SPH (Fig. 4(a)). Realism is achieved by adjusting the biomechanical properties of the simulated blood and providing proper collision detection between the simulated blood and soft tissues. Furthermore, a GPU-based marching cubes algorithm is developed to accelerate the rendering process (Fig. 4(b)) [24].
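The parameter optimization can be sketched as a simulated-annealing fit of the two stiffnesses to stress-strain samples. This is a simplified illustration: the cooling schedule, step sizes and fixed turning point below are arbitrary choices, not the published setup.

```python
import math
import random

def fit_bimodular(strains, stresses, turning, iters=30000, seed=7):
    """Simulated-annealing fit of the stiffnesses (k1, k2) of a bimodular
    spring to experimental stress-strain samples. Sketch only: the turning
    strain is assumed known and annealing constants are illustrative."""
    def model(eps, k1, k2):
        s = k1 * eps
        if eps > turning:
            s += k2 * (eps - turning)   # second spring past the turning point
        return s

    def cost(k1, k2):                    # sum of squared fitting errors
        return sum((model(e, k1, k2) - s) ** 2 for e, s in zip(strains, stresses))

    rng = random.Random(seed)
    k1, k2 = 1.0, 1.0
    cur = cost(k1, k2)
    best = (k1, k2, cur)
    temp = 1.0
    for _ in range(iters):
        c1 = max(1e-6, k1 + rng.gauss(0.0, 0.3))   # perturb the stiffnesses
        c2 = max(1e-6, k2 + rng.gauss(0.0, 0.3))
        c = cost(c1, c2)
        # accept improvements always, worse moves with Boltzmann probability
        if c < cur or rng.random() < math.exp((cur - c) / temp):
            k1, k2, cur = c1, c2, c
            if cur < best[2]:
                best = (k1, k2, cur)
        temp = max(1e-6, temp * 0.999)   # geometric cooling
    return best[0], best[1]
```

On noiseless synthetic data the fit recovers the generating stiffnesses to within a few percent; with real experimental curves the tolerance of the fit reflects measurement noise.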

IV. ULTRASOUND-GUIDED BIOPSY TRAINER
One of the most fundamental, but difficult, skills to acquire in interventional radiology is ultrasound-guided biopsy. Ultrasound-guided biopsy is performed to find an abnormal area of tissue and guide its removal for examination. To ensure that safe procedures can be performed by trainee surgeons, extensive training, especially in needle placement, is essential. We therefore propose a virtual reality-based training system for practicing ultrasound-guided biopsy procedures. We propose a novel approach to the simulation of ultrasound imagery through stitching multiple ultrasound volumes acquired by a dedicated ultrasound probe [25]. The stitched ultrasound volume is further fused with the corresponding computed tomography (CT) volume. Such a correlation eases the creation of a virtual anatomic model for interactive navigation (Fig. 6). In addition to high-quality visual renditions, we also propose a six degrees-of-freedom (DOF) force model in order to provide users with realistic haptic rendering of needle insertion throughout the training sessions [26]. Our system can truly provide a non-invasive way of training biopsy practice without imposing extra risks on live patients.
Actual biopsy procedures are performed with both hands. To simulate these scenarios, multiple haptic devices should be provided in the virtual environment. However, it is a challenging task to support multiple haptic devices while maintaining both visual and haptic update rates at an interactive level. Usually, to obtain a realistic haptic sensation, the haptic rendering loop must maintain a 1000 Hz update rate, which is much higher than that of graphic rendering (about 20-30 Hz). The situation is even more stringent when multiple haptic devices are involved. Therefore, the architecture and algorithms of such applications should be well designed to ensure that the system performance can fulfill the requirements of human-machine interaction. To present smooth force feedback to end users, a relatively high haptic frame rate has to be guaranteed; otherwise, the user may perceive discontinuous forces. In order to maintain a high haptic frame rate, efficient collision detection between the virtual tools and anatomies is of particular importance for presenting the user with realistic force feedback. We have deployed the built-in continuous collision detection (CCD) mechanism of the PPU to accelerate this part. There are two basic interactions of virtual tools during the biopsy procedure. The first is to use an ultrasound scanning probe to acquire planar ultrasound images, while the second is to use a needle to obtain the target tissue (see Fig. 8).
To model the first interaction, we model the skin surface with a triangular mesh, while the dedicated ultrasound probe is represented by a single virtual point. A 3-DOF force feedback device is used for the virtual probe. The force modeling of the scanning process detects how deep the virtual probe goes inside the body and computes the resistance force through the linear Hooke's model. This interaction can be accelerated by a PPU implementation. The modeling of realistic forces in a simulated needle insertion is somewhat more involved. In addition to the collision detection between the virtual needle and the skin surface during the pre-puncture stage, contacts between the needle and the inner organ surfaces have to be tracked. We have focused on the abdominal region, where the skin, muscle, bone and liver layers have been extracted for the modeling of the needle insertion force. The realization of PPU-based collision detection can greatly enhance the real-time responsiveness of our training system.
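A deliberately simplified, one-dimensional stand-in for the layered insertion force might look as follows. It reduces the 6-DOF model of [26] to a single axial Hookean term per layer; the layer list and values are hypothetical.

```python
def needle_axial_force(depth, layers):
    """Axial resistance on the needle as a sum of per-layer Hookean terms:
    each tissue layer the tip has entered contributes its stiffness times
    the penetration into that layer. layers is a list of
    (thickness, stiffness) pairs ordered from the skin inwards.
    A 1-DOF illustrative simplification, not the full force model."""
    force, top = 0.0, 0.0
    for thickness, stiffness in layers:
        if depth <= top:                  # tip has not reached this layer
            break
        penetration = min(depth, top + thickness) - top
        force += stiffness * penetration  # linear Hooke term for this layer
        top += thickness
    return force
```

The stiffness jump at each layer boundary is what gives the characteristic "pop" the trainee should feel when the needle punctures a new tissue interface.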

V. VIRTUAL NEUROENDOSCOPY
Neurosurgery is the discipline of medicine and specialty of surgery that provides the operative and non-operative management (i.e., prevention, diagnosis, evaluation, treatment, critical care and rehabilitation) of disorders of the central, peripheral and autonomic nervous systems, including their supporting structures and vascular supply [27,28]. However, the limited view and orientation throughout an intervention increase the inherent risks of serious complications. To overcome these drawbacks, we propose the use of a virtual endoscopy system to improve the planning of, and orientation during, this procedure.
To access the ventricles of the brain, one needs to go deep into the human brain. Creating a new path to the ventricle would severely damage healthy brain tissue. In order to minimize the destruction of brain tissue, the existing cavities in the brain can be used as the path for the movement of the endoscope in minimally-invasive neurosurgery. In our simulation we focus on the ventricular system of the human brain. The operation starts by drilling a hole through the skull and placing a tube, called a catheter, along the hole and the existing cavities of the brain into the third ventricle. Fig. 6 shows one of the views observed after the endoscope has entered the lateral ventricle.
Because of the water-like optical property of the cerebrospinal fluid (CSF), which fills the ventricular system, viewing of the surrounding tissue is possible. Due to respiration and other metabolic activity, the CSF flows through the cavities inside the human brain. In some cases, the connection between the third and fourth ventricles, the aqueduct, is blocked by occlusion or stenosis. This causes a serious disturbance of the natural flow of the CSF, which frequently leads to a dangerous increase of pressure inside the skull and can damage the brain severely. When such a case is discovered, the surgeon tries to clear or expand the aqueduct using the minimally-invasive neurosurgery described above. The clinical picture of this hydrocephalus is one of the major indications for a minimally-invasive intervention in the ventricular system, where a bypass is realized by perforating the floor of the third ventricle. The cistern is then entered through the membranous floor of the third ventricle (shown in Fig. 6(a) and (b)), and various surgical operations can be performed within the ventricular space. We propose a neurosurgical simulation system which integrates virtual reality technology and the power of the PPU. Since quite a number of tool-tissue interactions are involved in this process, we model the various surgical tools as static shapes while the soft tissues are modeled as dynamic shapes (in mesh form). In this sense, we can apply the default PPU-based CCD to handle these operations in an efficient manner. Fig. 7 shows a comparison between the actual endoscopic view during the surgery and our simulated view; both views are located at the entrance of the interventricular foramen. The overall graphical user interface is shown in Fig. 8, where an anatomical navigation view, an MRI view and a simulated endoscopic view can be observed. The floor of the third ventricle (a thin membrane) is modeled by the PPU built-in cloth model. As mentioned previously, the endoscope and other surgical tools are modeled by PPU static shapes. When a tool collides with the membrane, a certain degree of deformation is allowed. Once the applied force is large enough to break the membrane, the cloth model is torn accordingly. This simulates a cutting operation applied on the floor of the third ventricle. Fig. 11 shows the process of penetrating the membrane. In Fig. 11(a), (b) and (c), the virtual tool is moving towards the membrane, while Fig. 11(d) shows the opening of the path into the third ventricle.

VI. COLLABORATIVE SURGICAL SIMULATION
It is a challenging task to design and implement a high-performance collaborative surgical simulation system because of the sharp conflict between the requirement of maintaining high levels of state consistency and the limitation of network transmission capacity [29]. In a single-user stand-alone surgical simulation system, it is necessary to realistically model the interventions and scenarios in surgery, which involves real-time interactive deformable simulation of the biomechanics of human tissues. Improving the accuracy of simulated tissue responses tends to reduce the simulation speed and increase response latency in interactive procedures.
The work becomes even more demanding when surgical simulation is extended to simulate co-operative work among multiple medical professionals over the network.Stringent network conditions such as restricted network bandwidth and stochastic network delay make it difficult to synchronize the users and maintain a high level of state consistency.
In this work, we propose an integrated framework to support the development of collaborative surgical simulation systems. First, the PPU-based deformation presented in Section II is incorporated into this framework to support interactive operations and accordingly reduce response latency. Second, a client-server (CS) architecture and an extensible transmission protocol are proposed to maintain system consistency. Third, a number of network management approaches are implemented to ensure the reliability and efficiency of the framework. In addition, middleware-based methods are adopted in our design and implementation to facilitate the transformation of standalone simulators into collaborative applications.

Services Management
The hierarchy of the main modules in our framework is shown in Fig. 10, where each layer provides services for the layer above. A friendly user interface is provided on the client side to allow the user to visualize and interact with the surface or volumetric models generated from CT, MR or other medical image datasets. Deformation simulation of virtual tissues or organs is carried out using deformation models when external forces are applied to the nodes of these objects. The architecture is extensible to integrate commercial haptic devices. The database stores system parameters, image datasets and the nodal information of the 3D reconstructed models. The Network Infrastructure layer (the bottom layer) is responsible for establishing network connections through the socket programming interface using the Wrapper Façade pattern [32]. Reliable connection is enabled by using TCP.
On the server side, the core component is the Service Manager, which is responsible for initializing and managing all kinds of services. It receives requests from remote or local administration consoles and calls the corresponding services to fulfill those requests. A topology view of the clients is kept in the Client Acceptor, and users can join and leave the virtual surgical simulation until the server socket is closed due to extended idle time or other constraints. The Message Queue provides a FCFS (First-Come-First-Served) queuing mechanism to buffer messages for the Command Handler. The latter is responsible for transmitting the incoming messages to the corresponding destinations. Other tasks of the Command Handler include the multiplexing and demultiplexing of client messages according to the communication protocol defined in the Configuration module. Middleware-based methods are used to design and implement these modules. Fig. 11 shows the module structure of the server side.

Collaboration Mechanisms
Two kinds of mechanisms are provided in our framework to support synchronous collaboration in surgical simulation: coupling control and token control. Coupling control decides which operations should be executed collaboratively among all users and which operations should be handled individually by each participant. For example, deformation operations such as stabbing, stitching and cutting are required to be transmitted to all participants in order to demonstrate surgical procedures globally, whereas transformation operations such as translation, rotation and scaling are considered individual operations, since every user can choose a preferred view during the procedure. Participants are given the right to independently choose the coupling operations. In addition, the simple and well-known token-control mechanism is extended to regulate the operations of multiple participants on the common virtual organs.
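A minimal token-control sketch, under the assumption of a single token per shared organ and an FCFS wait queue (our extension's details differ, so treat this as illustrative only):

```python
class TokenControl:
    """Only the participant holding the token may apply coupled
    deformation operations (stabbing, stitching, cutting) to the shared
    organ model; other requesters queue until the token is released."""

    def __init__(self):
        self.holder = None
        self.waiting = []

    def request(self, user):
        if self.holder is None:
            self.holder = user          # grant immediately
        elif user != self.holder and user not in self.waiting:
            self.waiting.append(user)   # FCFS wait queue

    def release(self, user):
        if self.holder == user:         # pass to the next waiter, if any
            self.holder = self.waiting.pop(0) if self.waiting else None

    def may_deform(self, user):
        return self.holder == user
```

Individual (uncoupled) operations such as changing the local viewpoint bypass this check entirely.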

3D and 2D Synchronization
Different applications require packets of different sizes, so to conserve network resources two packet types are defined in our framework: one for 3D transmission and the other for 2D transmission. A control mechanism ensures that each transmission uses the proper packet type.
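A possible wire layout for the two packet types is sketched below: a tiny fixed-size 3D packet carrying a transformation update, and a variable-size 2D packet carrying frame-buffer image data. All field names and the header format are assumptions for illustration; the paper does not publish its packet layout.

```cpp
// Illustrative layout: a tagged header distinguishes the compact 3D
// transform packet from the much larger 2D image packet.
#include <cstdint>
#include <cstring>
#include <vector>

enum class PacketType : uint8_t { Transform3D = 1, Image2D = 2 };

#pragma pack(push, 1)
struct Header {
    PacketType type;
    uint32_t   length;   // payload bytes that follow the header
};
struct Transform3D {     // quaternion + translation: 7 floats, 28 bytes
    float q[4];
    float t[3];
};
#pragma pack(pop)

// Serialize either packet kind into one wire buffer, header first.
std::vector<uint8_t> pack(PacketType type, const void* payload, uint32_t len) {
    std::vector<uint8_t> buf(sizeof(Header) + len);
    Header h{type, len};
    std::memcpy(buf.data(), &h, sizeof h);
    std::memcpy(buf.data() + sizeof h, payload, len);
    return buf;
}
```

The design point is simply that a 3D update fits in tens of bytes while a 2D screenshot runs to hundreds of kilobytes, so tagging packets by type lets the sender pick the cheapest representation for each client.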
To send 3D transformation data, we choose the quaternion, a four-dimensional extension of the complex numbers, as a compact representation of orientation. Compared with a 4×4 transformation matrix, it saves three quarters of the bandwidth. Supporting lower-end machines is essential in many cases; however, such machines may not have enough computational power to achieve high-quality visualization. To solve this problem, the server sends the frame-buffer data after each frame of 3D rendering is completed, and the low-end client simply receives and displays the screenshot image of each rendered frame. The computational power required at the client is thus lowered significantly.
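The bandwidth claim is easy to check: a unit quaternion needs 4 floats (16 bytes) where a 4×4 homogeneous matrix needs 16 floats (64 bytes), a saving of exactly three quarters. The helper below (a standard textbook conversion, not code from the paper) shows that the receiver loses nothing, since the rotation matrix can be rebuilt from the quaternion.

```cpp
// Standard unit-quaternion to rotation-matrix conversion: the receiver
// reconstructs the full 3x3 rotation from the 4 transmitted floats.
#include <array>
#include <cmath>

using Quat = std::array<float, 4>;                 // (x, y, z, w)
using Mat3 = std::array<std::array<float, 3>, 3>;

Mat3 quaternionToMatrix(const Quat& q) {
    float x = q[0], y = q[1], z = q[2], w = q[3];
    return {{{1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)},
             {2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)},
             {2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)}}};
}
```

So each orientation update costs 16 bytes on the wire instead of the 64 bytes a full 4×4 matrix would need, and the conversion above is cheap enough to run per-frame even on modest client hardware.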

Experiments
The proposed framework has been implemented on Windows and UNIX platforms using Visual Studio 2005 (C++) and Eclipse SDK 3.1.0 (Java) respectively. The network infrastructure is built on the Winsock API and J2SE 5.0, and graphical rendering is performed with OpenGL. A series of experiments was conducted to evaluate the timing performance and the quality of collaborative simulation in our framework. The server runs on a computer with a Pentium(R) 4 CPU at 3.20 GHz, 1024 MB RAM and an NVIDIA GeForce 6800 display adapter. All clients connect to the server through a 10 Mb intranet. Each experiment simulates the real manipulations of certain surgical procedures with a different number of participants. The network latency is about 200-300 ms with five participants and rises to about 500 ms when the number of participants is increased to eight. These results are encouraging, as such latencies are acceptable for virtual surgical training and rehearsal. The average frame rate in these experiments stays above 30 frames per second, indicating that smooth graphics rendering can be achieved with the proposed framework.
Collaborative deformation, performed by applying external forces to surface models of the stomach and ligament of the virtual human, is shown in Fig. 12 (a) and (b) respectively: a client manipulates the virtual ligament and stomach on one machine while another client immediately visualizes the resulting deformation through our framework. Owing to the unavoidable network latency, some differences exist between the views of the two clients during the deformation process. These results demonstrate that the proposed collaborative simulation framework can be used in the development of collaborative surgical simulation systems.

VII. CONCLUSION
The PPU has been successfully used to simulate real physics with a scale, sophistication, fidelity and level of interactivity that has transformed a number of immersive physics-based games. In this paper, we have outlined our virtual medicine research and applications related to the PPU. In particular, the PPU-related research carried out at CUHK has been discussed in detail, while related applications such as virtual orthopedic trauma surgery, the ultrasound-guided biopsy trainer, virtual neuro-endoscopy and virtual telemedicine briefly shape our current and future research and development on visible-human-related virtual medicine. Our experimental results demonstrate that the PPU can dramatically shorten the computing time even when the structure of the models is relatively complex. To the best of our knowledge, our work is the first to use a PPU in a surgical simulation system.

Fig. 1 .
Fig. 1. (a) The stress-strain relation of soft tissue, and (b) the bimodular spring are shown.

Fig. 2 .
Fig. 2. The computational model of SPH: (a) principle of SPH and (b) surface extraction of the blood.

Fig. 3 .
Fig. 3. The generation process of the multilayered deformable model: (a) base triangular mesh, (b) centroid-connected Voronoi graph, (c) tetrahedral block and cross-linked cubic block, and (d) the upper-limb model from CVH.

Fig. 5 .
Fig. 5. Visualization results: (a) deformation caused by pulling the mesh and (b) bleeding simulation when a blood vessel is accidentally cut.

Fig. 6 .
Fig. 6. (a) The simulation of a 3D anatomic view, and (b) the corresponding simulation of ultrasound imagery is shown.

Fig. 7 .
Fig. 7. (a) The overview of the hardware setup, and (b) a close-up view of the two-hand haptic feedback system are shown.

Fig. 8 .
Fig. 8. (a) View before penetrating the floor of the third ventricle, and (b) opening on the floor of the third ventricle.

Fig. 12 .
Fig. 12. The architecture of our framework: (a) client-server architecture for dynamic state management and database synchronization; (b) the modules of the client and server sides and their relationships.

Fig. 13 .
Fig. 13. The module structure of the server.