Methodology for Controlling Contact Forces in Interactive Grasping Simulation

The paper proposes a new methodology for interactively simulating the grasping of virtual product prototypes, with the goal of evaluating the contact forces between the grasping hand and the product as well as the load on the human arm. Interaction between product concepts and users happens in a virtual environment, in which the user controls a virtual hand interactively. The contact between the virtual hand and the grasped product is simulated and visual feedback is provided to the user. Controlling the virtual hand interactively in real time holds many challenges. One of them is mapping the motion of the user to contact forces in a way that results in stable grasping of objects. In this paper we present a new methodology to convert and map the measured position of the real hand into contact forces so that the contact between the virtual hand and the object remains stable. Our approach applies a multi-objective optimization that takes into account the posture and anthropometric properties of the grasping hand, as well as the penetration of the hand into the grasped virtual object, in order to find the optimal arrangement of contact forces. The paper reports on the principle of our grasping control methodology and presents test cases that show the advantages and disadvantages of the proposed approach.


I. INTRODUCTION
User evaluation of product concepts for haptic interaction plays an important role in the design of handheld devices, such as bottles of shower gels and shampoos, for which the phenomenon of grasping needs to be evaluated. It provides valuable information for designers on ergonomics, user experience, and product behavior. Though several methods are available to conduct user evaluation, user studies, and/or use-context exploration of handheld devices, most of them require the existence of a real product or a physical prototype. User evaluation of products in virtual environments, in the form of a human-in-the-loop assessment based on realistic computer simulation, is still in its infancy. The existing approaches are mostly based on non-interactive techniques, in which a virtual avatar is controlled by prescribed motion or by predefined forces. On the other hand, some progress has already been achieved with human-in-the-loop product interaction and real-time simulation of grasping based on measuring contact forces on the hand of the user while interacting with physical prototypes; the measured forces are then applied in different virtual hand-virtual prototype interaction experiments [14, 24]. In the case of direct interaction with virtual prototypes, however, the above-mentioned methods cannot be used, because it is impossible to establish physical contact between a physical object (i.e. the hand of the user) and a 3D virtual image (i.e. the virtual product). To make direct interaction with virtual prototypes possible, control mechanisms should be developed which enable the user to change the magnitude and location of contact forces, and to easily change grasping postures. One possible approach to controlling the grasping forces is to create a relationship between the penetration of the hand into the virtual object and the magnitude of the grasping forces.
This approach, however, has to take into account (i) that the user is not able to position his hands relative to the grasped virtual object in a stable way, (ii) the measurement errors of the devices tracking the position of the user's hands, and (iii) the distortion of the displayed 3D image, which can negatively influence the perception of the virtual object. To achieve proper control of contact forces and to simulate the interaction with handheld products accurately, these errors need to be compensated for.
There are two challenges when performing interaction with virtual objects. The first is to realize real-time simulation of the interaction, and the second is to facilitate natural and realistic control of the contact forces in grasping the virtual product. A typical human-in-the-loop grasping simulation consists of the following steps: (a) capture and processing of the hand motion data, (b) measurement or computation of the grasping/contact forces, (c) simulation of the interaction of a virtual hand with the grasped object, and (d) provision of visual, tactile and haptic feedback.
Our implementation of grasping simulation captures hand motion data by optical tracking, which measures the positions of various markers placed on the user's hand. The markers are tracked at 200 fps and their measured 3D positions are used to determine the posture of a virtual hand and its position in the 3D virtual space. In this paper we propose a new methodology, which measures the motion of the human hand, computes the intended contact forces based on the penetration of the hand into the virtual object, simulates the behavior of the virtual object, and provides visual feedback to the user. The proposed methodology takes into account the anatomy of the human hand when determining the grasping forces. In addition, it makes it possible to control the grasping forces based on the penetration of the human hand into the virtual product model and on the posture of grasping. The use of anthropometric data in the grasping control enabled us to achieve real-time grasping simulation with reasonable accuracy. In this paper we report on the principle of controlling grasping forces, which maps the penetration of the virtual hand into the grasped object onto grasping forces. This principle has been validated in various user studies.

II. STATE OF THE ART
In various animation and simulation tasks, forward kinematics or inverse kinematics is used to reconstruct the motion of the hand based on measured data. When inverse kinematics is applied to determine the motion of the human hand, the positions and angles of the joints should be computed based on the measured positions of the fingertips and a set of constraints. However, this problem is inherently underdetermined in most cases. A compliant finger model addressing this was presented in [14], in which the finger is modeled as a kinematic chain of three revolute joints and the joint angles are described as a vector. Compliance is represented by a collection of torsional springs that, when displaced from a reference configuration, produce joint torques through a spring relation.
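As a minimal illustration of such a torsional-spring compliance model, the sketch below computes the spring torques for a three-joint finger displaced from its reference configuration. The linear per-joint stiffness and all numeric values are our illustrative assumptions, not data from [14].

```python
# Torsional-spring compliance for a finger modeled as three revolute joints
# (illustrative sketch; stiffness values are assumed, not taken from the paper).
# Each spring produces a torque proportional to the displacement from a
# reference configuration: tau_i = -k_i * (theta_i - theta_ref_i).

def joint_torques(theta, theta_ref, stiffness):
    """Return the torque each torsional spring exerts for the given joint angles."""
    return [-k * (t - t0) for t, t0, k in zip(theta, theta_ref, stiffness)]

# Example: a finger flexed 0.2 rad past its rest posture at every joint.
theta     = [0.5, 0.7, 0.4]   # current joint angles (rad)
theta_ref = [0.3, 0.5, 0.2]   # reference (rest) angles (rad)
stiffness = [1.2, 0.8, 0.5]   # torsional stiffness per joint (N*m/rad), assumed

print(joint_torques(theta, theta_ref, stiffness))
```

The negative sign makes each torque restoring: the spring drives the joint back toward its reference angle.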
In the case of forward dynamics, the positions and angles of the joints are computed based on the torques and forces applied to the joints. For instance, a kinematic model for the flexion and extension of the fingers has been developed by Lee and Kroemer [17]. Their model is based on the assumption that the moment arms of the tendons at the joints are constant. Considering external forces affecting the joints, they compute the forces in the tendons for the given joint configuration. Albrecht et al. developed a system based on a reference hand model, which is animated by taking into consideration the muscle contraction values [1]. They introduced a hybrid muscle model that comprises pseudo-muscles and geometric muscles. While the pseudo-muscles control the rotation of the bones based on anatomical data and mechanical laws, the deformation of the geometric muscles causes realistic bulging of the skin tissue. As a result, the created animations automatically exhibit anatomically and physically correct behavior. However, their model includes neither the movements of the bones due to tendon movements, nor the detection of collisions among the parts of the hand.
Real-time simulation of the deformation of the grasping hand due to contact forces has been implemented based on a particle-system model developed by Shieh et al. [27]. They applied a unified mass-spring representation to the human hand and the grasped object. However, their simulation system suffers from some shortcomings. The simulation model has only limited response surfaces according to the movement of the virtual hand. Although a deformable model based on physical rules is simulated in their system, the non-linear behavior of human tissue, the stick-slip effect of grasping contact and other advanced phenomena are not considered, due to the low resolution of the hand model used for the sake of real-time capability.
Measurement of the grasping forces can be important in the development of a grasping simulator in order to validate the simulated results. The maximal forces exerted by the fingers were measured using strain gauge transducers [7]. The developed model showed that, for simple tasks, finger strength could also be predicted from measured contact forces. A finger device presented in [15] accurately assesses the fingertip forces and torques on three fingers.
An automatic ergonomic assessment of the ease of finger motions in operating a user interface is presented in [8]. A digital hand model (called "Dhaiba-Hand") is created based on kinematic analysis of motion-captured data and MRI scans. The minimum variance model was used in [28] for evaluating grasping motions and postures with a complete model of the hand and the arm.
The role of visual cues is very important in grasping simulation. Cuijpers et al. have investigated the role of haptic feedback when grasping (virtual) cylinders with an elliptical circumference [4]. They showed that both visual and haptic information are important for planning, reaching and grasping. Mason has assessed the role of graphical representation of the hand in reaching movements to acquire an object in virtual reality environment [18].
The interaction model of the multi-fingered hand maps the fingertip forces into a resultant wrench on the object with regard to its center of mass. A stable grasping maneuver is the movement of the fingers to form a grasping posture and to completely restrain the object against any disturbance wrench. In the case of robotic hands, a well-known grasp planning system is "GraspIt!" [19], which can perform grasping posture evaluation (force closure and grasp quality).
Each of the investigated works proposes some sort of control mechanism that operates on measured contact force data. However, when direct interaction of the (virtual) hand of the user with a virtual object is required, there are no contact forces to be measured. Our approach addresses this problem and proposes a possible solution, which converts the penetration of the hand into contact forces.

Conditions of stable grasping
Fearing defined the following three conditions of stable grasp in terms of resistance to slipping [9]. The grasped object must be in equilibrium, so that the sum of all forces and torques acting on the object is zero:

Σ F_i = 0 (1)

Σ r_i × F_i = 0 (2)

In Equations (1) and (2), F_i are the vectors of the forces acting on the grasped object, and r_i are the vectors from a given point on the object to the points of action of the forces. The directions of the contact forces arising on the hand should lie within the friction cone, so that there is no slip at the fingers. This condition is expressed by Equation (3):

F_s ≤ μ F_n (3)

where μ is the friction coefficient representing the relationship between the normal force F_n and the friction force F_s. The friction coefficient is influenced by the properties of the surface of the grasped object (e.g. material properties, surface finish), the conditions of grasping (e.g. temperature, humidity), as well as by the properties of the skin (e.g. sweating, wear and abrasion of the skin). Thus, the friction coefficient for grasping should be described by non-linear models.
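The three conditions above can be checked numerically for a given set of contact forces. The sketch below is a minimal stability test, assuming each contact is given by its position, force vector, and normal/tangential force magnitudes; the data layout is our own choice.

```python
# Check Fearing's three stable-grasp conditions for a set of contacts:
# (1) the sum of forces is zero, (2) the sum of torques is zero,
# (3) each tangential force stays inside the friction cone F_s <= mu * F_n.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def is_stable(contacts, mu, tol=1e-9):
    """contacts: list of (r, F, F_n, F_s) with contact position r and force F
    as 3-vectors, plus normal magnitude F_n and tangential magnitude F_s."""
    f_sum = [0.0, 0.0, 0.0]
    t_sum = [0.0, 0.0, 0.0]
    for r, F, F_n, F_s in contacts:
        for i in range(3):
            f_sum[i] += F[i]
        t = cross(r, F)
        for i in range(3):
            t_sum[i] += t[i]
        if F_s > mu * F_n + tol:       # outside the friction cone -> slip
            return False
    return all(abs(v) < tol for v in f_sum + t_sum)

# Two opposing fingertip forces pinching an object (gravity ignored for brevity):
contacts = [((0.0,  0.05, 0.0), (0.0, -2.0, 0.0), 2.0, 0.0),
            ((0.0, -0.05, 0.0), (0.0,  2.0, 0.0), 2.0, 0.0)]
print(is_stable(contacts, mu=0.5))  # equal opposing forces -> True
```

Removing one of the two contacts leaves a non-zero force sum, so the same function then reports the grasp as unstable.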
In an interactive grasping simulation the user of the system must be able to control the contact forces on the fingertips in an intuitive and interactive manner in order to achieve a realistic grasping scenario. Depending on the applied hand model (i.e. kinematic, dynamic, or hybrid), the system should provide appropriate means to control the position of the hand and the forces exerted by the hand. Our previous study compared different mechanisms to control the stability of grasping as well as the accuracy of positioning the fingers on the grasped object [24] for kinematic, dynamic and hybrid hand models. In all cases, the relation between the contact forces and the joint torques is of interest in order to evaluate the stability of grasping. For the relation between the contact force and the joint torque, for both the kinematic and the dynamic hand model, we adopt the model of Salisbury [25]:

τ = J^T F

where τ is the vector of torques and forces to be applied at the joints, J is the Jacobian matrix mapping the joint space (joint angles) to the Cartesian space (positions and orientations of the contact points), and F is the vector of generalized forces consisting of the normal forces, friction forces and soft-finger moments at the contact points.
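The Salisbury relation τ = JᵀF can be sketched for the simplest non-trivial case, a planar two-link finger. The link lengths, joint angles and the contact force below are illustrative assumptions, not values from the paper.

```python
# Salisbury's relation tau = J^T * F maps a fingertip contact force to joint
# torques. Minimal planar sketch for a 2-link finger (link lengths assumed).
import math

def jacobian_2link(theta1, theta2, l1, l2):
    """Jacobian of the fingertip position w.r.t. the two joint angles."""
    s1, c1 = math.sin(theta1), math.cos(theta1)
    s12, c12 = math.sin(theta1 + theta2), math.cos(theta1 + theta2)
    return [[-l1*s1 - l2*s12, -l2*s12],
            [ l1*c1 + l2*c12,  l2*c12]]

def joint_torques(J, F):
    """tau = J^T F for a planar contact force F = (Fx, Fy)."""
    return [J[0][0]*F[0] + J[1][0]*F[1],
            J[0][1]*F[0] + J[1][1]*F[1]]

# A 1 N force pressing straight down at the fingertip:
J = jacobian_2link(math.pi/4, math.pi/4, l1=0.04, l2=0.03)
tau = joint_torques(J, F=(0.0, -1.0))
print(tau)
```

Inverting the same relation (given joint torques, which fingertip forces can be realized) is what makes the Jacobian central to grasp stability evaluation.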

Controlling the magnitude of grasping forces
Although much advancement has been achieved in the development of tactile and haptic technologies over the last two decades, the application of tactile/haptic feedback to virtual grasping tasks is still limited. Haptic technologies working with mechanical principles (such as brakes, wires, pneumatic pistons) are limited in their usability for grasping simulation. Even the most advanced haptic gloves are limited in creating accurate contact on the proximal and metacarpal joints and on the palm, which limits their application to precision grasping. Power grasping tasks require proper haptic feedback not only on the fingertips, but also on the palm as well as on the proximal and metacarpal phalanges. Other types of haptic technologies work with a force-field effect, but they typically face occlusion problems: the generated force field can be obstructed by the hand itself or by other physical objects in the modelling space. In addition to these limitations, haptic devices have to cope with errors in the measurement of the hand positions and errors coming from shaky hands. It is rather challenging for users to position the hand around a virtual object and keep it stable. For this reason, control mechanisms are required which provide the user with intuitive means to control the grasping (or contact) forces in the full range of anthropometric possibilities, and which are able to compensate for unintended movements of the hand and for measurement errors of the tracking devices.
As discussed in section 3.1, stable grasping requires that the sum of the contact forces and moments acting on the grasped object be zero. In grasping virtual objects, it is practically impossible to place the hand around the virtual object in such a way that the contact and penetration of the hand on the opposite sides of the virtual object are the same. If the penetration is used directly as input to compute the contact forces, the virtual object always oscillates between the opposition spaces, since the computational simulation of grasping is done in discrete time steps and the object cannot come to rest in the hand. Typically the hand penetrates on one side of the object more than on the other, which repulses the object and forces it to move towards the other opposition space. In the following time step, the penetration will be larger on the other side of the opposition space, depending on the rate of sampling, and it pushes the object back towards its original position. We have experienced that even a small amount of oscillation largely influences not only the stability of grasping but also increases the probability of slipping. These issues can be addressed by overriding the penetration or the magnitude of the contact forces and thereby reducing the oscillating motion of the grasped object.
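The oscillation problem and the effect of overriding the penetration can be reproduced in a one-dimensional toy simulation. The sketch below is ours, not the paper's implementation: the object sits between two penalty contacts, and the "override" simply makes both sides share the mean penetration, which is one simple way of removing the asymmetry.

```python
# 1D sketch of the oscillation problem: an object pinched between two
# "fingers" whose penetrations drive opposing spring-like contact forces.
# With asymmetric hand placement the raw forces never balance; sharing the
# mean penetration (a simple override) lets the object stay at rest.

def simulate(x0, left, right, k=100.0, dt=0.01, steps=200, compensate=False):
    """Integrate the 1D motion of an object between two penalty contacts."""
    x, v = x0, 0.0
    for _ in range(steps):
        pen_l = max(0.0, x - left)     # penetration into the left finger
        pen_r = max(0.0, right - x)    # penetration into the right finger
        if compensate:                 # override: both sides share the mean
            pen_l = pen_r = 0.5 * (pen_l + pen_r)
        f = k * (pen_r - pen_l)        # net penalty force on the object
        v += f * dt                    # semi-implicit Euler step
        x += v * dt
    return x, v

# The fingers overlap the object asymmetrically (deeper on the right side):
raw  = simulate(0.0, left=-0.01, right=0.02)
comp = simulate(0.0, left=-0.01, right=0.02, compensate=True)
print(raw, comp)   # raw keeps oscillating; the compensated object is at rest
```

Without compensation the position and velocity keep oscillating around the balance point; with the override the net force is identically zero and the object never starts moving.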

IV. METHODOLOGY FOR GRASPING CONTROL
To address the above problems, we propose a new force control methodology for simulating grasping. Fig. 1 shows the reasoning model of our approach. When contact is detected between the virtual hand and the grasped object, the simulation uses our grasping force control methodology, which specifies the intended contact forces between the object and the hand as follows. As the first step, an algorithm sorts the contact points and penetrations into six clusters: one for each finger and one for the palm. Each phalanx of the hand carries one cluster of contact points, which may belong to a single contact patch or to multiple contact patches.
In order to determine the grasping posture, the distribution of the contact points on the hand is taken into account. The evaluation procedure is based on seven rules that are mapped onto the taxonomy. Section 4.2 presents the rules for defining the grasping postures. Based on the grasping posture and the distribution of the contact points, virtual fingers are defined with the goal of easing the determination of the stability of grasping. For each virtual finger the contact forces are calculated based on the penetrations of the fingers, the thumb, and the palm into the grasped object. The contact forces are then compensated based on a multi-objective optimization method, in order to keep the grasped object in a stable position in the hand. Finally, the adjusted contact forces of the virtual fingers are redistributed over the initial contact patches and the simulation is executed with the compensated input.
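The first step of the pipeline, sorting contacts into six clusters, can be sketched as follows. The body identifiers and the contact record layout are our illustrative assumptions; any mapping from hand body to finger would do.

```python
# Sketch of the clustering step: contact points are sorted into six clusters,
# one per finger plus one for the palm. Each contact record carries the id of
# the hand body it lies on (the naming scheme below is ours, not the paper's).

FINGER_OF_BODY = {                     # hand body id -> cluster, assumed layout
    "palm": "palm",
    **{f"thumb_{i}":  "thumb"  for i in range(3)},
    **{f"index_{i}":  "index"  for i in range(3)},
    **{f"middle_{i}": "middle" for i in range(3)},
    **{f"ring_{i}":   "ring"   for i in range(3)},
    **{f"little_{i}": "little" for i in range(3)},
}

def cluster_contacts(contacts):
    """contacts: list of (body_id, penetration).
    Returns a dict mapping each cluster to its list of penetrations."""
    clusters = {c: [] for c in ("thumb", "index", "middle", "ring", "little", "palm")}
    for body, pen in contacts:
        clusters[FINGER_OF_BODY[body]].append(pen)
    return clusters

contacts = [("index_2", 0.003), ("index_1", 0.001), ("thumb_2", 0.004), ("palm", 0.002)]
print(cluster_contacts(contacts))
```

The per-cluster penetration lists are exactly what the later steps (posture rules, virtual fingers) consume.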

Classification of grasping tasks
The literature presents many different taxonomies for classifying grasping tasks in manipulative interaction. We have adopted the comprehensive taxonomy of grasping from Cutkosky [5], which distinguishes 16 different prehensile and power grasping postures. We have defined a rule-based method, which takes into account the distribution of contact points on the hand. Table 1 shows the mapping of our seven rules onto the particular grasping postures. The existence of contact points on the fingertips, thumb and palm, multiple contacts on a single finger, as well as the orientation of the contacts for multiple fingers enable us to determine the grasping posture with high accuracy. At the first branching point of the taxonomy, power and precision grasps are distinguished, which can be recognized by checking whether there are contact points on the palm (RULE 1). In the group of power grasps, nonprehensile grasps are defined as single opposition spaces, expressed as hook, platform and push grasping tasks.
Prehensile grasps are further classified into prismatic and circular types of grasping. We can distinguish prismatic and circular grasps by testing whether the orientations of the contact normal forces on different fingers differ by an angle that is greater than a given value (RULE 2). This condition is valid for both precision and power grasping. RULE 3, which is defined as having an opposition space on the same finger, has been introduced to separate the power grasping of small objects from that of large objects. RULE 4 is used to distinguish the disk and sphere grips by investigating whether all parts of the hand have a contact point or not. The role of the thumb in grasping is taken into account in RULE 5, which investigates whether the thumb forms part of the same opposition space as the fingers or defines a separate opposition space. RULE 6 simply expresses whether the contact patches are located on the thumb and the index finger only. Finally, RULE 7 enumerates the number of fingers in contact.
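A few of the seven rules can be sketched as predicates over the contact clusters. Only RULES 1, 2, 6 and 7 are shown; the angle threshold and the returned labels are our illustrative choices, not values from Table 1.

```python
# Sketch of the rule-based posture recognition over the six contact clusters.
# Only a subset of the seven rules is shown; thresholds and labels are ours.
import math

def classify(clusters, normals, angle_threshold=math.radians(30)):
    """clusters: cluster -> list of penetrations; normals: finger -> unit normal."""
    has_palm = bool(clusters["palm"])                # RULE 1: power vs precision
    fingers = [f for f in ("index", "middle", "ring", "little") if clusters[f]]
    # RULE 2: circular if the finger contact normals diverge beyond a threshold
    circular = False
    if len(fingers) >= 2:
        a, b = normals[fingers[0]], normals[fingers[-1]]
        dot = sum(x * y for x, y in zip(a, b))
        circular = math.acos(max(-1.0, min(1.0, dot))) > angle_threshold
    # RULE 6: thumb + index finger only  /  RULE 7: number of fingers in contact
    pinch = bool(clusters["thumb"]) and fingers == ["index"]
    kind = "power" if has_palm else "precision"
    shape = "pinch" if pinch else ("circular" if circular else "prismatic")
    return (kind, shape, len(fingers))

clusters = {"thumb": [0.002], "index": [0.003], "middle": [], "ring": [],
            "little": [], "palm": []}
normals = {"index": (0.0, -1.0, 0.0)}
print(classify(clusters, normals))   # thumb-index contact, no palm -> pinch
```

Because the rules only inspect which clusters are populated and how the contact normals are oriented, extending the rule set (or the taxonomy behind it) only means adding further predicates of the same shape.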
The advantage of this method is its expandability, both in terms of the taxonomy of grasping and in terms of the rules. In addition, the implementation of the rules is rather simple and can be connected to any contact simulation approach. The only condition is that the hand model should represent the phalanges and the palm as separate bodies, which facilitates deriving the list of contact points separately for each part of the hand.

Defining virtual fingers
As discussed above, the penetration of the hand has to be transformed into contact forces in such a manner that the contact forces are distributed among the contact patches in a realistic way and the errors from placing the hand around the object are eliminated. To achieve this, we have defined a mapping method that takes into account anthropometric grasping force data and the grasping posture in the computation of the grasping force distribution over the contact patches. A contact patch on a virtual object, C_o, is defined as a set of connected triangles C_o = {t_1, …, t_n}, so that for ∀t_i, t_i ∈ C_o, ∃t_k, t_k ∈ C_h, for which it is true that t_i and t_k are intersecting, where C_h = {t_1, …, t_m} is a contact patch on the hand model. The intersection of a pair of triangles t_1 and t_2 is defined as a pair of 3D points I_t1t2 = {p_t1, p_t2} representing the distance of the penetrating edges in the faces of the triangles. To determine the list of colliding triangles we use the collision detection algorithm implemented in the PhysX engine. The representative penetration of two contact patches C_o and C_h is defined as the mean distance of the intersection point pairs:

P_C = (1/N) Σ |p_t1,i − p_t2,i|

where N is the number of intersecting pairs of triangles between the contact patches C_o and C_h. The geometry of the virtual hand is defined as a set of rigid bodies, each represented by a triangulated model. Taking into account its anatomy, the hand is decomposed into 15 phalanges and the palm. As a result, our virtual hand consists of 16 rigid bodies, denoted by B_1..B_16. Each rigid body can have a set of contact patches C_B = {C_h1, ..., C_hN}, which can transfer contact forces and moments to the grasped object.
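The representative penetration of a patch pair reduces to a mean over the triangle-triangle intersection results. The sketch below assumes the collision stage (PhysX in the paper) has already produced the point pairs.

```python
# Representative penetration of a contact-patch pair, read as the mean distance
# between the N intersection point pairs delivered by the triangle-triangle
# tests of the collision stage (PhysX in the paper; points here are made up).
import math

def representative_penetration(intersections):
    """intersections: list of (p1, p2) pairs of 3D points, one per
    intersecting triangle pair. Returns the mean pairwise distance."""
    n = len(intersections)
    if n == 0:
        return 0.0
    return sum(math.dist(p1, p2) for p1, p2 in intersections) / n

pairs = [((0.0, 0.0, 0.0),   (0.0, 0.0, 0.002)),
         ((0.0, 0.001, 0.0), (0.0, 0.001, 0.004))]
print(representative_penetration(pairs))   # mean of 0.002 and 0.004
```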
The relation between the penetration and the normal force, friction force and friction moment of a contact patch is given as:

F_N = f(P_C)

where f(P_C) is a mapping function that establishes a relation between the penetration and the normal force on the patch, F_F is the resultant friction force vector, and M_F is the resultant friction moment acting on the patch. This friction moment is limited by the size of the contact patch and the magnitude of the resultant friction force. By using the concept of virtual fingers we can investigate the stability of grasping for given postures. Arbib et al. suggested that each of the functions supporting grasping can be substituted by virtual fingers as a method of applying forces and moments [2]. They defined a virtual finger as an abstract representation and a functional unit of a collection of individual fingers and hand surfaces applying an oppositional force. Real fingers are grouped together into a virtual finger to apply a force or torque opposing other virtual fingers.
A state variable model for virtual fingers was defined by Iberall et al. [13] using the following variables: (a) the length of the virtual finger (VF) (from the centre of the contact surface patch to the joint where it connects to the palm), (b) the orientation of the VF relative to the palm, (c) the width of the VF (the number of real fingers mapped into the VF), (d) the orientation of the applied force, (e) the amount of force available from the VF (mean, maximum and minimum), and (f) the amount of sensory information available at the grasping surface patch. Our approach extends this modelling of the virtual finger with a friction torque. This extension is necessary to be able to address grasping situations such as the one presented in Fig. 2. As illustrated in this figure, the contact patches of the index finger and the thumb exert not only a friction force but also a friction moment in order to compensate for the rotation due to gravity.
We define a virtual finger as VF = {F_N, F_S, M_S, p_p, p_a, P}, where F_N = {min, actual, penetration, max} is the normal force that can be exerted by the virtual finger, F_S is a friction function having values in the range of the friction force, F_S = {0…μF_N}, M_S is the sum of the moments that can be exerted by the configuration of contact patches, M_S = {min, mean, max}, p_p is the resultant point of action of F_N = {penetration}, p_a is the resultant point of action of F_N = {actual}, and P is the sum of the actual penetrations of all contact patches.
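The VF tuple above translates directly into a small data structure. The field names and the example values below are our reading of the definition, not code from the paper.

```python
# A minimal data structure for the virtual finger VF = {F_N, F_S, M_S, p_p, p_a, P}.
# Field names follow our reading of the text; the numbers are illustrative.
from dataclasses import dataclass

@dataclass
class VirtualFinger:
    f_n: dict            # normal force: {"min", "actual", "penetration", "max"}
    mu: float            # friction coefficient of the contact
    m_s: dict            # exertable moment range: {"min", "mean", "max"}
    p_p: tuple           # point of action of the penetration-based normal force
    p_a: tuple           # point of action of the actual normal force
    penetration: float   # sum of the actual penetrations of all contact patches

    def friction_range(self):
        """F_S takes values in {0 ... mu * F_N} for the actual normal force."""
        return (0.0, self.mu * self.f_n["actual"])

vf = VirtualFinger(f_n={"min": 0.0, "actual": 2.0, "penetration": 2.4, "max": 40.0},
                   mu=0.5,
                   m_s={"min": 0.0, "mean": 0.01, "max": 0.02},
                   p_p=(0.0, 0.0, 0.0), p_a=(0.001, 0.0, 0.0),
                   penetration=0.004)
print(vf.friction_range())
```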

Determining stability of grasping
The resultant normal forces are determined from the penetration and the change in the location and amount of penetration of the hand into the virtual object. We distinguish three cases:

1) The penetrations increased in all contact patches belonging to a virtual finger.
2) The penetrations decreased in all contact patches belonging to a virtual finger.
3) The penetrations increased in some contact patches and decreased or remained the same in others.
The goal of distinguishing these three cases is to separate the intended and unintended grasping actions of the user. Cases 1 and 2 represent the user closing or opening his hand, which can reliably be determined as an intended action. Case 3 usually occurs when multiple fingers are represented by one virtual finger and there is a change in the distribution of forces. This change can be either intended or unintended. An example of an intended change is when the user changes the grasping posture, for instance from a two-finger pinch to a three-finger pinch. Unintended changes typically occur when the relative position of the hand changes with respect to the simulated object. In these cases the user is not changing the arrangement of the fingers, but tries to control the relative position of the hand by applying balancing and extra forces. We observed that in these situations the arrangement of contact points changes frequently: in some cases the penetration on the same finger increases for some contact patches, while it decreases for others.
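The three cases amount to a per-patch comparison of penetrations between two frames. A minimal sketch, assuming the patches are reported in the same order in both frames:

```python
# Classify the change of penetration over a virtual finger's contact patches
# into the three cases above: intended close, intended open, or mixed/ambiguous.

def penetration_case(prev, curr, eps=1e-9):
    """prev, curr: penetration per contact patch (same order in both frames).
    Returns 1, 2 or 3 according to the case numbering in the text."""
    deltas = [c - p for p, c in zip(prev, curr)]
    if all(d > eps for d in deltas):
        return 1     # all increased: the user is closing the hand
    if all(d < -eps for d in deltas):
        return 2     # all decreased: the user is opening the hand
    return 3         # mixed: intended regrasp or unintended hand motion

print(penetration_case([0.001, 0.002], [0.002, 0.003]))   # case 1
print(penetration_case([0.002, 0.003], [0.001, 0.002]))   # case 2
print(penetration_case([0.001, 0.002], [0.002, 0.002]))   # case 3
```

The tolerance eps absorbs tracker jitter so that a patch whose penetration is numerically unchanged does not flip the classification.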
The normal force on a virtual finger is determined based on the change of penetration compared to the penetration in the previous frame. Fig. 3 presents the mapping function between the gradient of penetration and the gradient of normal force. The normal force acting on a virtual finger is given by Equation (8):

F_N^n = F_N^(n-1) + f((P^n − P^(n-1)) / Δt) · Δt (8)

where F_N^n is the normal force in the current frame, F_N^(n-1) is the normal force in the previous frame, P^n is the penetration in the current frame, P^(n-1) is the penetration in the previous frame, Δt is the elapsed time between frames n and n−1, and f() is the function describing the relationship between the finger penetration and the change of the normal force. Once the normal forces are determined for the virtual fingers, they need to be adjusted in order to compensate for the measurement errors and for unintended changes in the force magnitude and point of action. A stable configuration of grasping forces is computed by a multi-objective optimization function:

min F(x) = [f_1(x), f_2(x), f_3(x), f_4(x)] (9)

In this multi-objective optimization function, f_1(x) expresses that the magnitude of the adjusted normal force of the virtual fingers should be approximately the same as the normal force computed from the penetration; f_2(x) expresses that the point of action of the computed normal forces should be as close as possible to the point of action of the normal force computed from the penetration; f_3(x) and f_4(x) express the maximization of the friction forces and friction moments in order to compensate for slipping, if necessary. With these, the equality conditions of stable grasping are defined as:

Σ F_VFi + Σ F_Cj + F_G = 0 (10)

Σ r_VFi × F_VFi + Σ r_Cj × F_Cj = 0 (11)

In Equation (10), Σ F_VFi is the sum of the forces acting on the virtual fingers, Σ F_Cj is the sum of the contact forces with other objects (e.g. if the object is placed on a table and is in the process of being lifted up), and F_G is the force of gravity. Equation (11) expresses the balance of moments around the centre of gravity of the object. The inequality conditions require that each normal force stays within the anthropometric range of the corresponding virtual finger and that the friction forces remain within the friction cone defined by Equation (3).
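The force-adjustment step can be illustrated with a heavily simplified, scalarized sketch for two opposing virtual fingers: the adjusted normal forces should deviate as little as possible from their penetration-based values (the f_1 objective) while the friction capacity covers the object's weight (the equality condition). The closed-form even split below is our illustration, not the paper's solver; the weight, μ and force values are assumed.

```python
# Scalarized sketch of the force adjustment for two opposing virtual fingers.
# Goal: total friction capacity mu * (F1 + F2) must support the object weight,
# while staying as close as possible to the penetration-based forces f_pen
# (spreading a needed increase evenly minimizes sum((F_i - f_pen_i)^2)).

def adjust_forces(f_pen, mu, weight=9.81 * 0.1):
    """f_pen: penetration-based normal forces of the two virtual fingers.
    weight: object weight in N (a 100 g object is assumed here)."""
    need = weight / mu                    # total normal force required
    have = sum(f_pen)
    if have >= need:                      # enough grip already: keep f_pen
        return list(f_pen)
    deficit = (need - have) / len(f_pen)  # spread the correction evenly
    return [f + deficit for f in f_pen]

# Penetration-based forces too weak for a slippery object (mu = 0.2):
print(adjust_forces([1.0, 1.2], mu=0.2))
```

The real optimization additionally adjusts the points of action and the friction moments (f_2 to f_4) and runs over all virtual fingers, which requires a numerical multi-objective solver rather than this closed form.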
International Journal of Virtual Reality, 2011, 10(

We have validated the grasping force control method in a user test in which 6 people placed their hand around a rectangular object of some 10 cm, and the contact forces were computed during the simulation. The contact points of the index finger and the thumb are shown in Fig. 6. The measurement results are presented using descriptive statistics of the measured data in the form of a box diagram, in which the boxes show the standard deviation and the lines show the full range of the measured values; the top box, for example, shows the average contact force on the index finger. Fig. 7 shows a sample dataset of the multi-objective optimization computed by using Equation (9). The magnitude of penetration indicated the grasping intent of the user, and was therefore also mapped onto the normal force. The multi-objective optimization resulted in a normal force with a standard deviation of 0.3 N for the index finger and the thumb. Although in this case the magnitude of the contact forces was not equal on the opposite sides, the object remained in a stable position. We have also observed that users often hold the object in a tilted position, which influences the magnitude of the forces on the index finger, as shown in Fig. 5.
