Design and development of a virtual learning environment for animal experimentation
Abstract
This research is part of the VEA (Virtual Environment for Animal experimentation) project, conducted in collaboration with the Biological Engineering Department at the Laval University Institute of Technology, France. The primary objective is to develop a virtual reality platform that reduces the need for animal use in training by providing an immersive learning environment where students can master technical procedures and gestures. In this article, we present a novel VR-oriented pedagogical model that was used to guide the design of a virtual learning situation corresponding to a use case involving the placement of a catheter in an anaesthetised rat. The development process incorporated pedagogical considerations and technological implementations, emphasising a user-centred design approach. To evaluate the usability of the VR application, a preliminary face validity study was conducted with 146 participants. The study used questionnaires to collect subjective data on user experience, interaction quality, and overall satisfaction. Results demonstrated high usability scores and positive user feedback, indicating the effectiveness of the VR application as a training tool. Key contributions of this work include a detailed blueprint for constructing VR-based educational situations and the empirical validation of the application’s usability. This research supports the potential of VR to replace traditional animal-based training methods, improving both ethical standards and educational outcomes.
Introduction
Research using the animal model for scientific purposes remains essential to protect human and animal health and the environment. Many medical, veterinary, pharmaceutical and toxicological studies, all of which have been ethically validated, have used the animal model and could not have been carried out using any other model. These studies are governed by various laws and regulations aimed at reducing the number of experiments on animals used for scientific purposes. These regulations encourage the development of alternative methods, and permit the use of animals only in the absence of other methods that can meet the objective of the study. In general, they are based on the ‘principle of the 3Rs’, which consists of Reducing, Replacing and Refining (improving) the use of animals as far as possible (Verderio et al., 2023).
The aim of the current work was to propose an alternative method based on Virtual Reality (VR) to reduce the number of animals used in animal experimentation training while allowing students to acquire a good understanding of the basic technical procedures and gestures before implementing them on a real specimen. Researchers in technology-enhanced learning (TEL) have shown great interest in VR technology due to its ability to simulate real-world conditions.
Virtual learning environments (VLEs) comprise technological infrastructures that transcend the physical constraints of conventional educational settings. These platforms facilitate student engagement with diverse cultural milieus, allow for the exploration of different environmental parameters, and provide remote access to mentorship (Ketelhut & Nelson, 2021).
Building on the concept of VLEs, VR Learning Environments (VRLEs) use VR to create dynamic teaching scenarios without real-world constraints, such as risk or uncertainty. Thus, a VRLE immerses learners in a virtual environment for educational purposes.
VRLEs have been used in many areas, such as science education (Shudayfat & Alsalhi, 2023), surgery training (Mao et al., 2021; Bielsa, 2021), decision making for intervention in high-risk sites (McIntosh, 2022), laboratory workflow training (Wermann & Pohn, 2022), maintenance procedures (Guo et al., 2020), and understanding complex scientific concepts (Zhang & Bowman, 2022). These examples illustrate that learning and teaching within a virtual environment is a promising field that is becoming increasingly democratised. Virtual technologies can promote learning because users are the main actors, experimenting and practising with virtual objects. It is therefore necessary to reflect both on the didactic situations themselves and on the concept of a scenario in order to ensure pedagogical efficiency.
In our research, we are principally interested in the design and operationalisation of learning situations in a VR context. Before implementing this type of learning activity, the teacher must pay particular attention to the pedagogical treatment (Wagner & Liu, 2021), which is based on a description of the pedagogical scenario (Bakki et al., 2019; Pernin & Lejeune, 2006; Tadjine et al., 2015). Anticipating the interactions between the elements of a pedagogical scenario (learning activities, resources and tools) allows the teacher to ensure that the scenario elicits the intended mental and cognitive processes in learners (Marougkas et al., 2023).
In this paper, we present research conducted as part of the VEA (Virtual Environment for Animal experimentation) project, conducted in collaboration with the Biological Engineering Department at the Laval University Institute of Technology, France. The aim of this project is to develop a VR-based solution that will allow the learner to master the correct gestures while respecting the ethical rules (the ‘Rule of the 3Rs’).
Scientific Background
Digital technologies are increasingly recognised as important and effective tools in learning and teaching, offering more powerful and sustainable learning experiences than traditional teaching methods. They can be divided into different categories, of which VR is particularly important. The potential benefits of VR for pedagogical activities have been explored in recent years. In particular, we can highlight the potential that VR Learning Environments (VRLE) offer teachers to create original and dynamic pedagogical situations.
Many studies have explored the use of virtual technologies in learning environments (Hamilton et al., 2020; Martín-Gutiérrez et al., 2017). Previously, these experiments were expensive, and were limited to specific sectors such as the aerospace or nuclear energy industries. However, the use of VR is more feasible and affordable than ever for educational institutions and learners (Martín-Gutiérrez et al., 2017).
In this section, we give an overview of the potential of VR for pedagogical purposes, along with an exploration of the mechanisms of interaction within virtual environments.
The promise of VR in education
Maximising the full potential of VR in education requires strong pedagogical design. This involves carefully tailoring and integrating VR experiences into the curriculum, ensuring that they are aligned with the learning objectives, and providing appropriate guidance to learners through scaffolding (Dede, 2009). The design of VR-based educational experiences is guided by pedagogical principles that prioritise learner-centred, inquiry-based, and problem-solving approaches (Johnston et al., 2017). VR experiences are designed to promote active learning, critical thinking, and collaboration in accordance with social constructivist principles (Vygotsky, 1978). Research has shown that VR can increase learner engagement, motivation, and knowledge retention (Krajčovič et al., 2021; Yu & Xu, 2022). VR encourages experiential learning by providing an immersive and interactive environment. Learners can manipulate virtual objects, explore simulated environments and engage in hands-on exercises that closely mirror real-life situations (Gomez, 2020). Participating in these interactive exercises promotes deeper cognitive processing and internalisation of information (Johnson-Glenberg, 2018). VR’s ability to replicate a wide range of situations and scenarios supports the iterative process of experiencing, reflecting, conceptualising, and experimenting (Mikropoulos & Natsis, 2011).

In line with the concept of contextual learning, VR-based education highlights the relevance of learning in context rather than in isolation (Setyowati et al., 2023). VR allows students to participate in relevant, real-world activities, bridging the gap between theoretical understanding and practical application. This technology is an effective tool for authentic learning since it can simulate real-life learning experiences and professional environments (Sumardani & Lin, 2023). This approach improves learners' critical thinking skills and their ability to apply knowledge in real-life situations (Herrington et al., 2013).
The use of VR in education is also informed by the principles of scenario-based learning, which involves the use of realistic scenarios to facilitate learning (Ke & Xu, 2020). VR scenarios provide a safe and regulated environment for students to investigate, test, and learn from their mistakes, fostering a deeper understanding and application of knowledge (Kugurakova et al., 2023; McIntosh, 2022).
Design and implementation of virtual pedagogical activities in VR learning environments
In our work, we are particularly interested in the interaction (virtual action) between humans and the virtual world (virtual objects). In practice, a pedagogical activity is a succession of virtual actions that a user must accomplish in a VRLE. Thus, to design virtual pedagogical activities, it is sufficient to describe the virtual actions associated with each activity. According to Mellet-d'Huart (2021), the key to using VR in education lies in the users' ability to anticipate, choose, and execute virtual actions within the VR learning environment.
For Patel et al. (2006), the aim of immersion and interaction is to promote the learning of gestures and behaviours through situational awareness and the transfer of skills from the virtual to the real world.
These notions are rarely perfectly realisable in a given system, and mostly represent an objective to be approached as far as possible. Nevertheless, they must be achieved, even modestly, in an application based on VR technology (Fuchs, 2011). In the VR field, the identification, specification and design of virtual actions is highly complex, as it is necessary to define how interactions are realised as well as to specify the VR devices that will support these actions.
Fuchs (2017) categorises immersion and interaction in virtual environments into three levels: sensorimotor, cognitive, and functional. Sensorimotor interaction involves physical engagement and technical aspects. Cognitive interaction includes mental processes such as interface use and mental representations, integrated with virtual behavioural primitives (VBPs) (Coquillart et al., 2011; Richir et al., 2015). Functional immersion focuses on task-specific engagement, employing VBPs within a pedagogical framework to enhance user interaction and learning (Richir et al., 2015).
Fuchs (2017) provides an in-depth exploration of VBPs, categorising them into four distinct types of actions that define user interactions within virtual environments:
- Observing the virtual world: Involves acquiring a list of objects within the user's visual area, their distances, and orientations.
- Moving within the virtual world: Defines zones and tracks user movements and speed, whether transitional or rotational.
- Acting within the virtual world: Encompasses interception actions, selections (target acquisition tasks; Zhai et al., 1994), and manipulations (modifying object properties like position, orientation, colour, scale, and texture; Ouramdane et al., 2009).
- Communicating within the virtual world: Detects user intentions and presents information, enabling interaction with other users, virtual characters, or the application via voice commands or 2D/3D menus.
VBPs are refined into sub-categories to match specific VR actions, each associated with recommended VR devices. For instance, actions within the virtual world include selecting and manipulating using VR gloves, controllers, or headsets. Unlike training simulators, VRLEs rely on virtual devices, allowing some detachment from strict realism in order to enhance learning. When designing a VRLE, teachers select the VR actions, the presentation of information and the devices so as to optimise learning; VBPs realised through virtual devices such as VR headsets create sensory stimuli and support interaction similar to that in real environments. VR-oriented educational activities, as defined by Marion (2010), consist of teacher-described tasks tailored to pedagogical goals. Following Fuchs (2018), these activities are grouped into the four basic behaviours (VBPs) and divided into action sequences (Mahdi et al., 2019a). For example, acting in the virtual world may involve actions such as 'distort', 'move' and 'cut'.
Educational context: VR simulations for animal experimentation learning
The principles of the 3Rs (Replacement, Reduction, and Refinement) in animal research have driven the development of VR simulations as a promising alternative to traditional animal experimentation education (Ormandy et al., 2022). VR-based simulations provide an immersive and realistic environment where students can practice various experimental procedures without causing harm to live animals, effectively replacing and reducing the use of animals in training (Lemos et al., 2022). These simulations also provide opportunities for refinement by enabling learners to develop essential skills before progressing to real-world settings. In fact, VR technology provides a simulated environment where learners can engage in realistic and interactive experiences that mimic actual animal experimentation. This approach offers several educational benefits:
- Ethical Considerations: VR eliminates the need for live animals in educational settings, addressing ethical concerns associated with animal use in experiments and fulfilling the replacement aspect of the 3Rs (Manciocco et al., 2009).
- Enhanced Learning: Interactive VR simulations facilitate a deeper understanding of animal anatomy, physiology, and behaviour through visual and experiential learning, thereby contributing to reduction by minimising the number of animals required for teaching purposes (Tang et al., 2020).
- Reproducibility: VR allows for the repetition of experiments without additional costs or ethical issues, promoting mastery of techniques and concepts while adhering to the principles of refinement by improving the quality of education without compromising animal welfare (Husain et al., 2023).
- Safety: Students can learn and practice potentially hazardous procedures in a safe and controlled virtual environment, ensuring both student safety and the welfare of animals (Debose, 2020).
Several VR-based tools have been developed and evaluated in scientific studies for their effectiveness in teaching animal experimentation while respecting the principles of the 3Rs. These tools can be categorised into four main types: Animal handling and behaviour simulation tools, surgical simulation tools, virtual laboratory environments, and physiological simulation tools.
- Animal Handling and Behaviour Simulation Tools: The FreemoVR system simulates the behaviour of freely moving animals, providing students with the opportunity to conduct experiments, manipulate variables, and observe outcomes without using live animals (Stowers et al., 2017). VR-based behaviour simulation has been shown to be effective in teaching concepts related to animal behaviour research, making it a valuable tool for reducing the need for live animals in behavioural studies (Kaupert et al., 2017). Similarly, RatCAVE, developed by Grosso & Sirota (2019), facilitates the expression of authentic and unrestricted behaviour in rats by delivering accurate, low-latency visual stimuli. This system also provides versatile control over visual stimuli in freely moving animals.
- Surgical Simulation Tools: Hunt et al. (2020) propose a VR-based application designed to enhance the educational experience of veterinary students by providing a minimally interactive platform for stereoscopic viewing of surgical videos. This application aims to help students prepare for their first canine sterilisation surgery. Additionally, the Virtual Cow Dissection tool developed by Lili & Norazura (2012) uses realistic 3D modelling and collision detection to simulate the dissection of a cow. This system incorporates behavioural deformation modelling to accurately represent the anatomy and physical properties of the cow's neck and internal structures, improving the understanding of bovine anatomy without the need for real specimens.
- Virtual Laboratory Environments: The Virtual Experimentation System developed by Wu (2009) is based on the Virtual Reality Modelling Language (VRML) and enables the creation of interactive learning environments for the biological sciences. It provides immersive scenarios for various educational needs and supports remote collaboration among learners, enhancing the understanding of biological concepts without the need for live animals. Froguts is specifically designed for virtual dissection, allowing students to dissect frogs and other animals in a virtual laboratory. The software provides an interactive and immersive dissection experience, replacing traditional dissection kits and fulfilling the replacement aspect of the 3Rs (Apat, 2019). ViSi, developed by Tang et al. (2020), offers an immersive experience by allowing students to explore high-quality 3D animal specimens. The virtual dissection software promotes an in-depth understanding of anatomy without the need for real samples, aligning with the principle of Replacement.
- Physiological Simulation Tools: Computerised Dissection Simulators developed by Abdullah (2010) and Predavec (2001) provide students with the opportunity to virtually manipulate animal specimens, practice dissection techniques, and explore anatomy with real-time deformation patterns and haptic feedback features. These tools reduce the need for live animals while maintaining high educational standards. The VR-based Dissection Simulator introduced by Vafai & Payandeh (2010) is distinguished by the use of haptic feedback, allowing users to simulate the dissection of an animal (e.g., a frog) using a force-feedback 3D haptic device. The tactile interaction enhances the learning experience and improves the understanding of anatomical structures without the use of live animals.
As part of this research work, we developed a VRLE for animal experimentation (a small mammal model). The main objective of this work was to offer teachers and students in the Biological Engineering Department of the Laval University Institute of Technology, France, an alternative VR-based method of learning the correct gestures while respecting the ethical rules (the 3Rs). We worked in particular on a pedagogical situation that consists of placing a catheter in a canal, vein or artery of an anaesthetised animal in several steps: (a) anaesthesia of the animal; (b) fixing the animal; (c) cutting (incising) the skin from the shoulder girdle to the base of the chin; (d) placing a tracheal catheter; etc. Before presenting the functionalities of the Virtual Environment for Animal experimentation (VEA), we first describe our proposed virtual action model, and illustrate it with the example of a virtual cutting (incising) action.
A Virtual-Reality-Oriented Pedagogical Action Model
This section presents our proposed Virtual-Reality-Oriented Pedagogical Action Model, highlighting its broad applicability across various educational scenarios. The model empowers educators to design immersive and interactive virtual learning environments tailored to specific pedagogical objectives. By providing a structured approach for mapping educational activities to virtual actions within a VR system, it enhances the learning experience through precise and engaging simulations. To describe a pedagogical situation, educators identify the virtual pedagogical activities and the corresponding aims, then define the virtual actions learners must perform. A ‘virtual action’ refers to an interaction within the VR Learning Environment (VRLE) and its cause-and-effect relationships. In a VRLE, a virtual action connects the user to the pedagogical situation (Mellet-d’Huart, 2021). Our proposed action model (Figure 1) coordinates the educational activities performed by learners with actions in VR systems, and serves as a reference both for teachers' conception of educational activities and for the scripted development of scenario-based events within the virtual environment.
Each action involves specific elements, such as the object it acts upon, positional changes, and the instrument used by the learner. These elements are described by semantic labels called thematic roles. Key roles include:
- INSTRUMENT: This is the tool by which the action is accomplished.
- OBJECT (also called the subject): This is the entity which experiences the effects of an action, or which is moved in a space by an action.
- SOURCE: This is the entity from which another entity comes or moves away.
- POSITION: This is a spatial reference indicating the position of a virtual object or of a user of the virtual environment.
Thus, each action is defined by a thematic grid of these roles, e.g., Put: [OBJECT, POSITION]; Cut: [OBJECT, INSTRUMENT].
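To make this concrete, the following minimal sketch (in Python; the names `THEMATIC_GRIDS` and `validate_action` are our own illustrations, not part of the VEA implementation) shows how a thematic grid could be used to check that an action is well-formed:

```python
# Illustrative sketch: thematic grids as dictionaries mapping an action
# name to the thematic roles it requires.
THEMATIC_GRIDS = {
    "Put": ["OBJECT", "POSITION"],
    "Cut": ["OBJECT", "INSTRUMENT"],
}

def validate_action(action: str, arguments: dict) -> bool:
    """Check that every role required by the action's grid is filled."""
    required = THEMATIC_GRIDS.get(action, [])
    return all(role in arguments for role in required)

# A 'Cut' action is well-formed only once both of its roles are bound.
assert validate_action("Cut", {"OBJECT": "rat", "INSTRUMENT": "scissors"})
assert not validate_action("Put", {"OBJECT": "syringe"})  # POSITION missing
```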
VR Action Concept
The "VRAction" concept is the main entity of our proposal, characterised by a learner's pedagogical objective, a description of the virtual action, and steps to be followed by the learner to realise the action correctly. We group virtual actions into four categories, corresponding to the four types of Virtual-Based Pedagogies (VBPs) identified in Section 2. The core entity of our model is the VR action ("VRAction"), characterised by:
- Pedagogical Objective: The goal the learner aims to achieve.
- Description: Detailed steps to perform the action.
- Sequence: Some actions must follow a predefined order.
To further describe these actions, we use checkpoints, a concept from prior research (Djadja et al., 2019; Gil et al., 2014; Mahdi et al., 2019; Oubahssi & Mahdi, 2021). Checkpoints are:
- a) StartCheckpoint, which marks the start of the action;
- b) ProgressCheckpoint, which tracks the progress of the action;
- c) EndCheckpoint, which marks the end of the action; and
- d) FailCheckpoint, which reflects an unsuccessful action by the learner.
Checkpoints can be cubic or spherical, depending on instructional needs. VR actions also include parameters with specific values. For instance, in the activity "anaesthesia of the animal via intraperitoneal injection," the parameter for the action "injecting a liquid" might be the volume of the liquid in the "syringe" object. These parameters can change the properties of each VR object; for example, exceeding the maximum volume can cause the syringe to spill its contents, thus acquiring the 'container' property.
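As an illustration, the sketch below shows one possible encoding of checkpoint types and action parameters; the class names and the spill threshold are hypothetical and are not taken from the VEA code base:

```python
from dataclasses import dataclass
from enum import Enum

class CheckpointType(Enum):
    START = "start"        # marks the start of the action
    PROGRESS = "progress"  # tracks the progress of the action
    END = "end"            # marks the end of the action
    FAIL = "fail"          # reflects an unsuccessful action

@dataclass
class Checkpoint:
    kind: CheckpointType
    shape: str = "cube"    # cubic or spherical, per instructional needs

@dataclass
class Parameter:
    name: str
    value: float
    maximum: float

    def exceeded(self) -> bool:
        return self.value > self.maximum

# The intraperitoneal-injection example: exceeding the maximum volume
# changes the behaviour of the syringe object (here, it spills).
volume = Parameter(name="liquid_volume", value=2.5, maximum=2.0)
if volume.exceeded():
    print("syringe spills its contents: 'container' property acquired")
```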
Figure 1: The Virtual-Reality-Oriented Pedagogical Action Model.
Figure 1 illustrates the VR action structure. VR actions consist of checkpoints and parameters linked by aggregation relationships. We also define other aggregation relationships between the “VRObject” class and the “Animation” and “Property” classes. The “Parameter” class uses the “Property” class and includes animations from the “Animation” class. The reflexive association of the “VRAction” class indicates hierarchical relationships between actions (sketched in code after the list below). Specifically:
- Action2 may have a parent action Action1, meaning Action2 will only be triggered when Action1 is validated.
- Similarly, Action3 will only be triggered when both Action2 and Action1 are validated.
- If Action1 does not have a parent action, it does not depend on trigger conditions and can be executed at any time during the educational activity.
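A minimal sketch of this triggering rule, under the assumption that validation is a simple Boolean flag per action (the class and field names are ours):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VRAction:
    name: str
    parent: Optional["VRAction"] = None  # reflexive association of the model
    validated: bool = False

    def can_trigger(self) -> bool:
        """Triggerable only once the whole parent chain has been validated."""
        if self.parent is None:
            return True
        return self.parent.validated and self.parent.can_trigger()

action1 = VRAction("Action1")                 # no parent: always available
action2 = VRAction("Action2", parent=action1)
action3 = VRAction("Action3", parent=action2)

assert action1.can_trigger()
assert not action3.can_trigger()              # Action1/Action2 not yet validated
action1.validated = action2.validated = True
assert action3.can_trigger()
```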
VR Objects and Properties
The “VRObject” class describes two main types of objects within our framework: technical (raw) objects and pedagogical objects. Each type has distinct characteristics and roles, as detailed below.
- Raw Objects (Technical VR Objects): Also referred to as 3D objects or graphic objects, these are the entities through which knowledge is acquired.
- Pedagogical Objects: These are teaching resources or learning objects (Pernin & Lejeune, 2006), represented as reusable educational entities that can be aggregated, stored, searched for, and reused in different learning environments (Wiley, 2002).
The development of these objects requires specific expertise. Each VR-oriented pedagogical object is created by a technical team and presented as a raw object with pedagogical and technical properties. Properties store values associated with these objects. These properties include:
- Common Properties: Shared by all objects (e.g., position, shape, colour).
- Specific Properties: Unique to a particular object or learning field (e.g., concentration, volume, and vaporisation temperature for an H2O-based solution).
An example of a VR object with both kinds of properties is an H2O-based solution. Its common properties include position (where the solution is placed in the virtual laboratory), shape (usually that of its container, such as a beaker or flask), and colour (clear or blue, depending on the context). Its specific properties include concentration (the amount of solute dissolved in the solvent), volume (the quantity of solution present), and vaporisation temperature (the temperature at which the solution transitions from liquid to gas).
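The following sketch shows how such an object might be encoded, separating common from specific properties; all names and values are illustrative rather than taken from the VEA implementation:

```python
from dataclasses import dataclass, field

@dataclass
class VRObject:
    name: str
    # Common properties, shared by all objects in the environment.
    position: tuple = (0.0, 0.0, 0.0)
    shape: str = "beaker"
    colour: str = "clear"
    # Specific properties, unique to an object or learning field.
    specific: dict = field(default_factory=dict)

solution = VRObject(
    name="H2O-based solution",
    colour="blue",
    specific={
        "concentration_mol_per_L": 0.1,
        "volume_mL": 250,
        "vaporisation_temperature_C": 100,
    },
)
```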
VR Objects Dynamic Behaviour and Animations
The dynamic behaviour of virtual reality objects within our pedagogical framework is facilitated by the “Animation” class. This class is pivotal in enabling the representation and interaction of raw objects in a virtual environment by defining the parameters that govern their behaviour under various conditions. The animations specify the values of an object's properties in response to the actions performed either directly on the object or within the surrounding virtual environment. To accurately simulate real-world physics and interactions, each raw object is assigned specific technical properties. These properties include, but are not limited to:
- Weight: The gravitational force acting on the object.
- Position: The spatial coordinates of the object within the virtual environment.
- Shape: The geometrical form of the object, which may influence its behaviour during interactions.
- Colour: The visual attribute defining the object’s appearance.
For example, a cube representing a raw object must have the technical properties of weight and position. If the cube is subjected to a simulated drop, the animation will show the cube falling and possibly deforming on impact, demonstrating physical principles such as gravity and material deformation. In addition to technical properties, raw objects can be given pedagogical properties to support learning objectives. These properties are aligned with specific pedagogical objectives and contextual applications. For example:
- Gravitation: A property used in educational scenarios to teach concepts related to gravity.
- Elasticity: Demonstrates material properties relevant to physics and engineering courses.
By incorporating educational properties, raw objects can serve dual purposes: illustrating scientific principles and enhancing pedagogical effectiveness through interactive simulations. To implement and manage animations, each VR object is associated with a JSON-based model that defines the parameters and behaviours of the object. The JSON model includes:
- Identifier: A unique identifier for the animation.
- Name: A descriptive name for the animation.
- Description: A detailed explanation of the animation’s purpose and behaviour.
- Compatible Actions: A list of actions that can trigger the animation.
For example, in a ‘cut’ action scenario, the corresponding animation will visually depict the opening of the skin. Such animations are predefined within the JSON model (Figure 2) and are flagged with checkpoints to ensure accurate and timely activation (Figure 3).
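As a purely illustrative example, an instance of such a JSON model for the skin-opening animation might look as follows; the field names mirror the four elements listed above, while the identifier and wording are invented:

```python
import json

# Hypothetical instance of the JSON-based animation model.
animation_json = """
{
  "identifier": "anim-skin-opening-1",
  "name": "Opening of the skin 1",
  "description": "Depicts the skin opening along the incision line",
  "compatible_actions": ["cut"]
}
"""

animation = json.loads(animation_json)
assert "cut" in animation["compatible_actions"]  # the 'cut' action may trigger it
```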
Animation Triggers, Rules and Conditions
Checkpoints play a critical role in the animation framework. They serve as reference points that trigger specific animations when certain conditions are met. As indicated before, when a learner interacts with the virtual environment and reaches a checkpoint, the corresponding animation is activated. The “Animation” class encompasses a set of rules and conditions that govern the dynamic behaviour of the raw objects. Each animation is associated with:
- Trigger Variable: A condition or event that initiates the animation.
- Animation Type: The specific category or nature of the animation (e.g., movement, transformation).
- Boolean Condition: A logical statement determining whether the animation continues or stops when a checkpoint is passed.
These rules ensure that animations are contextually relevant and accurately reflect the intended pedagogical outcomes. For example, a falling cube animation might be triggered by the learner’s action of releasing the cube, with the animation type specifying a fall and deformation sequence, and a Boolean condition stopping the animation when the cube hits the ground. The integration of animations into educational scenarios is designed to enhance the learning experience by providing interactive and visually engaging simulations. By leveraging the detailed properties and behaviours defined in the JSON models, educators can create dynamic and responsive virtual environments that support a wide range of pedagogical objectives.
Animators and Animation Control
Animators are crucial components in the control and execution of animations within the virtual environment. Each animation is managed by an animator, which dictates the following elements:
- Specific Object: The VR object associated with the animation.
- Trigger Variable: The variable or event that initiates the animation.
- Animation Type: The nature of the animation, such as movement, transformation, or interaction.
- Boolean Condition: A condition that determines whether the animation continues or stops when the learner passes a checkpoint.
The animator ensures that animations are executed smoothly and in accordance with the predefined rules. For example, an animator controlling the ‘cut’ action would manage the sequence of animations depicting the skin opening, ensuring that the animation starts, progresses, and ends correctly based on the learner's interactions.
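The following minimal sketch illustrates this control logic; the `Animator` class and its `update` method are our own simplification, not the actual VEA animator:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Animator:
    target_object: str                  # the VR object the animation acts on
    trigger_variable: str               # event that initiates the animation
    animation_type: str                 # e.g. movement, transformation
    stop_condition: Callable[[], bool]  # Boolean condition checked at checkpoints

    def update(self, event: str, at_checkpoint: bool) -> str:
        if event == self.trigger_variable:
            return "start"
        if at_checkpoint and self.stop_condition():
            return "stop"
        return "continue"

# The 'cut' example: the skin-opening animation starts when the scissors
# enter the start checkpoint and stops once the end checkpoint is validated.
cut_animator = Animator("rat_skin", "scissors_in_start_checkpoint",
                        "transformation", stop_condition=lambda: True)
assert cut_animator.update("scissors_in_start_checkpoint", False) == "start"
assert cut_animator.update("frame_tick", True) == "stop"
```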
Virtual Environment for Animal Experimentation
Our environment was developed using a user-centred design (UCD) approach, which prioritises the needs and goals of the end users (Salinas et al., 2020). We worked closely with biology teachers to understand their requirements for teaching animal experimentation. They expressed their needs and explained the required learning situations and experimental protocols, which informed our design decisions.
Throughout the development process, teachers tested the system periodically, providing valuable feedback on the system's accuracy and realism in mimicking real-world laboratory protocols and procedures. They shared their insights on whether the system accurately replicated the experimental protocol and identified areas that required modification. This collaborative approach ensured that our VR environment was designed with the needs of trainers and learners in mind. In addition, students participated in some of the testing sessions, providing further insight into the usability and effectiveness of the system. Their input helped to refine the user interface and interaction mechanisms to ensure an engaging and effective learning experience. Furthermore, the UCD approach was not limited to the design of the platform but extended to the development of our virtual pedagogical action model. By collaborating with educators, we ensured that the virtual pedagogical activities and corresponding pedagogical aims were aligned with the actual needs of the teaching scenarios.
To construct the model, it was necessary to develop a general abstraction of virtual actions that could be applied to various educational contexts. Although the teachers were not VR experts, their feedback played a significant role in shaping this process. Their insights were crucial in ensuring that the theoretical model was both pedagogically sound and intuitive. Subsequently, we converted their requirements into precise technical specifications for the model, ensuring that each component of the model aligns with various pedagogical objectives.
Example of a Pedagogical Activity: Animal Experiment Context
In this section, we describe an example of a virtual pedagogical situation involving the placement of a catheter in a canal in an anaesthetised animal (a rat). The learner is instructed to perform the ‘cut’ action, the objective of which is to incise the animal’s skin from the shoulder girdle to the base of the chin. As illustrated in Table 1, two virtual objects are required: the scissors instrument (used to realise the cutting action) and the rat object (on which the action is performed). The action does not require any parameters to be specified, which justifies the null value given to the ‘parameters’ element. However, the ‘cut’ action requires checkpoints to delimit the cutting areas, as illustrated in Figure 4.
In this example, the checkpoint type is cubic. If the learner successfully completes the cut action, an animation is triggered to show that the action has been completed successfully. One or more ‘Fail’ checkpoints are specified for actions that are not performed correctly. Figure 4 illustrates the checkpoints used for the ‘cut’ action.
| Element | Detail |
|---|---|
| Name | Cut |
| Objective | Know how to open the animal's skin |
| Description | The learner must cut the animal's skin |
| Instruction | Start cutting from the shoulder girdle to the base of the chin |
| Objects | Object 1: Rat; Object 2: Scissors |
| Checkpoint 1 | Form: Cube; Type: Start; Animation 1: Opening of the skin 1 |
| Checkpoint 2 | Form: Cube; Type: Progress; Animation 1: Opening/closing the scissors; Animation 2: Blood flow |
| Checkpoint 3 | Form: Cube; Type: End; Animation 1: Opening of the skin 2 |
| Checkpoint 4 | Form: Cube; Type: Failure; Animation 1: Haemorrhage; Animation 2: Cardiac arrest |
| Checkpoint 5 | Form: Cube; Type: Failure; Animation 1: Haemorrhage; Animation 2: Cardiac arrest |
| Parameters | Null |

Table 1: Description of the ‘cut’ action
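To connect the table with the action model, its content could be serialised along the following lines (a hypothetical encoding; the key names mirror the table rows and are not the actual VEA data format):

```python
# Hypothetical encoding of Table 1 as plain Python data.
cut_action = {
    "name": "Cut",
    "objective": "Know how to open the animal's skin",
    "instruction": "Start cutting from the shoulder girdle to the base of the chin",
    "objects": ["Rat", "Scissors"],
    "parameters": None,  # the 'cut' action takes no parameters
    "checkpoints": [
        {"form": "cube", "type": "start",    "animations": ["Opening of the skin 1"]},
        {"form": "cube", "type": "progress", "animations": ["Opening/closing the scissors", "Blood flow"]},
        {"form": "cube", "type": "end",      "animations": ["Opening of the skin 2"]},
        {"form": "cube", "type": "fail",     "animations": ["Haemorrhage", "Cardiac arrest"]},
        {"form": "cube", "type": "fail",     "animations": ["Haemorrhage", "Cardiac arrest"]},
    ],
}
```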
VEA: Functionalities
To design, develop and evaluate our proposed solution (the VEA application), we used an approach based on an agile, multi-step iterative process. The proposed VRLE provides learners with functionalities for performing a variety of pedagogical activities in a virtual world as part of their curriculum and instruction (both in the laboratory and in a real context). It also allows the teacher to configure specific pedagogical activities depending on the learners’ profiles. The main features of the VEA application are described below.
- Connecting: This feature allows the user (teacher or learner) to connect to the VR environment. To do this, the user must use a VR headset with a VR controller, which allows them to manipulate the environment and its objects.
- Moving within the virtual laboratory: Once immersed in the VR environment, the user is able to move freely, and can also use the teleportation function to navigate between different locations.
- Visualising and manipulating virtual objects: With the two VR controllers, the learner can manipulate the different laboratory objects, and in particular the tools that are required to realise the practical work on animal experimentation.
- Using the practical worksheet: A worksheet detailing the sequence of activities to be performed is provided to the learners. The learner can refer to it at any point to understand the work expected.
- Time spent: During the design phase, the teacher can associate a specific duration with each pedagogical activity or virtual action. This feature allows the learner to be reminded of the time spent on each activity and the time remaining out of the allocated time.
- Access to educational resources: The teacher can provide pedagogical material to the learners in the form of short videos. This feature may assist the learner in completing the practical assignments.
- Validation of realised actions: This functionality indicates (in real time) whether the action performed by the learner is correct or not, based on the concept of checkpoints described in the previous sections (see the sketch below).
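A minimal sketch of this checkpoint-based validation, assuming a cubic checkpoint represented by a centre and a half-size (all positions and sizes are illustrative):

```python
def inside_cube(point, centre, half_size):
    """True if a tracked 3D point lies within a cubic checkpoint volume."""
    return all(abs(p - c) <= half_size for p, c in zip(point, centre))

scissors_tip = (0.12, 0.95, 0.30)              # illustrative tracked position
start_checkpoint = ((0.10, 0.95, 0.30), 0.05)  # (centre, half-size) in metres

if inside_cube(scissors_tip, *start_checkpoint):
    print("Start checkpoint reached: the 'cut' action has begun correctly")
```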
Figure 5 illustrates the main interface of VEA. The overall features of VEA can be previewed in the demo video available at the link below (Note 2).
VEA: Experiment and Evaluation
In this section, we first provide a description of the experiment; we then explain the methodology used to evaluate the VR-based environment, and finally present and discuss the results.
Experimental Design
This study employed a single experimental condition, which was tested by all 147 participants. The experimental condition involved a usability study where participants used the VR-based environment (VEA) to complete a series of tasks. The study was conducted as part of a Learning and Assessment Situation (LAS) with the objectives of:
- Understanding the environment of an animal experimentation room.
- Understanding the live anaesthetised rat model and the equipment necessary for the implementation of an experimental procedure.
- Being trained in the performance of the technical gestures of the tracheostomy by following the operating protocol.
Methodology
A usability study was carried out to obtain user feedback on the VEA system and to ensure that the proposed functionalities were adequate for users’ needs. The experiment was conducted with 147 first-year Biological Engineering students at the Laval University Institute of Technology, France, who were supervised by one tutor. Students were chosen as the main participants in this trial since they are the ultimate end users of our system, and their feedback and opinions about VEA are essential. At the methodological level, we defined evaluation criteria based on the definitions of utility and usability proposed by Issa & Isaías (2022). According to these authors, utility refers to the objectives an artefact enables a user to achieve in specific situations, while usability pertains to the ease of using a device, including its interface, navigation, and coherence with the objective. In our case, we consider that VEA:
- is useful if it (1) facilitates the implementation of the experimental procedures of physiological studies; (2) provides all the elements necessary to acquire the basic experimental gestures on the laboratory animal; and (3) allows the students to gain a better grasp of the (basic) technical gestures before applying them to a real model.
- is usable if it (1) allows for simple and easy navigation; (2) provides a movement and interaction interface (in a virtual environment) with all the functionalities needed to simplify its use; and (3) ensures the coherence of the production process of the various virtual actions.
Protocol and Experimental Environment
As mentioned above, this experiment was conducted as part of a LAS. Our evaluation process was designed to take place in three steps, as follows:
- Step 1 - Preparation: In this step, the teacher described to the students the objectives of their learning situation (individual tracheotomy training based on VR) and the different steps involved. She also presented the materials and equipment used during this experiment (two computers, two VR headsets with controllers, and the SteamVR software). At this stage, the students also studied the experimental protocol (by reading handouts and viewing a video). They also received a short training session (10 min) introducing them to the manipulation of the VR tools.
- Step 2 - Individual tracheotomy training based on VR: The aim of this step was for the students to carry out practical work according to the instructions provided in the preparation phase. The objective of the training in animal experimentation is to sensitise students to this field and allow for the acquisition of knowledge (regulations, animal welfare, alternative methods, etc.) needed to implement experimental procedures as part of physiological and pharmacological studies in a professional environment.
- Step 3 - Data collection and results: In this step, we asked students to complete an online questionnaire comprising 24 questions (see Note 3). These were made up of 22 closed-ended questions, rated using a six-point Likert scale (totally disagree, disagree, somewhat disagree, somewhat agree, agree, totally agree), and two open-ended questions. The first set of closed-ended questions collected data on the participants and their level of expertise in VR. The second set investigated whether the VEA tool allowed participants to correctly perform their virtual practical work (i.e., to move easily in the 3D environment, interact easily with the different virtual educational objects, and correctly perform the different virtual actions). The third set related to the measurement of the usability of the VEA tool; for this part, we used the System Usability Scale (SUS) questionnaire (Bangor et al., 2008). In the open-ended questions, the students were asked to give positive or negative comments about the tool and the various virtual actions carried out, and to report any bugs in the software. This feedback also allowed the teacher to carry out a formative evaluation of the critical learning that took place during the virtual practical work.
Experimental Results
As mentioned above, the aim of the experiment was to evaluate the usability and utility of our VEA tool. For the usability measurements, we applied the SUS questionnaire, a popular and effective tool for assessing the usability of various systems (Bangor et al., 2008). A total of 146 students were involved in this experiment, which is an adequate sample size for detecting any major usability problems (Virzi, 1992). The SUS is based on 10 closed-ended questions, each rated on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The questions are phrased as positive statements (odd-numbered questions) and negative statements (even-numbered questions). Users may misunderstand the negative statements of the questionnaire, which can distort the calculation of the SUS score. Pre-processing of participant responses can help avoid these errors. For this purpose, we followed McLellan et al. (2012), who identified as incorrect all responses that scored higher than three on all negative statements. After pre-processing, 9 responses were withdrawn from the 146 received.
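For reference, the standard SUS scoring and the pre-processing rule described above can be sketched as follows; the example answers are invented for illustration:

```python
def sus_score(responses):
    """Standard SUS scoring: odd items are positive, even items negative."""
    total = 0
    for i, r in enumerate(responses, start=1):  # each r is in 1..5
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # rescale the 0-40 sum to 0-100

def is_suspect(responses):
    """Pre-processing rule (after McLellan et al., 2012): flag a response
    set whose negative (even-numbered) items all score above three."""
    negatives = responses[1::2]  # items 2, 4, 6, 8, 10
    return all(r > 3 for r in negatives)

participant = [4, 2, 4, 2, 4, 2, 5, 1, 4, 3]  # illustrative answers
if not is_suspect(participant):
    print(sus_score(participant))             # -> 77.5
```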
The average SUS score for all participants was 79, with a standard deviation of 8.74. According to the rule for interpreting the SUS questionnaire (Bangor et al., 2008), scores above 70 are acceptable, scores between 50 and 70 are marginally acceptable, and scores below 50 are considered unacceptable. On this scale, an average SUS score of 79 indicates that our tool is acceptable, with a rating between good and excellent (Figures 7 and 8). This score also corresponds to the 86th percentile, according to the standardisation presented by Sauro and Lewis (2011).
Figure 8: VEA Tool Usability: SUS Score (B)
It is interesting to look at the individual results for some of the basic questions of the SUS questionnaire (Table 2). We note that all the positive statements have a median >= 4, which indicates that the participants consider our system easy to use and usable by any individual. We also note that all the negative statements have a median <= 2, indicating that overall, the participants disagreed with these statements. The responses to Question 10, concerning “the need to master VR tools before feeling familiar with the VEA tool”, were neutral; this finding may be explained by the context of use of the VR tools (VR headset, joysticks/controllers).
Question | Median | SD | Mean |
---|---|---|---|
Q1 | 4 | 0.76 | 4.37 |
Q2 | 2 | 0.77 | 1.69 |
Q3 | 4 | 0.74 | 0.74 |
Q4 | 2 | 0.93 | 2.30 |
Q5 | 4 | 0.72 | 3.93 |
Q6 | 2 | 0.76 | 1.70 |
Q7 | 5 | 0.67 | 4.56 |
Q8 | 1 | 0.47 | 1.25 |
Q9 | 4 | 0.77 | 4.34 |
Q10 | 3 | 1.28 | 3.18 |
Table 2: Usability of the VEA tool: Individual analysis of each question
The aim of the second part of the questionnaire was to determine whether the tool allowed the students to correctly perform their practical work in virtual mode, and to evaluate the movements and interactions in the 3D environment and the different virtual actions that were performed. This part also assessed whether the students were satisfied with the functionalities of the tool. The final aim was to evaluate the potential of the tool in terms of the use of VR in the context of animal experimentation. A total of 137 of the 146 students indicated that they were able to move easily in the 3D virtual environment and to use teleportation.
We asked the participants whether the interactions with the 3D objects (rat, scissors, pliers, etc.) were intuitive and whether the objects were well placed in the environment, and 68 of the participants agreed. We also observed that 140 students passed the different steps of their virtual practical work. However, 88 students had difficulty in realising the action that allowed the wire to be placed around the rat's trachea, and this is one aspect that needs to be improved in the next version of the tool. We also found that students who had already used a VR headset passed the different steps of the practical work quickly (7 min), with the others taking longer (12 min). A total of 139 students were satisfied with the tool, and one reported a problem with the use of the VR headset. Overall, the students were very satisfied with the use of the VEA environment. They stated that it would allow them to: (1) master the protocol; (2) learn the various technical gestures before carrying out practical work; (3) have more confidence to do practical work on a real animal; and (4) train on the overall experiment process before conducting real practical work. They also made suggestions and recommendations relating to the improvement of the application's functionalities and the animations used for the various virtual actions. The proposals for improvement included:
- Offering students two levels of difficulty (with and without assistance);
- Allowing students to carry out their activities in groups (i.e. introducing the notion of collaborative educational activity in a VR context);
- Providing the ability to expand the areas used to perform a virtual task;
- Improving the animations of the virtual actions already developed in the VEA tool;
- Offering the possibility of using several rat models.
Conclusion
VR offers users new experiences through increasingly effective methods of interaction and immersion, which are of great interest in the field of education (Chen, 2006). The collaboration between our IEIAH-LIUM team and the educational team of the Biological Engineering Department (Laval University Institute of Technology, France) made it possible to propose a first alternative, VR-based solution to the problem of using the animal model for scientific purposes. To conduct this work, we adopted a co-design approach in which several teachers participated in the conception of the different VEA concepts through an iterative, user-centred process. The contribution proposed in this project provides important elements for our work within the applicative area of VR in the educational field. The objectives of this project are to explore research questions related to the design and operationalisation of educational situations in a VR context, and to provide models and tools that can help teachers to design, reuse and deploy their educational scenarios in VR learning environments (VRLEs) (Mahdi et al., 2019b). In addition, we aim to propose technical and methodological solutions that are both adaptable and reusable; in other words, solutions that can be used in various virtual environments, regardless of the field or the type of pedagogical activity to be performed.
The design and development process of a VRLE needs to consider the pedagogical requirements of teachers in order to fulfil their needs (Mahdi et al., 2019a; Mahdi et al., 2019b). The model was established through a collaboration between our research team and biology professors at the IUT. The educators helped us understand the stages of animal experimentation, which was crucial for accurately modelling these processes within the virtual environment. Their insights ensured that the pedagogical objectives were met and that the virtual actions effectively reflected the real-world procedures. Our interdisciplinary approach allowed us to integrate detailed biological protocols into the VR framework, enabling students to engage in immersive, hands-on learning experiences that mirror actual laboratory practices. The professors' expertise in animal experimentation informed the development of specific virtual actions, checkpoints, and parameters that aligned with educational goals and technical requirements. This collaboration highlights the framework's applicability across various educational scenarios, demonstrating its potential to enhance learning experiences in diverse fields beyond biology. By working closely with subject-matter experts, we ensured that the VR learning environment is both pedagogically sound and technically robust, providing a comprehensive tool for educators to design interactive and effective virtual learning experiences.

The experiments with the VEA prototype carried out by the first-year Biological Engineering students of the Laval University Institute of Technology demonstrated the importance and usefulness of VR in this type of educational context. These experiments also led to several proposals for improvements to the VEA tool, at both the artefact level (functionalities and architecture) and the scientific level (modelling of VR-oriented educational situations and their operationalisation). In future work, we intend to improve the current prototype (the VEA tool) by providing a new virtual platform for animal experimentation, with the aim of fulfilling the emerging requirements of teachers and students.

In summary, our current efforts have provided a solid foundation for the integration of VR technology into educational environments. Future initiatives will focus on the integration of gesture training and haptic feedback to improve the understanding of practical procedures. In addition, future research will focus on establishing content validity through expert judgement and assessing the transferability of skills acquired in VR to real-world applications. The aim of these efforts is to improve the field of education through the use of VR and to further validate the effectiveness of our teaching method in preparing students for real-world professional experiences.
Acknowledgements
The current work is supported by the Laval University Institute of Technology, France. The authors wish to thank everyone who contributed to this project.
Notes
- 1. https://ec.europa.eu/environment/chemicals/lab_animals/3r/alternative_en.htm
- 2. http://perso.univ-lemans.fr/~loubah/videos/VideoRatPresentation.mp4
- 3. https://lium-cloud.univ-lemans.fr/index.php/s/L3DB5rF899BaeGA
References
- Abdullah, L. N. (2010). Virtual Animal Slaughtering and Dissection via Global Navigation Elements. In Proceedings of the 2010 Second International Conference on Computer Research and Development (pp. 182-185).
- Apat, J. (2019). Froguts Virtual Dissection: Alternative to Physical Dissection for Biology Basic Education.
- Bakki, A., Oubahssi, L., George, S., & Cherkaoui, C. (2019). MOOCAT: A visual authoring tool in the cMOOC context. Education and Information Technologies, 24(2), 1185–1209. doi:10.1007/s10639-018-9807-2
- Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the system usability scale. International Journal of Human-Computer Interaction, 24(6), 574–594. doi:10.1080/10447310802205776
- Bielsa, V. (2021). Virtual reality simulation in plastic surgery training: Literature review. Journal of Plastic, Reconstructive & Aesthetic Surgery (JPRAS).
- Chen, C. J. (2006). The design, development and evaluation of a virtual reality based learning environment. Australasian Journal of Educational Technology, 22(1).
- Coquillart, S., Fuchs, P., Grosjean, J., Hachet, M., Moreau, G., & Guitton, P. (2011). Interaction technique for virtual behavioural primitives. Virtual Reality: Concepts And Technologies, 247-291.
- Debose, K. (2020). Virtual Anatomy: expanding veterinary student learning. Journal of the Medical Library Association (JMLA), 108, 647.
- Dede, C. (2009). Immersive Interfaces for Engagement and Learning. Science, 323(5910), 66–69.
- Djadja, D. J. D., Hamon, L., & George, S. (2019). Modeling and Evaluating of Human 3d+ t Activities in Virtual Environment. European Conference on Technology Enhanced Learning, 696–700.
- Fuchs, P. (2011). Theoretical and pragmatic approach to virtual reality. Virtual Reality, 11-44.
- Fuchs, P. (2017). Concepts of virtual reality. Virtual Reality Headsets, 9–22.
- Fuchs, P. (2018). The Challenges and Risks of Democratization of VR-AR. Virtual Reality and Augmented Reality: Myths and Realities, 289–301.
- Gil, A. M., de Barros Mendonça, P. R., & de Melo Monteiro, B. G. (2014). Unity3D-based Neuro-Evolutive Architecture to Simulate Player.
- Gomez, L. I. (2020). Immersive Virtual Reality for Learning Experiences. Lecture Notes in Educational Technology, 183–198.
- Grosso, N., & Sirota, A. (2019). Ratcave: A 3D graphics Python package for cognitive psychology experiments. Behavior Research Methods, 51, 2085.
- Guo, Z., Zhou, D., Zhou, Q., Zhang, X., Geng, J., Zeng, S., Lv, C., & Hao, A. (2020). Applications of virtual reality in maintenance during the industrial product lifecycle: A systematic review. Journal of Manufacturing Systems, 56, 525-538.
- Hamilton, D., McKechnie, J., Edgerton, E., & Wilson, C. (2020). Immersive virtual reality as a pedagogical tool in education: a systematic literature review of quantitative learning outcomes and experimental design. Journal of Computers in Education, 8, 1-32.
- Herrington, J., Reeves, T. C., & Oliver, R. (2013). Authentic Learning Environments. Handbook of Research on Educational Communications and Technology, 401–412.
- Hunt, J. A., Heydenburg, M., Anderson, S. L., & Thompson, R. R. (2020). Does virtual reality training improve veterinary students’ first canine surgical performance? Veterinary Record, 186(17), 562–562.
- Husain, A., Meenakshi, D. U., Ahmad, A., Shrivastava, N., & Khan, S. A. (2023). A Review on Alternative Methods to Experimental Animals in Biological Testing: Recent Advancement and Current Strategies. Journal of Pharmacy and Bioallied Sciences, 15(4), 165–171.
- Issa, T., & Isaías, P. (2022). Usability and Human–Computer Interaction (HCI).
- Johnson-Glenberg, M. C. (2018). Immersive VR and Education: Embodied Design Principles That Include Gesture and Hand Controls. Frontiers in Robotics and AI, 5.
- Johnston, E., Olivas, G., Steele, P., Smith, C., & Bailey, L. (2017). Exploring Pedagogical Foundations of Existing Virtual Reality Educational Applications: A Content Analysis Study. Journal of Educational Technology Systems, 46, 414.
- Kaupert, U., Thurley, K., Frei, K., Bagorda, F., Schatz, A., Tocker, G., Rapoport, S., Derdikman, D., & Winter, Y. (2017). Spatial cognition in a virtual reality home-cage extension for freely moving rodents. Journal of Neurophysiology, 117(4), 1736–1748
- Ketelhut, D. J., & Nelson, B. C. (2021). Virtual Learning Environments.
- Krajčovič, M., Gabajová, G., Matys, M., Grznár, P., Dulina, Ľ., & Kohár, R. (2021). 3D Interactive Learning Environment as a Tool for Knowledge Transfer and Retention. Sustainability, 13(14), 7916.
- Kugurakova, V. V., Golovanova, I. I., Kabardov, M. K., Kosheleva, Y. P., Koroleva, I. G., & Sokolova, N. L. (2023). Scenario approach for training classroom management in virtual reality. Online Journal of Communication and Media Technologies, 13(3), e202328.
- Lemos, M., Bell, L., Deutsch, S., Zieglowski, L., Ernst, L., Fink, D., Tolba, R., Bleilevens, C., & Steitz, J. (2022). Virtual Reality in Biomedical Education in the sense of the 3Rs. Laboratory Animals, 57, 160.
- Lili, N. A., & Norazura, M. A. (2012). Behavioral deformation model for virtual animal dissection. In 2012 8th International Conference on Information Science and Digital Content Technology (ICIDT2012) (Vol. 3).
- Mahdi, O., Oubahssi, L., Piau-Toffolon, C., & Iksal, S. (2019a). Assistance to Scenarisation of VR-Oriented Pedagogical Activities: Models and Tools.
- Mahdi, O., Oubahssi, L., Piau-Toffolon, C., & Iksal, S. (2019b). Towards an editor for VR-oriented educational scenarios. European Conference on Technology Enhanced Learning, 756–760.
- Manciocco, A., Chiarotti, F., Vitale, A., Calamandrei, G., Laviola, G., & Alleva, E. (2009). The application of Russell and Burch 3R principle in rodent models of neurodegenerative disease: The case of Parkinson’s disease. Neuroscience & Biobehavioral Reviews, 33, 18–32.
- Mao, R., Lan, L., Kay, J., Lohre, R., Ayeni, O., Goel, D., & Sa, D. (2021). Immersive Virtual Reality for Surgical Training: A Systematic Review. Journal of Surgical Research, 268, 40–58.
- Marion, N. (2010). Modélisation de scénarios pédagogiques pour les environnements de réalité virtuelle d’apprentissage humain [Modelling pedagogical scenarios for virtual reality learning environments]. http://www.theses.fr/2010BRES2009
- Marougkas, A., Troussas, C., Krouska, A., & Sgouropoulou, C. (2023). Virtual Reality in Education: A Review of Learning Theories, Approaches and Methodologies for the Last Decade. Electronics, 12(13), 2832.
- Martín-Gutiérrez, J., Mora, C. E., Añorbe-Díaz, B., & González-Marrero, A. (2017). Virtual technologies trends in education. Eurasia Journal of Mathematics, Science and Technology Education, 13.
- McIntosh, V. (2022). Dialing up the danger: Virtual reality for the simulation of risk. Frontiers in Virtual Reality, 3.
- Mclellan, S., Muddimer, A., & Peres, S. C. (2012). The Effect of Experience on System Usability Scale Ratings. Journal of Usability Studies, 7(2), 56–67.
- Mellet-d’Huart, D. (2021). Learning in Virtual Environments. Advances in Educational Technologies and Instructional Design, 1–19.
- Mikropoulos, T. A., & Natsis, A. (2011). Educational virtual environments: A ten-year review of empirical research (1999–2009). Computers & Education, 56(3), 769–780.
- Ormandy, E., Schwab, J. C., Suiter, S., Green, N., Oakley, J., Osenkowski, P., & Sumner, C. (2022). Animal Dissection vs. Non-Animal Teaching Methods. The American Biology Teacher, 84(7), 399–404.
- Oubahssi, L., & Mahdi, O. (2021). VEA: A Virtual Environment for Animal experimentation. 2021 International Conference on Advanced Learning Technologies (ICALT), 422–424.
- Ouramdane, N., Otmane, S., & Mallem, M. (2009). Interaction 3D en réalité virtuelle – État de l’art [3D interaction in virtual reality: State of the art]. Revue des Sciences et Technologies de l’Information – Série TSI: Technique et Science Informatiques, 28(8), 1017–1049.
- Patel, H., Stefani, O., Sharples, S., Hoffmann, H., Karaseitanidis, I., & Amditis, A. (2006). Human centred design of 3-D interaction devices to control virtual environments. International Journal of Human-Computer Studies, 64(3), 207–220.
- Pernin, J.-P., & Lejeune, A. (2006). Models for the re-use of learning scenarios. Imagining the Future for ICT and Education, IFIP Conference Proceedings, Ålesund, Norway.
- Predavec, M. (2001). Evaluation of E-Rat, a computer-based rat dissection, in terms of student learning outcomes. Journal of Biological Education, 35(2), 75–80.
- Richir, S., Fuchs, P., Lourdeaux, D., Millet, D., Buche, C., & Querrec, R. (2015). How to design compelling Virtual Reality or Augmented Reality experience? International Journal of Virtual Reality, 15, 35–47.
- Salinas, E., Cueva, R., & Paz, F. (2020). A Systematic Review of User-Centered Design Techniques. In: Marcus, A., & Rosenzweig, E. (Eds.).
- Sauro, J., & Lewis, J. R. (2011). When designing usability questionnaires, does it hurt to be positive? Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2215–2224.
- Setyowati, R. R., Rochmat, S., Aman, A., & Nugroho, A. N. P. (2023). Virtual Reality on Contextual Learning during Covid-19 to Improve Students’ Learning Outcomes and Participation. International Journal of Instruction, 16(1), 173–190.
- Shudayfat, E., & Alsalhi, N. (2023). Science learning in 3D virtual environment multi-users online in basic education stage. Eurasia Journal of Mathematics, Science and Technology Education.
- Stowers, J., Hofbauer, M., Bastien, R., Griessner, J., Higgins, P., Farooqui, S., Fischer, R., Nowikovsky, K., Haubensak, W., Couzin, I., Tessmar-Raible, K., & Straw, A. (2017). Virtual Reality for Freely Moving Animals. Nature Methods, 14, 995.
- Sumardani, D., & Lin, C.-H. (2023). Investigating the Factor that Influences the Implementation of Virtual Reality for Science Learning.
- Tadjine, Z., Oubahssi, L., Piau-Toffolon, C., & Iksal, S. (2015). A process using ontology to automate the operationalization of pattern-based learning scenarios. International Conference on Computer Supported Education, 444–461.
- Tang, F. M. K., Lee, R. M. F., Szeto, R. H. L., Cheung, J. C. T., & Ngan, O. M. Y. (2020). Experiential learning with virtual reality: animal handling training. Innovation and Education, 2(1).
- Vafai, N. M., & Payandeh, S. (2010). Toward the development of interactive virtual dissection with haptic feedback. Virtual Reality, 14(2), 85–103.
- Verderio, P., Lecchi, M., Ciniselli, C. M., Shishmani, B., Apolone, G., & Manenti, G. (2023). 3Rs Principle and Legislative Decrees to Achieve High Standard of Animal Research. Animals, 13(2), 277.
- Virzi, R. A. (1992). Refining the test phase of usability evaluation: How many subjects is enough? Human Factors, 34(4), 457–468.
- Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes.
- Wagner, C., & Liu, L. (2021). Creating Immersive Learning Experiences: A Pedagogical Design Perspective. Creative and Collaborative Learning through Immersion, 71–87.
- Wermann, J., & Pohn, B. (2022). VR Training for Laboratory Environments. 2022 International Conference on Software, Telecommunications and Computer Networks (SoftCOM).
- Wiley, D. A., et al. (2002). The instructional use of learning objects.
- Wu, H. (2009). Research of Virtual Experiment System Based on VRML.
- Yu, Z., & Xu, W. (2022). A meta-analysis and systematic review of the effect of virtual reality technology on users’ learning outcomes. Computer Applications in Engineering Education, 30(5), 1470–1484.
- Zhai, S., Buxton, W., & Milgram, P. (1994). The “silk cursor” investigating transparency for 3d target acquisition. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 459–464.
- Zhang, L., & Bowman, D. A. (2022). Exploring Effect of Level of Storytelling Richness on Science Learning in Interactive and Immersive Virtual Reality.