

IAC Virtual Reality Research with TAMU College of Nursing

(Continued from Home Page)

The research teams have created multiple Virtual Reality nursing education applications. Virtual reality simulations provide an extra learning tool to support our nursing students as they continue to develop their skills and advance their knowledge in providing care to patients.

Antepartum VR is a 360° VR application in which nursing students learn and practice the antepartum assessment process (the prenatal care a mother receives between conception and delivery). Incorporating these simulations as a supplemental teaching tool gives nursing students additional opportunities to grow their confidence as they advance in their education. The application has been tested in Dr. Wells-Beede's class.

The SBIRT (Screening, Brief Intervention, and Referral to Treatment) VR application is a collaborative virtual reality application that connects student and instructor users simultaneously in a virtual exam room (Figure 2). It consists of two applications: an administrative application that an instructor controls from any personal computer, and a VR application built for the Oculus Quest 2 that the student uses. Together, they create a simulated experience in which an instructor and a student can run through an SBIRT scenario in a shared virtual space.

The Forensic Nursing VR application was developed in collaboration with Dr. Stacey Mitchell, director of the Center of Excellence in Forensic Nursing at Texas A&M University. This application gives nursing students and nursing professionals a dynamic, interactive experience in a fully immersive virtual environment. In the simulation, a user learns how to carry out the evidence-collection process for victims of violence: those who have endured sexual assault, intimate partner violence, neglect, or any other form of intentional injury. Through this application, nursing students are trained to treat these victims as patients, keeping them and all they have endured at the center of that treatment.

In the process of developing these applications, multiple visualization students have participated in 3D asset creation, virtual reality development and testing. 




Upwell: Performative Immersion

Upwell is a mixed reality performance that grew out of an ongoing collaboration between the Department of Visualization and the Dance Program. Upwell allows audience members to explore the virtual and physical worlds with two dancers. The environment evokes the feeling of being underwater. A dancer wearing a conventional VR head-mounted display and custom wearable controllers can navigate a room-scale virtual reality setup and interact with dynamic visual and sound elements. Because the controllers are worn on the palms, she can make intricate gestures to develop direct relationships with bioluminescent particles in the virtual water. The other dancer interacts only with the visuals created by the VR dancer, without directly perceiving the virtual world.

Upwell consists of both virtual reality components and dance. The virtual reality custom controllers and environment were created by former students Michael Bruner and Nathan Ayres, along with faculty member Jinsil Hwaryoung Seo from the Department of Visualization. The dance choreography was created by former students Sarah Behseresht, Hannah Jenke, and Kali Taft under the guidance of faculty members Alexandra Pooley and Christine Bergeron from the Dance program.

After the students from both departments graduated, the project was maintained and revised by VR developer Austin Payne (VIZ). Current students Ashlyn Thompson and Kelsey Clark (Dance) modified the choreography and have been rehearsing it. The project was recently accepted to the Art Track Performance program of the International Conference on Tangible, Embedded, and Embodied Interaction (TEI), to be held at the Tempe Center for the Arts. This is a highly competitive venue focused on innovative technology and the arts. The newly formed team will present Upwell at the TEI Performance show. The project offers a unique experience to dancers and audience members alike, sharing virtual experiences through a projected illusion, and it opens a meaningful new practice in dance and the interactive arts.

Upwell has been supported by AVPA (Academy of Visual and Performing Arts) and IAC (Institute for Applied Creativity) at Texas A&M University.

Click here to see the project webpage. 




FOVI 3D: Technical Deep Dive

With Thomas Burnett, CTO; and Viz graduates Christopher Portales and Rathinavel Sankaralingam 

Thursday, January 24, at 1pm - 4pm, Wright Gallery Lecture Room (ARCA 212)

This technical deep dive presentation will review:

Human binocular vision and acuity, together with the 3D retinal processing performed by the eye and brain, are well suited to promoting situational awareness and understanding in the natural 3D world. The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of the scene and thereby reduces the cognitive load of analyzing and collaborating on complex tasks.

A light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows perspective-correct visualization within the display's projection volume. Binocular disparity, occlusion, specular highlights, gradient shading, and other expected depth cues are correct from the viewer's perspective, as in a natural real-world light field.

Light-field displays are no longer a science-fiction concept, and a few companies are producing impressive light-field display prototypes. This presentation will review the application-agnostic light-field display architecture being developed at FoVI3D. In addition, it will discuss the significance, properties, and characteristics of light-field displays, as well as the challenges of generating and distributing the radiance images they render.

For the past 15 years, Thomas Burnett has been developing static and dynamic light-field display solutions.  While at Zebra Imaging, Thomas was a key contributor in the development of static light-field topographic maps for use by the Department of Defense in Iraq and Afghanistan.  He was the computation architect for the DARPA Urban Photonic Sandtable Display (UPSD) program which produced several large-area, light-field display prototypes for human factors testing and research. 

More recently, Thomas launched a new light-field display development program at FoVI3D where he serves as CTO.  FoVI3D is developing a next-generation light-field display architecture and display prototype to further socialize the cognitive benefit of spatially accurate 3D aerial imagery.


Click here to view the presentation slides.




InNervate AR: Creative Anatomy Collective  

An ongoing collaboration between Visualization and Anatomy students lets learners dynamically interact with canine anatomy using Augmented Reality.

Created by Margaret Cook, graduate student in the Visualization department, InNervate AR is a mobile application for undergraduate canine anatomy education. Margaret pushes the boundaries of anatomy education by offering students a set of dynamic interactions to demonstrate relationships between the nerves and muscles of the canine front leg.

Using InNervate AR, a user can view the canine front leg on a mobile phone once the phone's camera scans a visual marker. The user can then explore the bones, nerves, and muscle groups. A second module focuses on the nerves of the canine front limb, which are usually only labeled in diagrams for students. When anatomy students are asked about the repercussions of damage at various points along a nerve's length, they often have trouble mentally visualizing an answer. This mobile AR application lets students first view a "healthy" animation of the leg's range of motion. The user can then choose where to cut a nerve and watch an animation demonstrating which muscles have lost the ability to move. As a result, students can better visualize how ranges of muscle movement change, based on which nerve functions remain.

Margaret and her research team aim to provide an engaging way for anatomy students to dynamically interact with anatomical content, and as a result, feel more confident in their clinical and critical thinking skills. This project is a collaboration between two colleges: Dr. Jinsil Hwaryoung Seo, Austin Payne and Michael Bruner of the Visualization department in the College of Architecture, and Dr. Michelle Pine of the Veterinary Integrative Biosciences in the College of Veterinary Medicine and Biomedical Sciences.