News


FOVI 3D: Technical Deep Dive

Thursday, January 24, 1-4pm, Wright Gallery Lecture Room (ARCA 212)

With Thomas Burnett, CTO, and Viz graduates Christopher Portales and Rathinavel Sankaralingam

This technical deep dive presentation will review:

Human binocular vision and acuity, together with the 3D retinal processing performed by the eye and brain, promote situational awareness and understanding in the natural 3D world. The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of that scene and thereby reduces the cognitive load of analyzing and collaborating on complex tasks.

A light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows perspective-correct visualization within the display's projection volume. Binocular disparity, occlusion, specular highlights, gradient shading, and other expected depth cues are correct from the viewer's perspective, just as in the natural real-world light field.

Light-field displays are no longer a science fiction concept, and a few companies are producing impressive light-field display prototypes. This presentation will review the application-agnostic light-field display architecture being developed at FoVI3D. In addition, the presentation will discuss the significance, properties, and characteristics of light-field displays, as well as the challenges of generating and distributing the radiance image that drives them.
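To make the rendering challenge concrete, here is a minimal sketch of what a radiance image is: a 2D grid of hogels (holographic elements), where each hogel stores one radiance sample per projected ray direction. All names and the per-ray shading here are illustrative placeholders, not FoVI3D's actual architecture or API.

```python
# Sketch: a radiance image as a grid of hogels, each holding one sample
# per ray direction. Every name here is hypothetical and illustrative.

def render_ray(hogel_x, hogel_y, dir_u, dir_v):
    """Hypothetical per-ray shader returning a grayscale radiance value.

    A real renderer would trace or rasterize the scene along the ray that
    this hogel projects in direction (dir_u, dir_v); a toy pattern is used
    here so the sketch runs end to end.
    """
    return (hogel_x + hogel_y + dir_u + dir_v) % 256

def render_radiance_image(hogel_cols, hogel_rows, dirs_u, dirs_v):
    """Render every ray of every hogel; the full result is the radiance image."""
    return [
        [
            [[render_ray(hx, hy, u, v) for u in range(dirs_u)]
             for v in range(dirs_v)]
            for hx in range(hogel_cols)
        ]
        for hy in range(hogel_rows)
    ]

# A tiny 4x4-hogel display with 8x8 ray directions per hogel already
# requires 4 * 4 * 8 * 8 = 1024 ray renders per frame; production displays
# multiply both factors enormously, which is why generating and
# distributing the radiance image is a central challenge.
image = render_radiance_image(4, 4, 8, 8)
```

Because every hogel is an independent render, the workload parallelizes naturally but also explodes in size, motivating the distribution problem the presentation addresses.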

For the past 15 years, Thomas Burnett has been developing static and dynamic light-field display solutions. While at Zebra Imaging, Thomas was a key contributor to the development of static light-field topographic maps used by the Department of Defense in Iraq and Afghanistan. He was the computation architect for the DARPA Urban Photonic Sandtable Display (UPSD) program, which produced several large-area light-field display prototypes for human factors testing and research.

More recently, Thomas launched a new light-field display development program at FoVI3D where he serves as CTO.  FoVI3D is developing a next-generation light-field display architecture and display prototype to further socialize the cognitive benefit of spatially accurate 3D aerial imagery.


Collaboration spotlight 

InNervate AR: Creative Anatomy Collective  

An ongoing collaboration between Visualization and Anatomy students lets users dynamically interact with canine anatomy using Augmented Reality.

Created by Margaret Cook, graduate student in the Visualization department, InNervate AR is a mobile application for undergraduate canine anatomy education. Margaret pushes the boundaries of anatomy education by offering students a set of dynamic interactions to demonstrate relationships between the nerves and muscles of the canine front leg.

Using InNervate AR, a user can view the canine front leg on a mobile phone once the phone's camera scans a visual marker, then explore the bones, nerves, and muscle groups. A second module focuses on the nerves of the canine front limb, which students usually see only labeled in diagrams. When anatomy students are asked about the repercussions of damage at various points along a nerve's length, they often have trouble mentally visualizing an answer. This mobile AR application lets students first view a "healthy" animation of the leg's range of motion, then choose where to cut a nerve and watch an animation demonstrating which muscles have lost the ability to move. Students can therefore better visualize how ranges of muscle movement change, and which muscles are affected, based upon which nerve functions remain.
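The cut interaction described above boils down to a mapping from each nerve to the muscles it innervates, ordered proximal to distal: cutting a nerve at a point disables every muscle innervated at or beyond that point. The following is a minimal sketch of that logic; the nerve and muscle names and the data structure are illustrative assumptions, not taken from InNervate AR's implementation.

```python
# Hypothetical innervation map for the canine front limb: each nerve maps
# to the muscles it innervates, listed proximal to distal. Data is
# illustrative only.
INNERVATION = {
    "radial": [
        "triceps brachii",
        "extensor carpi radialis",
        "common digital extensor",
    ],
    "musculocutaneous": ["biceps brachii", "brachialis"],
}

def cut_nerve(nerve, cut_index):
    """Return the muscles that lose function when `nerve` is cut.

    `cut_index` is how far along the nerve (proximal to distal) the cut
    falls; muscles innervated at or distal to that point are disabled.
    """
    return INNERVATION[nerve][cut_index:]

# Cutting the radial nerve distal to its triceps branch spares the
# triceps but disables the more distal extensors.
disabled = cut_nerve("radial", 1)
# -> ["extensor carpi radialis", "common digital extensor"]
```

In the application, a set like `disabled` would then select which range-of-motion animation to play, showing the student the functional consequence of that specific cut.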

Margaret and her research team aim to provide an engaging way for anatomy students to dynamically interact with anatomical content and, as a result, feel more confident in their clinical and critical thinking skills. This project is a collaboration between two colleges: Dr. Jinsil Hwaryoung Seo, Austin Payne, and Michael Bruner of the Visualization department in the College of Architecture, and Dr. Michelle Pine of Veterinary Integrative Biosciences in the College of Veterinary Medicine and Biomedical Sciences.