Why Do Light Field Displays Matter?
Human binocular vision and visual acuity, together with the 3D retinal and cortical processing performed by the eye and brain, are well suited to promoting situational awareness and understanding in the natural 3D world. The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of the scene and, as a result, reduces the cognitive load of analyzing and collaborating on complex tasks.
A light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows for perspective-correct visualization within the display’s projection volume. A light-field display is essentially a large plenoptic projector.
Whereas a plenoptic camera (as developed by Lytro and RayTrix) captures a light-field in the form of a radiance image of a defined spatial and angular resolution, a light-field display computes a synthetic radiance image from a 3D scene/model and projects the radiance image through a microlens array to construct a 3D visual. Binocular disparity, occlusion, specular highlights, gradient shading, and other expected depth cues are correct from the viewer’s perspective, as in the natural real-world light-field.
Light-field Visualization Performance Studies
In recent years, studies exemplifying the benefits of 3D light-field visualizations for mission/medical planning, training, and rehearsal have been performed:
· Investigating Geospatial Holograms for Special Weapons and Tactics Teams by Sven Fuhrmann et al., which found that mission planning with 3D light-field visuals increased mission performance by 25-53%.
· Medical Holography for Basic Anatomy Training by Matthew Hackett, which demonstrated a ~20% increase in memory retention, with a lower cognitive load, when training students with 3D visuals of human anatomy rather than traditional textbooks (data shown below).
In addition, collaboration among multiple viewers is a common occurrence. New visualization technologies will need to support multiple viewers without stereoscopic glasses or eye- or head-tracking peripherals. In some existing visualization environments, high-resolution 2D screens are used with special glasses or lenticular lenses to display 3D imagery, with software algorithms simulating perspective. As documented by the Air Force Research Laboratory (AFRL), many systems that provide 3D visualization through eye-tracked or stereo projection can induce eye fatigue and nausea in the viewer due to conflicting depth cues. In addition, these displays cannot be used for natural 3D collaboration, since the visualization offers only a single point of view (POV) or perspective. These factors greatly limit the effectiveness of eye-tracked or stereoscopic displays for viewing complex 3D data, especially in a collaborative environment.
Across commercial markets, access to 3D data is increasing exponentially, but the ability to exploit and visualize this data is not keeping pace. This increases the cognitive load on decision makers and analysts, slowing their ability to act. The ability to collaboratively view 3D content on a light-field display (LfD) enables faster decisions with greater confidence, reducing the ambiguity inherent in other forms of visualization.
What is a Light-field?
A light-field can be described as the set of rays that pass through every point in space and is typically parameterized for computer vision as a plenoptic function:

P = P(Vx, Vy, Vz, Theta, Phi, Lambda, t)

where (Vx, Vy, Vz) defines a point in space, (Theta, Phi) defines the orientation or direction of a ray, Lambda defines the wavelength, and t is the time component. By ignoring the time component, which is an attribute of each ray, the light-field can be reduced to a 6D plenoptic function, as shown in the diagram below, which can be described by a radiance image.
The light-field display radiance image is a raster description of a light-field which can be projected through a microlens array to reconstruct a 3D image in space. The radiance image consists of a 2D array of hogels (holographic elements) that represent the origin, direction, and intensity of the light rays within the light-field, as described by the plenoptic function in the diagram above.
In this manner, the light-field display radiance image is similar to the radiance image as captured by a plenoptic camera; however, the hogel can represent light rays on either side of the light-field display image plane as shown in the figure below. This capability effectively doubles the projection depth of a light-field display. The light-field display radiance image is synthetically constructed by rendering hogel views from the perspectives of an array of micro-lenses defined in model space, and as such, requires a 3D model of a scene as input to the radiance image/hogel rendering engine. Therefore, many hogel views are rendered to create one radiance image per update of the light-field display.
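To make the rendering workload above concrete, here is a minimal sketch of the sizing arithmetic: one hogel view is rendered per micro-lens, and each view contributes its angular samples to the radiance image. The function name and the example resolutions are illustrative assumptions, not FoVI 3D's actual specifications.

```python
def radiance_image_size(hogels_x, hogels_y, angular_x, angular_y):
    """Illustrative sizing: hogel views rendered and total pixels
    produced per radiance-image update (one view per micro-lens)."""
    views = hogels_x * hogels_y            # one hogel render per micro-lens
    pixels = views * angular_x * angular_y # every view adds its angular samples
    return views, pixels

# e.g. a hypothetical 512 x 512 micro-lens array with 32 x 32 angular
# samples per hogel
views, pixels = radiance_image_size(512, 512, 32, 32)
# 262,144 hogel views and 268,435,456 pixels for every display update
```

Even at these modest assumed resolutions, hundreds of millions of pixels must be recomputed per update, which is why the radiance image computation dominates the display's compute budget.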
FoVI 3D Light Field Display
As shown in Figure below, the 3D model input to the light-field display is sourced from a 3D application hosted on an external computer. The 3D application is responsible for fusing the appropriate 2D and 3D data sources to construct a 3D scene which is then streamed to the light-field display. In this manner, the actual light-field display is application and data agnostic and can be used for a variety of purposes such as battlespace management, simulation, and warfighter training. User interaction with the light-field is typically accomplished by use of a 3D wand/pointer, touch screen, or gesture recognition system. The host application is responsible for registering, tracking, and responding to input device actions within the light-field.
Within the light-field display proper there are four primary systems:
- Radiance Image (Hogel) Computation: Accepts a 3D model/scene and a 3D model of an image plane and generates the radiance image (hogel views). Whether internalized into the light-field display infrastructure or realized as a cluster of off-the-shelf CPUs/GPUs, this subsystem contributes most of the size, weight, and power (SWaP) cost of the light-field display as a whole.
- Drive Electronics: The drive electronics manage the delivery of the radiance image pixel data to the SLMs in the photonics subsystem.
- Photonics: The radiance image is converted into light-rays/photons by an array of spatial light modulators (SLMs), which project the light-rays through an array of micro-lenses (hogel optics).
- Hogel Optics: The micro-lenses that angularly distribute the radiance image light-rays to construct a full-parallax 3D scene that is perspective correct for all views/viewers within the light-field display’s projection frustum.
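The four subsystems above form a simple pipeline from hosted 3D scene to projected light-field. The sketch below shows only that dataflow; every function name and the placeholder data are illustrative assumptions, not FoVI 3D's actual hardware interfaces.

```python
def compute_radiance_image(scene, lens_centers):
    # Radiance Image (Hogel) Computation: one hogel view per micro-lens.
    return [("hogel_view", lens, scene) for lens in lens_centers]

def drive_electronics(radiance_image):
    # Drive Electronics: deliver radiance image pixel data to the SLMs.
    return [("slm_frame", hogel) for hogel in radiance_image]

def photonics(slm_frames):
    # Photonics: SLMs convert pixel data into light rays.
    return [("light_rays", frame) for frame in slm_frames]

def hogel_optics(light_rays):
    # Hogel Optics: micro-lenses angularly distribute the rays.
    return [("projected", rays) for rays in light_rays]

# A dummy scene streamed from a host 3D application (placeholder data),
# flowing through all four stages for a two-lens toy display.
projection = hogel_optics(photonics(drive_electronics(
    compute_radiance_image("scene", ["lens0", "lens1"]))))
```

The point of the structure is that the display itself stays application and data agnostic: only the host application upstream of `compute_radiance_image` knows what the scene means.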
DARPA 1st Generation Light Field Display Prototype
The origin of FoVI3D's light-field display technology was the DARPA Urban Photonic Sandtable Display (UPSD) program. Within the UPSD program, the first large-area, dynamic holographic light-field display table for real-time battlefield visualization was developed. Four of the prototype light-field displays were created and transitioned to research labs within the Department of Defense.
Below are videos of content being displayed on this 1st Generation Light Field Display:
FoVI 3D 2nd Generation Light Field Display
FoVI 3D is currently producing a 2nd Generation Light Field Display. This display will offer four times as many pixels as the 1st Generation display, significantly improving image fidelity and removing the visual artifacts seen in the 1st Generation display, including flicker, non-uniform color, and visible seams.
FoVI 3D will deliver the light field display developer kit shown below, FoVI DK2, in Q3 2017.
Hogel rendering is the process of rendering a synthetic radiance image/dataset for a light-field display. A hogel is similar to a micro image in a radiance image as captured by a plenoptic camera. The main distinction (besides being synthetically rendered) is that the hogel can represent light-rays on either side of the image plane. This ability allows for projected 3D light-field content to be seen on either side of the light-field display image plane, effectively doubling the projection depth.
Hogel rendering requires a 3D model and a mathematical description of the light-field display image plane; in particular, the exact location of the optical axis of each micro-lens within the projection system. A hogel is rendered at the center of each micro-lens in model space. Several algorithms exist for rendering hogels from polygonal models, including “orthographic slice and dice” and the fast double-frustum renderer described in the paper “Fast Computer Graphics Rendering for Full Parallax Spatial Displays”.
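A rough point-based sketch of the per-hogel projection idea follows. It is not the cited paper's actual renderer: each 3D point is projected through the lens center into that hogel's angular pixel grid, points on either side of the image plane fall into the back or front half of the double frustum, and each angular pixel keeps the sample nearest the plane. The function name, the symmetric direction mapping, and the simplified depth test are all assumptions for illustration.

```python
import numpy as np

def render_hogel(points, lens_center, ang_res, fov_deg=90.0):
    """Sketch: splat 3D points into one hogel's ang_res x ang_res
    angular pixel grid. Points with d[2] > 0 lie on one side of the
    image plane and d[2] < 0 on the other; both map into the same
    hogel, which is what doubles the usable projection depth."""
    half = np.tan(np.radians(fov_deg / 2.0))
    img = np.zeros((ang_res, ang_res))               # hit mask / intensity
    depth = np.full((ang_res, ang_res), np.inf)      # distance to plane
    for p in points:
        d = p - lens_center
        if abs(d[2]) < 1e-9:
            continue                                 # point lies in the plane
        # Simplified symmetric mapping for front and back frusta.
        u, v = d[0] / abs(d[2]), d[1] / abs(d[2])
        if abs(u) > half or abs(v) > half:
            continue                                 # outside this frustum
        i = int((u / half * 0.5 + 0.5) * (ang_res - 1))
        j = int((v / half * 0.5 + 0.5) * (ang_res - 1))
        if abs(d[2]) < depth[j, i]:                  # nearer the plane wins
            depth[j, i] = abs(d[2])
            img[j, i] = 1.0
    return img
```

For example, a point directly above the lens center and a point directly below it both land in the hogel's central angular pixel, one via the front frustum and one via the back, which is the distinction the text draws against a plenoptic camera's micro images.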
Example of dynamic/live light-field rendering from an extremely large model at different resolutions
Example, hogel rendering for a static display
Static Hogel Rendering on Raspberry Pi2 Model b
Light-field Display Metrology
In recent years, a number of technology companies have produced auto-stereoscopic 3D volumetric and light-field displays and prototypes of varying capability. Qualifying their performance has been a subjective exercise, as there is no common basis for field-of-light display (FoLD) metrology. Comparable metrology and standards definitions were paramount to the successful commercialization and adoption of 2D displays: terms and concepts such as contrast ratio and pixels-per-inch are now common expressions for advertising and defining 2D display capability.
Many 2D display metrics, such as contrast ratio, can be applied directly to the metrology of light-field displays. However, because a FoLD produces a 3D projection within a prescribed visualization volume, determining the contrast ratio of a light-field display will require a different approach from that used to measure the contrast ratio of a 2D display. Existing 2D image and display terms such as vignetting may require extra qualification when applied to describing the reduction of brightness by position within a 3D volume.
In addition, new metrology concepts will need to be explored to describe the FoLD contrast ratio as a function of location within the visualization volume or as a function of viewing perspective if the FoLD contrast is not uniform throughout the projection volume. Novel terms, such as perspective depth, may need to be defined to help describe FoLD attributes such as contrast when viewed from a particular perspective and/or depth within the visualization volume. This will help create a common reference for understanding and describing light-field display capabilities.
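One hedged way such a perspective-dependent contrast metric could be computed is sketched below, assuming white/black luminance pairs have been measured at several viewing perspectives within the volume. The function name and the min/max "uniformity" figure are illustrative assumptions, not an established FoLD standard.

```python
import numpy as np

def perspective_contrast(measurements):
    """measurements: list of (white_luminance, black_luminance) pairs,
    one pair per sampled viewing perspective within the volume.
    Returns the contrast ratio at each perspective plus a simple
    uniformity figure (worst ratio divided by best ratio)."""
    lum = np.asarray(measurements, dtype=float)
    ratios = lum[:, 0] / lum[:, 1]          # per-perspective contrast ratio
    uniformity = ratios.min() / ratios.max()  # 1.0 means perfectly uniform
    return ratios, uniformity

# Hypothetical luminance measurements (cd/m^2) at three perspectives
ratios, uniformity = perspective_contrast(
    [(200.0, 1.0), (150.0, 1.0), (100.0, 1.0)])
```

A uniformity figure well below 1.0 would flag exactly the situation described above: contrast that varies with viewing perspective and therefore cannot be summarized by a single 2D-style number.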
The effect of light-field attributes such as contrast on the human visual system will also need to be explored and evaluated. For example, small changes in contrast greatly affect the human perception of depth in the natural world. The ability of the light-field display to project discernable depth separation along or between surfaces at different depths is a key factor in human spatial acuity and can be measured to derive a depth-contrast sensitivity metric for the FoLD.
It is also important that the terminology and procedures used to describe FoLD metrology be agnostic to the applied FoLD technology so that display capabilities can be fairly compared during the evolution and eventual productization of light-field displays.
FoVI 3D's FoLD metrology solution will play a key role in evaluating and qualifying the performance of 3D light-field displays by defining and automating the FoLD metrology process. The intent of this program is to develop practical, affordable, and repeatable evaluation procedures and to design a suite of light-field metrology applications that can qualify any 3D light-field display and produce results usable for FoLD evaluation regardless of the FoLD implementation technology.