Light-field Display, Rendering, Calibration, and Metrology

Light-field Display Architecture


A light-field display (LfD) projects synthetically rendered light rays with all of the essential depth and color cues that enable viewers to perceive a natural 3D scene. The LfD converts 3D scene data into light rays through its radiance-image computation subsystem. The radiance image is a raster (pixel) representation of the light field, in which every pixel represents the position and orientation of a light ray passing through the display surface. The radiance image is converted into actual light rays by an array of ultra-high-resolution spatial light modulators (SLMs), and those rays are then angularly distributed by a microlens array without regard to the number of viewers, their positions, or their gaze directions.
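To make the radiance-image layout concrete, the sketch below shows one common way to organize it: a regular grid of hogels, each holding a small block of directional samples. The names and dimensions here are illustrative assumptions, not FoVI3D specifications.

```python
import numpy as np

# A minimal sketch of a radiance-image layout: a regular grid of hogels
# (one per microlens), each holding a square block of directional samples.
# All dimensions are illustrative, not FoVI3D specifications.

HOGELS_X, HOGELS_Y = 64, 64      # hogels across the display surface
DIRS_U, DIRS_V = 32, 32          # directional (angular) samples per hogel

# One RGB value per (hogel, direction) pair.
radiance_image = np.zeros((HOGELS_Y * DIRS_V, HOGELS_X * DIRS_U, 3), dtype=np.uint8)

def write_ray(hx, hy, u, v, rgb):
    """Write the color of the ray leaving hogel (hx, hy) in direction sample (u, v)."""
    row = hy * DIRS_V + v
    col = hx * DIRS_U + u
    radiance_image[row, col] = rgb

# Example: the ray leaving the central hogel straight along the display normal.
write_ray(HOGELS_X // 2, HOGELS_Y // 2, DIRS_U // 2, DIRS_V // 2, (255, 255, 255))
```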

Light-field Hogel Optics

 

In a light-field display (LfD), optical elements project the light emanating from a spatial light modulator (SLM) toward the viewer or viewers. A correct, optimized optical design is critical for preserving detail within the display's 3D aerial projection; the intent of a good optical design is that it never be the limiting factor in the quality of the projected 3D image.

 

The hogel optics consist of an array of microlenses that angularly distribute the light produced by the photonics subsystem to create the light field.

As mentioned earlier, the light-field display projects the light rays that would naturally reflect off an object. Consequently, each eye sees only one ray from each hogel microlens at any time, and as the viewer moves within the light field, each hogel optic presents a different ray appropriate to the new viewing position.

The aggregation of these rays by the human visual system (HVS) allows depth in the scene to be perceived naturally.
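As a rough illustration of how a pixel's position under its lenslet determines the direction of the emitted ray, the following sketch uses an idealized pinhole-lenslet model; the focal length is an assumed value for illustration only.

```python
import numpy as np

# Idealized pinhole-lenslet model: a pixel's offset from the lenslet center,
# together with the lenslet focal length, determines the direction of the
# emitted ray.  The focal length is an assumed, illustrative value.

FOCAL_LENGTH_MM = 2.5    # distance from the SLM plane to the lenslet

def ray_direction(offset_x_mm, offset_y_mm):
    """Unit direction of the ray emitted by a pixel offset from the lenslet axis."""
    # The lenslet inverts: a pixel offset toward +x emits a ray tilted toward -x.
    d = np.array([-offset_x_mm, -offset_y_mm, FOCAL_LENGTH_MM])
    return d / np.linalg.norm(d)

print(ray_direction(0.0, 0.0))   # on-axis pixel -> straight out along the normal
print(ray_direction(0.25, 0.0))  # offset pixel  -> tilted ~5.7 degrees off-normal
```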

Photonics Arrays

 

The spatial light modulator (SLM) in a light-field display (LfD) converts the radiance image into photons and is a critical factor in the quality of the projected 3D aerial image. Generally, smaller pixels (i.e., more pixels in the same area) yield higher-quality projected images, provided the detail can be preserved when the light is angularly distributed by the hogel optics. The goal in designing an LfD is therefore to produce the smallest possible pixel in the largest continuous bed of pixels. For reference, a 55" 4K UHD TV has pixels of roughly 0.3 mm; a pixel bed of that size would be ideal, but the pixel pitch is prohibitively large.

 

The purpose of the light-field photonics is to convert the raster (radiance image) representation of light rays into actual rays of light; the pixel density of the photonics subsystem is therefore crucial to high-fidelity light-field images. While, as noted above, a 4K UHD TV has ~330 µm pixels and is intended to be viewed from across a room, the light-field display technology employed at FoVI3D uses 5–10 µm pixels and is designed to be viewed and interacted with at arm's length. A single light-field display may therefore have hundreds of millions of pixels/rays.
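The pixel-count arithmetic behind that statement is easy to sketch. The active-area size below is an assumption chosen only to illustrate the scaling; it is not a FoVI3D product dimension.

```python
# Back-of-the-envelope pixel counts showing why pixel pitch drives ray count.
# The active-area size is an assumption for illustration, not a product spec.

display_width_mm, display_height_mm = 300.0, 300.0

def pixel_count(pitch_um):
    per_row = display_width_mm * 1000.0 / pitch_um
    per_col = display_height_mm * 1000.0 / pitch_um
    return per_row * per_col

print(f"~330 um (UHD-TV-class) pitch: {pixel_count(330):,.0f} pixels")  # ~0.8 million
print(f"  10 um pitch:                {pixel_count(10):,.0f} pixels")   # ~900 million
print(f"   5 um pitch:                {pixel_count(5):,.0f} pixels")    # ~3.6 billion
```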

 

Light-field Rendering

In the context of the light-field display, light-field rendering is the process by which the synthetic light-field radiance image is computed. The radiance image is a raster description of a light field in which pixels represent the origin, direction, and intensity of light rays. Whereas a light-field camera captures a radiance image by segmenting incoming light through a microlens array, preserving the spatial and angular detail of the rays as pixels, the light-field display computes a synthetic radiance image from a 3D scene/model and projects it through a microlens array to construct a 3D aerial image. Binocular disparity, occlusion, specular highlights, gradient shading, and the other expected depth cues within the 3D aerial image are correct from the viewer's perspective, just as in a natural real-world light field.
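A minimal sketch of one way such a radiance image can be rendered is shown below: each hogel is treated as a tiny camera on the display plane, and a small block of directional samples is traced for it. The grid sizes, field of view, and the trace_scene stub are illustrative assumptions, not FoVI3D's rendering pipeline.

```python
import numpy as np

# A minimal per-hogel rendering sketch: one small view is rendered for each
# hogel on the display plane.  Scene intersection/shading is stubbed out;
# grid sizes and field of view are illustrative assumptions.

HOGELS_X, HOGELS_Y = 64, 64
DIRS_U, DIRS_V = 32, 32
HOGEL_PITCH_MM = 1.0
FOV_DEG = 90.0                       # full angular spread of each hogel

def hogel_center(hx, hy):
    """Position of hogel (hx, hy) on the display plane (z = 0), in mm."""
    x = (hx - HOGELS_X / 2 + 0.5) * HOGEL_PITCH_MM
    y = (hy - HOGELS_Y / 2 + 0.5) * HOGEL_PITCH_MM
    return np.array([x, y, 0.0])

def trace_scene(origin, direction):
    """Placeholder for the actual scene intersection and shading."""
    return (0, 0, 0)

def render_hogel(hx, hy):
    """Render the DIRS_U x DIRS_V block of rays for one hogel."""
    block = np.zeros((DIRS_V, DIRS_U, 3), dtype=np.uint8)
    origin = hogel_center(hx, hy)
    half = np.tan(np.radians(FOV_DEG / 2))
    for v in range(DIRS_V):
        for u in range(DIRS_U):
            # Map the angular sample index to a direction within the hogel frustum.
            dx = ((u + 0.5) / DIRS_U * 2 - 1) * half
            dy = ((v + 0.5) / DIRS_V * 2 - 1) * half
            d = np.array([dx, dy, 1.0])
            block[v, u] = trace_scene(origin, d / np.linalg.norm(d))
    return block
```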


Multi-view Processing Unit

 

FoVI3D designs electronics for every element of the light-field display infrastructure, including:

  • A specialized Multi-view Processing Unit (MvPU) that renders the light-field radiance image at reduced SWaP-C compared with a traditional CPU/GPU rendering solution.

  • Data distribution electronics that act as an interface between graphics rendering engines and our light-field display technology.

  • Photonics drivers and elements that convert bytes to photons.

  • Anything and everything else related to moving light-field data (a rough bandwidth sketch follows below).
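The last item hints at the scale of the problem; a back-of-the-envelope sketch, with an assumed resolution and frame rate rather than MvPU specifications, shows why dedicated data-distribution hardware is attractive.

```python
# Rough, uncompressed bandwidth of a radiance image at video rates.  The
# resolution and frame rate are illustrative assumptions, not MvPU specs.

rays_per_frame = 128 * 128 * 64 * 64   # hogels x directional samples per hogel
bytes_per_ray = 3                      # 8-bit RGB
fps = 30

bytes_per_second = rays_per_frame * bytes_per_ray * fps
print(f"{rays_per_frame / 1e6:.0f} Mrays/frame -> "
      f"{bytes_per_second / 1e9:.1f} GB/s uncompressed")
# ~67 Mrays/frame -> ~6.0 GB/s, far beyond a single conventional video link.
```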

 

 

 


Calibration

 

All types of displays (2D or 3D) require calibration.  Light-field displays are no different.  The aggregation of millions of tiny rays of light into a high-fidelity 3D aerial image is no trivial task.  Even the smallest misalignments in the subsystems will interfere with the intended distribution of light rays, introducing blur into the 3D aerial image.

To produce a crisp, high-fidelity light-field projection, FoVI3D has developed a patented process to calibrate light-field displays.   This complex calibration process analyzes thousands of images captured from multiple perspectives and applies advanced algorithms to optimize the light-field projection through a series of spatial and color transforms.
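The patented process itself is not described here. Purely as an illustration of the kind of correction such a calibration might produce, the sketch below applies an assumed per-hogel spatial offset and color gain to one hogel's block of the radiance image.

```python
import numpy as np

# A deliberately generic sketch of a calibration output: a per-hogel spatial
# offset plus a per-channel color gain applied when the radiance image is
# written out.  This illustrates the idea only; it is not FoVI3D's patented
# calibration process.

DIRS_U, DIRS_V = 32, 32

def apply_correction(hogel_block, offset_uv, color_gain):
    """Shift a hogel's directional block and rebalance its color."""
    corrected = np.roll(hogel_block, shift=offset_uv, axis=(0, 1))
    corrected = corrected.astype(np.float32) * color_gain
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Example: calibration measured a 1-sample vertical misalignment and a slightly
# weak blue channel for this hogel (values are made up for illustration).
block = np.full((DIRS_V, DIRS_U, 3), 128, dtype=np.uint8)
fixed = apply_correction(block, offset_uv=(1, 0), color_gain=np.array([1.0, 1.0, 1.08]))
```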
 

 
 

Light-field Display Metrology

 

There can be no acceptance of a display technology without standardization. FoVI3D, with support from members of the International Committee for Display Metrology (ICDM), is defining light-field metrology metrics and developing a Light-field Metrology Application (LMA) system that automates the measurement and qualification of light-field displays in four steps:

1. Projection: A series of 3D metrology reference models is rendered and projected within the display's 3D visualization volume.

2. Capture: The projected 3D references are imaged by a camera system from multiple perspectives.

3. Quantization: The captured images are analyzed and spatially decomposed into an appropriate 3D voxel database (see the sketch following this list).

4. Qualification: In the final phase, the performance experiments are summarized and analyzed to produce metrology metrics and reports.
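As a rough illustration of the quantization step referenced above, the sketch below bins sample points recovered from the multi-perspective captures into a voxel grid covering an assumed visualization volume; all dimensions and names are illustrative.

```python
import numpy as np

# A minimal sketch of the "quantization" step: sample points recovered from
# the multi-perspective captures are binned into a voxel grid covering the
# display's visualization volume.  Extent and resolution are assumptions.

VOLUME_MIN = np.array([-150.0, -150.0, -75.0])   # mm, assumed visualization volume
VOLUME_MAX = np.array([150.0, 150.0, 75.0])
GRID = np.array([128, 128, 64])                  # voxels per axis

def voxelize(points_mm, values):
    """Accumulate measured intensities (NumPy arrays) into a 3D voxel database."""
    volume = np.zeros(GRID, dtype=np.float32)
    counts = np.zeros(GRID, dtype=np.int32)
    idx = ((points_mm - VOLUME_MIN) / (VOLUME_MAX - VOLUME_MIN) * GRID).astype(int)
    inside = np.all((idx >= 0) & (idx < GRID), axis=1)
    for (i, j, k), v in zip(idx[inside], values[inside]):
        volume[i, j, k] += v
        counts[i, j, k] += 1
    # Average the samples that landed in each occupied voxel.
    return np.divide(volume, counts, out=np.zeros_like(volume), where=counts > 0)
```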

 

Simple Cognitive Based Visualization

 

The Simple Cognitive Based Visualization program at FoVI3D focuses on developing an advanced multi-human-machine interaction environment within which information can be visualized in multiple modalities and interacted with via intuitive touch gestures. The end goal of the program is to establish a paradigm through which knowledge and understanding can be transferred from person to person more rapidly, enabling a revolution in collaborative human-computer tasks.

 

Simple Cognitive Based Visualization

The human visual system, proprioceptors, and cognitive processes have co-evolved to provide a rich, accurate understanding of the 3D world at arm's length. Historically, direct observation and manipulation in this space have been at the forefront of understanding and communication via kinesthetic, hands-on learning. Situational awareness is the cornerstone of effective higher-level executive function, and shared situational awareness is the cornerstone of effective group action. We now have the capability to see patterns of movement from space, communicate in real time across the globe, relay multiple perspectives of situational awareness simultaneously, and enact change instantaneously.

Information, knowledge, and documentation continue to proliferate via novel technologies, presenting the challenge of compressing data on a global scale into the one-meter sphere of understanding humans have operated within for millennia. The human mind's ability to comprehend the terabytes of data being collected has reached a breaking point that requires computer augmentation to package understanding for us. The capability of computers to store, access, and visualize information is invaluable to their human users, but the interface by which humans and computers interact has not yet successfully broken out of the two-dimensional paradigm.

FoVI3D's light-field display (LfD) aims to bridge the cognitive human-computer interface gap by presenting fully 3D, perspective-correct projections of a virtual scene to multiple viewers simultaneously. This disruptive technology makes it possible to resolve all 3D structure within a scene and to gesture naturally within the scene itself. True three-dimensional visualization improves our spatial understanding of a scene and, as a result, reduces the cognitive load accompanying analysis and collaboration on complex tasks. Presenting fully three-dimensional information visualizations to users in a glasses-free environment is the next revolution in information visualization technology. FoVI3D is developing concepts that aim to reduce the cognitive load of the Military Decision Making Process (MDMP) via light-field technology. This concept was initiated as part of the Army SBIR A16-044, Simple Cognitive Based Visualization.