Light-field imaging, rendering, and metrology

Light-field Display Architecture

[Figure 1: Light-field display architecture (LightFieldArchitecture.jpg)]

A light-field display projects 3D aerial imagery that is visible to the unaided eye (without glasses or head tracking) and allows perspective-correct visualization within the display’s projection volume.  A light-field display is essentially a large plenoptic projector.  Whereas a plenoptic camera captures a light-field in the form of a radiance image of a defined spatial and angular resolution, the light-field display computes a synthetic radiance image from a 3D scene/model and projects the radiance image through a microlens array to construct a 3D visual.  Binocular disparity, occlusion, specular highlights, gradient shading, and other expected depth cues are correct from the viewer’s perspective, as in the natural real-world light-field.

As shown in Figure 1, a 3D application is responsible for fusing the appropriate 2D and 3D data sources into a 3D model/scene, which is then streamed to the light-field display.  In this manner, the light-field display itself is application- and data-agnostic and can be used for a variety of purposes, such as battlespace management and medical simulation/training.  User interaction with the light-field is typically accomplished with a 3D wand/pointer, touch screen, or gesture-recognition system.  The host application is responsible for registering, tracking, and responding to input-device actions within the light-field.

Within the light-field display proper there are four primary systems (a back-of-the-envelope sizing sketch follows the list):

  1. Radiance Image (Hogel) Computation
    • Accepts a 3D model/scene and a 3D model of an image plane and generates the radiance image (hogel views).  
  2. Drive Electronics
    • The drive electronics manage the delivery of the radiance image pixel data to the spatial light modulators (SLMs) in the photonics subsystem.
  3. Photonics
    • The radiance image is converted into light/photons by an array of SLMs, which project the light-rays through an array of microlenses (the hogel optics). 
  4. Hogel Optics
    • The microlenses angularly distribute the radiance image light-rays to construct a full-parallax 3D scene that is perspective-correct for all views/viewers within the light-field display’s projection frustum.
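
To make the scale of the radiance image concrete, the sketch below (Python) estimates its size from an assumed hogel count and per-hogel angular resolution; all values are illustrative assumptions, not FoVI3D specifications.

```python
# Back-of-the-envelope radiance image sizing: spatial resolution (hogel
# grid) times angular resolution (rays per hogel). All values are
# illustrative assumptions, not FoVI3D specifications.
hogel_grid = (100, 100)   # microlenses across the image plane (assumed)
anglets    = (256, 256)   # angular samples (rays) per hogel (assumed)
bytes_px   = 3            # RGB8

pixels = hogel_grid[0] * hogel_grid[1] * anglets[0] * anglets[1]
print(f"radiance image: {pixels:.2e} pixels, "
      f"{pixels * bytes_px / 1e9:.1f} GB per update")
```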

Light-field Hogel Optics

 

In a light-field display (LfD), optical elements are used to project light emanating from a spatial light modulator (SLM) towards a viewer or viewers.  A correct, optimized optical design is critical for preserving detail within the display’s 3D aerial projection; a good optical design should not be the limiting factor in the quality of the projected 3D image.

 
[Figures 2 and 3: BlueUPSD.jpg, OpticsImage.jpg]

Designing the optics presents challenges that are atypical in the optics world: an LfD projects only a virtual object above [or below] the image plane, so there is no physical object for the eyes to focus on.  Compounding this problem, the human visual system (HVS) tends to focus on high-contrast items within the visual field.  In an LfD, the lenslet array is a high-contrast object, so the eyes tend to focus on it rather than on the virtual object being projected (as seen in Figure 2).  FoVI3D’s approach is to reduce the lenslet size such that the HVS does not see the individual lenslets.  A typical viewer can resolve about 1 arcmin (~0.017°), which computes to ~0.23mm at a distance of 0.8m.  Said another way, if the lenslet-to-lenslet pitch is less than 0.23mm, the individual lenslets will be indistinguishable to an average viewer at 0.8m.  Size reduction comes at a cost: only a finite number of pixels fit under each lenslet, and this number decreases as the lenslets get smaller, proportionally reducing the number of distinct views each lenslet can produce.  This reduces the 3D fidelity of the projected image.  This balance of lenslet size against the number of pixels (and thus the number of anglets that can be projected) is called the spatial-angular trade.
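
A minimal Python sketch of the two calculations above. The 1-arcmin acuity figure comes from the text; the SLM pixel pitch is an assumed value for illustration.

```python
import math

def resolvable_pitch_mm(distance_m, acuity_arcmin=1.0):
    """Smallest feature (mm) resolvable at distance_m for the given visual acuity."""
    return math.tan(math.radians(acuity_arcmin / 60.0)) * distance_m * 1000.0

print(f"resolvable pitch at 0.8 m: {resolvable_pitch_mm(0.8):.2f} mm")  # ~0.23 mm

# The spatial-angular trade: views (anglets) per lenslet shrink with the pitch.
pixel_pitch_mm = 0.004  # assumed SLM pixel pitch (see the photonics section)
for lenslet_mm in (1.0, 0.5, 0.23):
    n = int(lenslet_mm / pixel_pitch_mm)  # pixels that fit under one lenslet
    print(f"{lenslet_mm} mm lenslet -> {n} x {n} anglets")
```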

For these and other reasons, one of the most critical aspects of design is appropriate simulation.  Optical design programs (e.g., Zemax OpticStudio and ASAP) are very effective for the design and simulation of traditional imaging optical systems, but they do not directly model the multi-view nature of a light-field projection.  FoVI3D has therefore developed advanced, multi-perspective methods for optically simulating a light-field display.  These simulations correlate strongly with naked-eye viewing, allowing the 3D-display optical system to be optimized to the greatest extent possible.  A comparison of a simple system simulation and the projected image can be seen in Figure 3.

Photonics Arrays

 

The spatial light modulator (SLM) in a light-field display (LfD) converts the radiance image into photons and is a critical factor in the quality of the projected 3D aerial image.  Generally, smaller pixel sizes (i.e., more pixels in the same amount of space) equate to higher-quality projected images, provided the detail can be preserved when angularly distributed by the hogel optics.  As such, the goal in the design of an LfD is to produce the smallest possible pixel in the largest continuous bed of pixels possible.  For reference, a 55” 4K UHD TV has pixels that are ~0.3mm.  A pixel bed of this size would be great, but the pixel size is prohibitively large.

 
[Figure 4: LEDarray.jpg]

Fortunately, there are devices that offer smaller pixel sizes (e.g., Liquid Crystal on Silicon (LCoS), OLED, and liquid crystal microdisplays, to name a few).  The smallest pixel sizes in these devices are less than 0.004mm, nearly two orders of magnitude smaller than the pixels in a 4K TV.  These devices have the desirable small pixels but, unfortunately, are not very large.  An example of this can be seen in Figure 4.

At present, all SLMs with small pixels have similar form-factor issues.  Because they are small, they must be tiled together to create a larger display.  Because each device has a bezel (or similar feature) around its outer edge, they cannot be tiled side-by-side without introducing a large gap in pixels.  Therefore, to make a large display, the SLM’s projected image must be magnified so that the image created is larger than the SLM envelope.  Depending on the magnification method, the seams can be minimized to a small fraction of their original size (e.g., from 5mm to 0.05mm) or sometimes even eliminated.  One example of a magnified system can be seen in Figure 5.

Though this magnification is necessary, it is undesirable because it increases the effective pixel size in the system (often by 2X or more), which decreases the projected image resolution.  It is important to reduce the magnification as much as possible while still accounting for gaps between modulators and the assembly tolerances of the system. 
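
The sketch below works through this tiling arithmetic for an assumed device; all dimensions are illustrative, not measurements of a specific SLM.

```python
# Tiling/magnification trade: magnify each SLM's image enough to cover its
# own bezel so tiles abut seamlessly. All dimensions are assumed values.
active_mm = 15.0    # SLM active-area width (assumed)
bezel_mm  = 2.5     # bezel on each side of the device (assumed)
pixel_mm  = 0.004   # native pixel pitch (assumed)

envelope_mm = active_mm + 2 * bezel_mm   # full device footprint
mag = envelope_mm / active_mm            # minimum magnification to close the seam
print(f"magnification: {mag:.2f}x")
print(f"effective pixel: {pixel_mm * mag * 1000:.1f} um "
      f"(vs {pixel_mm * 1000:.1f} um native)")
# Real systems must also budget for assembly tolerances, which pushes the
# required magnification (and effective pixel size) higher still.
```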

Electronics

 

FoVI3D designs electronics for every element of the light-field display infrastructure.  This includes:

  • A specialized Multi-view Graphics Processing Unit (MvPU) used to render the light-field radiance image at reduced SWaP-C (size, weight, power, and cost) compared to a traditional CPU/GPU rendering solution.  

  • Data distribution electronics that act as an interface between graphics rendering engines and our light field display technology. 

  • Photonics drivers and elements that convert bytes to photons.

  • Anything and everything else related to moving light-field data.

[Image: PCB.png]

Data Distribution

Our system allows for rendered data input from multiple sources.  We utilize custom data-distribution elements as the interface between the array of Light-field Processing Units and the outside world, i.e., the rendering sources.

 

Light-field Rendering

In the context of the light-field display, light-field rendering is the process by which the synthetic light-field radiance image is rendered.  The light-field radiance image is a raster description of a light-field in which pixels represent the origin, direction, and intensity of light rays within the light-field.  Whereas a light-field camera captures the light-field radiance image by segmenting incoming light through a microlens array, thus preserving the spatial and angular details of rays in the form of pixels, the light-field display computes a synthetic radiance image from a 3D scene/model and projects the radiance image through a microlens array to construct a 3D aerial image.  Binocular disparity, occlusion, specular highlights, gradient shading, and other expected depth cues within the 3D aerial image are correct from the viewer’s perspective, as in the natural real-world light-field.
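
As an illustration of this raster-to-ray encoding, the sketch below maps a radiance-image pixel index to a ray origin and direction. The linear angular mapping and all parameters are assumptions for this example, not FoVI3D's actual encoding.

```python
import numpy as np

hogel_pitch_mm = 0.5   # lenslet spacing on the image plane (assumed)
anglets = 256          # angular samples per axis (assumed)
fov_deg = 90.0         # display field of view (assumed)

def pixel_to_ray(hx, hy, u, v):
    """Ray (origin, unit direction) encoded by pixel (u, v) under hogel (hx, hy)."""
    origin = np.array([hx * hogel_pitch_mm, hy * hogel_pitch_mm, 0.0])
    tx = np.radians((u / (anglets - 1) - 0.5) * fov_deg)  # angle off the normal
    ty = np.radians((v / (anglets - 1) - 0.5) * fov_deg)
    d = np.array([np.tan(tx), np.tan(ty), 1.0])           # z is the display normal
    return origin, d / np.linalg.norm(d)

origin, direction = pixel_to_ray(hx=10, hy=12, u=0, v=128)  # leftmost, near-center ray
```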

 

 
[Figures 6 and 7: Lahaina.png, ObliqueAndShear.png]

Double Frustum Rendering

Double frustum rasterization of the radiance image entails placing a virtual camera at the hogel center in world space and rendering downward; the camera is then flipped and rendered upward without clearing the depth buffer (Figure 6).  The front camera is rendered preserving the triangles farthest from the camera, and thus closest to the viewer.  When the two views are combined via their depth buffers, the hogel ‘bowtie’ frustum is created.  The ‘bowtie’ frustum represents the direction and intensity of light passing through the hogel center; thus, the rendered viewport is the hogel.  This process is repeated for every hogel for every update of the scene.
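
A minimal numpy sketch of the combine step, assuming the two hemifield views have already been rasterized elsewhere. The depth conventions here (the front pass keeps its farthest fragment, and any front-frustum fragment occludes back-frustum content) are assumptions of this sketch, not a description of FoVI3D's rasterizer.

```python
import numpy as np

H = W = 8  # tiny hogel viewport for illustration

# Cleared buffers: -inf marks 'no front fragment', +inf 'no back fragment'.
depth_front = np.full((H, W), -np.inf); color_front = np.zeros((H, W, 3))
depth_back  = np.full((H, W),  np.inf); color_back  = np.zeros((H, W, 3))

# ... rasterize the scene into both buffers here: the front pass keeps the
# *maximum* depth per pixel, the back pass the minimum, with both cameras
# sharing the hogel-centered position ...
depth_front[2:5, 2:5] = 3.0; color_front[2:5, 2:5] = (1.0, 0.0, 0.0)
depth_back[4:7, 4:7]  = 1.5; color_back[4:7, 4:7]  = (0.0, 0.0, 1.0)

# Combine: any front-frustum fragment occludes back-frustum content,
# yielding the 'bowtie' frustum view for this hogel.
has_front = np.isfinite(depth_front)
color = np.where(has_front[..., None], color_front, color_back)
```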

Oblique Slice & Dice Hogel Rendering

The Oblique Slice & Dice algorithm uses an orthographic camera projection with a shear applied to render all the pixels for a particular view direction during each pass of the geometry.  The shear matrix is adjusted for each projection ‘direction’ the light-field display can produce (Figure 7).  The term Directional Resolution is often used to describe the number of views that a display can project within a given field of view (FoV).  A display that has a 90° FoV with 256² pixels (rays) per hogel would require 256² render passes, with a -45° to +45° shear applied in two dimensions in (90/256)° increments.
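
One way to realize the per-direction shear is sketched below; the matrix convention is an assumption, using the 90° FoV and 256-view figures from the text.

```python
import numpy as np

def oblique_shear(theta_x_deg, theta_y_deg):
    """4x4 orthographic shear for one view direction (illustrative convention)."""
    m = np.eye(4)
    m[0, 2] = np.tan(np.radians(theta_x_deg))  # shear x with depth z
    m[1, 2] = np.tan(np.radians(theta_y_deg))  # shear y with depth z
    return m

fov, n = 90.0, 256                          # FoV and directional resolution (from text)
angles = np.linspace(-fov / 2, fov / 2, n)  # -45 to +45 deg in ~(90/256) deg steps
# One pass of the geometry per (theta_x, theta_y) pair: n * n passes total.
shear = oblique_shear(angles[0], angles[128])
```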

[Image: MultiViewHogelRendering.jpg]

Multi-View Hogel Rendering

Implementing multi-view rendering with a GPU requires that the host application dispatch a render pass for each viewpoint (microlens) required for the projection system.  Therefore, the cost of radiance image rendering, in terms of power and time, is a function of the number of microlenses, the number of GPUs rendering the radiance image, and the complexity of the input model.  Ultimately, the size, weight, and power requirements of a light-field display are largely a function of radiance image computation.  While it is possible to reduce the rendering load by spatially and/or temporally sampling rendered views or by tracking viewer head/eye position and orientation, these solutions can introduce additional artifacts into the light-field projection that degrade the visualization experience.
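
A back-of-the-envelope estimate of that cost, under assumed, illustrative parameters:

```python
# Brute-force multi-view rendering cost. All numbers are illustrative
# assumptions, not FoVI3D specifications.
hogels = 100 * 100       # microlens/viewpoint count (assumed)
view_px = 256 * 256      # pixels rendered per hogel view (assumed)
fps = 30                 # light-field update rate (assumed)
bytes_per_px = 4         # RGBA8

passes_per_s = hogels * fps                 # one full pass of the geometry per hogel
pixel_rate = passes_per_s * view_px * bytes_per_px
print(f"{passes_per_s:,} render passes/s, {pixel_rate / 1e9:.1f} GB/s of pixel data")
```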

Multi-View Processing Unit

Light field rendering is the process of rendering all the perspective views present in the light-field from a 3D model regardless of viewer position.  Since the projection plane consists of numerous light field micro-projectors, rendering the light-field requires rendering from the point of view of each micro-projector.  In essence, the synthetic light-field is rendered from the perspective of the light-field display projection plane.  To update the light-field projection once requires rendering a unique image from each micro-projector position and orientation.  

GPUs are powerful and effective processors for rendering large framebuffers from a single point of view.  As noted above, however, multi-view rendering with a GPU requires a render pass for each viewpoint (microlens), so the power and time cost of radiance image rendering scales with the number of microlenses, the number of GPUs, and the complexity of the input model, and the size, weight, and power requirements of a light-field display are largely a function of radiance image computation.

FoVI3D’s Multi-view Processing Unit is being designed to rasterize multiple hogels in parallel without the need for an array of off-the-shelf GPUs.  By eliminating the OS, CPUs, and other PC system components, the GPU-like MvPU becomes a more efficient light-field rendering engine.

 

Calibration

Crisp, distortion-free 3D content requires an accurate understanding of the geometric relationship between the thousands of projectors making up the light-field display.  FoVI3D determines this relationship through an automated geometric calibration process.  A high-resolution camera measures the display output through a series of test display patterns.  A fiducial-marked test rig allows measurements to be made from multiple perspectives; this provides the calibration algorithm with the information necessary to accurately describe the entire lens array geometry to a multi-view rendering system for crystal-clear 3D visualization of a virtual scene.
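
One building block of such a process might look like the following sketch, which assumes test-pattern fiducials have already been detected in the camera image and uses OpenCV's findHomography purely for illustration; FoVI3D's actual per-lens calibration model is not described here.

```python
import cv2
import numpy as np

# Known fiducial/dot positions on the lens array plane (display units)...
display_pts = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=np.float32)
# ...and where the camera observed them (pixels), e.g. from blob detection.
camera_pts = np.array([[102, 98], [411, 105], [405, 409], [99, 402]], dtype=np.float32)

# H maps display-plane coordinates into the camera image; its inverse lets
# measured spot positions be expressed in the lens array's own geometry.
H, _ = cv2.findHomography(display_pts, camera_pts, method=0)
print(H)
```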

 
[Image: Fiducial.jpg]
 

Light-field Metrology

 

FoVI3D is developing a Light-field Metrology Application (LMA) to automate the process of physical light-field display metrology in four steps:

1. Projection: A series of 3D metrology reference models is rendered and projected within the field-of-light display (FoLD) visualization volume.

2. Capture: The projected 3D references are imaged using a camera imaging system as a 3D sensor.

3. Quantization: The captured images are analyzed and spatially decomposed into the appropriate 3D voxel databases.

4. Qualification: The results of the FoLD performance experiments are summed and analyzed to produce the final light-field metrology, from which metrology metrics, reports, or rankings are generated.

 
[Image: Display12_10Degree_20170327_smallestPossibleArea_LongRainbow_B.PNG]

In recent years, a number of technology companies have produced auto-stereoscopic 3D volumetric and light-field displays, as well as prototypes of varying capabilities.  Qualifying their performance has been a subjective exercise, as there is no common basis for field-of-light display (FoLD) metrology.  Similar metrology and standards definition was paramount to the successful commercialization and adoption of 2D displays: terms and concepts such as contrast ratio and pixels-per-inch are now common expressions used to advertise and define 2D display capability.

Many 2D display metrics, such as contrast ratio, can be applied directly to the metrology of light-field displays.  However, as a FoLD produces a 3D projection within a prescribed visualization volume, determining the contrast ratio of a light-field display will require a different approach from that used to measure the contrast ratio of a 2D display.  Existing 2D image and display terms such as vignetting may require extra qualification when applied to describing the reduction of brightness by position within a 3D volume.

FoVI3D is developing a metrology system for FoLDs that consists of a two-camera stereo setup on a robotic arm.  This system is technology agnostic and mimics the human visual system.  The 3D nature of the image projected from a FoLD demands measurement techniques that account for it.  Depending on the technology, the parameters used to judge the performance of a FoLD may vary, but we believe that for any FoLD technology at least the following should be measured:

  • Resolution:  Resolution is the fundamental property that must be measured for any display.  Display resolution can be measured using two-dot testing and the modulation transfer function; we believe the two address different issues, which merits measuring both for a given display.
  • Smallest projection factor:  Any content pushed into a FoLD needs to be scaled to fit the display volume.  Depending on the technology, this scale factor may be a function of location within the display.
  • Smallest projected area:  In a 2D display, the smallest possible projection is defined as a pixel.  In a FoLD, the smallest projection may be as simple as the size of the projecting element, or it may be a function of the properties of the projected element and their behavior over distance from the element.
  • Projection accuracy:  Projection accuracy is the measure of how accurately a FoLD can project an image at a desired location in the display volume.
  • Color:  For color displays, the accuracy of color reproduction should also be measured.

The primary purpose of the LMA is to guide the physical metrology process, collect the relevant metrics, and collate the data into report form.  To measure FoLD projection properties, the LMA will project a series of 3D metrology references that can be viewed and imaged from multiple camera positions surrounding the visualization volume.  From the captured data, the light-field visualization and performance characteristics will be derived and stored in 3D voxel databases, so that each voxel in the set records the performance of the display at that region of space.  As the voxels and voxel database can contain the results of many tests, reports can be generated from the voxel information to characterize the display performance as a whole.
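
As a concrete (and purely illustrative) picture of such a voxel database, the sketch below accumulates per-test scores on a grid spanning the visualization volume; the test names, grid resolution, and scoring are assumptions of this sketch.

```python
import numpy as np

nx = ny = nz = 32  # voxel grid spanning the visualization volume (assumed)
tests = ["resolution", "projection_accuracy", "color_error"]

# One scalar score per test per voxel; NaN marks regions not yet measured.
voxel_db = {t: np.full((nx, ny, nz), np.nan) for t in tests}

def record(test, ijk, value):
    """Store a measurement result for the voxel at grid index ijk."""
    voxel_db[test][ijk] = value

record("projection_accuracy", (16, 16, 8), 0.92)

# Whole-display reports aggregate over the measured voxels:
acc = voxel_db["projection_accuracy"]
print("mean projection accuracy:", np.nanmean(acc))
```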

At FoVI3D, we are also developing tools to visualize and analyze the metrology.  The analysis tool, called the 3D metrology visualizer (3DMV), aims to help the user analyze a display or compare multiple displays for easy reference.

Situational Awareness Visualization Environment for Cognitive Understanding

 

The SAVECU program at FoVI3D focuses on developing an advanced multi-human-machine interaction environment within which information can be visualized in multiple modalities and interacted with via intuitive touch gestures.  The end goal of this program is to generate a paradigm through which knowledge and understanding can be transferred more rapidly from person to person, generating a revolution in collaborative human-computer tasks.

 
[Images: LfD.jpg, HDE.jpg]

The human visual system, proprioceptors, and cognitive processes have co-evolved to provide a rich, accurate understanding of the 3D world at arm’s length.  Historically, direct observation and manipulation in this space have been at the forefront of understanding and communication via kinesthetic, hands-on learning.  Situational awareness is the cornerstone of effective higher-level executive functions, and shared situational awareness is the cornerstone of effective group action.  We now have the capability to see patterns of movement from space, communicate in real time across the globe, relay multiple perspectives of situational awareness simultaneously, and enact change instantaneously.  Information, knowledge, and documentation continue to proliferate via novel technologies, presenting the challenge of compressing data on a global scale into the one-meter sphere of understanding humans have been operating within for millennia.  The human mind’s ability to understand the terabytes of data being collected has reached a breaking point that requires computer augmentation to package understanding for us.  The capability of computers to store, access, and visualize information is invaluable to their human users, but the interface by which humans and computers interact has not yet successfully broken out of the two-dimensional paradigm.

FoVI3D’s light-field display (LfD) aims to bridge the cognitive human-computer interface gap by presenting fully 3D, perspective-correct projections of a virtual scene to multiple viewers simultaneously.  This disruptive technology provides the ability to resolve all 3D structure within a scene and to gesture naturally within the scene itself.  True three-dimensional visualization improves our spatial understanding of the scene and, as a result, reduces the cognitive load accompanying analysis and collaboration on complex tasks.  Presenting fully three-dimensional information visualizations to users in a glasses-free environment is the next revolution in information visualization technology.

FoVI3D is developing concepts that aim to reduce the cognitive load of the Military Decision Making Process (MDMP) via light-field technology.  This concept was initiated as part of Army SBIR A16-044, Simple Cognitive Based Visualization.  The FoVI3D LfD is being designed as the centerpiece of our Situational Awareness Visualization Environment (SAVE).  SAVE will have multiple uses for training and real-time blue-force tracking.  In training, a central physics engine drives content to multiple independent displays, ranging from our LfD to VR headsets and traditional 2D displays, allowing warfighters to observe multiple perspectives of the battlespace.  SAVE will also have the real-time capability to operate with a Distributed Common Ground System-Army (DCGS-A) backed information environment.  This information system, paired with the FoVI3D cognitive-construction workflow being developed for Intelligence Preparation of the Battlespace (IPB), will significantly shorten the MDMP and provide the commander a more intuitive visualization environment within which to make decisions.  SAVE will allow commanders and staff to be aware of the same information simultaneously while providing a more streamlined method for order delegation and status tracking, allowing the commander to improve the quality and shorten the timeline of decision-making.