"How can high dynamic range (HDR) images like those captured by human vision be most effectively reproduced? Susanto Rahardja, head of the Signal Processing Department at the A*STAR Institute for Infocomm Research (I2R), hit upon the idea of simulating the human brain’s mechanism for HDR vision. “We thought about developing a dynamic display system that could naturally and interactively adapt as the user’s eyes move around a scene, just as the human visual system changes as our eyes move around a real scene,” he says.
Two years ago, Rahardja initiated a program on HDR display, bringing together researchers with a variety of backgrounds. “We held a lot of brainstorming sessions to discuss how the human visual system perceives various scenes with different levels of brightness,” says Farzam Farbiz, a senior research fellow of the Signal Processing Department. They also read many books on cerebral physiology to understand how receptors in the retina respond to light and convert the data into electric signals, which are then transmitted to retinal ganglion cells and onward through complex neural pathways to the visual cortex.
The EyeHDR system employs a commercial eye-tracker that follows the viewer’s eyes and records their reflection patterns. From this data, the system determines the exact point of the viewer’s gaze on the screen using ‘neural network’ algorithms the team has developed.
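The article does not describe the team’s network, but a gaze estimator of this general kind can be sketched as a small feed-forward network that maps eye-image features (for example, pupil-to-corneal-reflection offsets for both eyes) to screen coordinates, trained on calibration samples. Everything below — the feature layout, network size, and training data — is an illustrative assumption, not the EyeHDR implementation:

```python
import numpy as np

# Hypothetical sketch: map 4-D eye-reflection features to 2-D screen
# coordinates with a tiny two-layer network trained by gradient descent.
# The actual EyeHDR network architecture is not published.

rng = np.random.default_rng(0)

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)   # hidden layer
    return h @ W2 + b2          # predicted (x, y) on screen

def train(features, targets, hidden=16, lr=0.1, epochs=3000):
    n_in, n_out = features.shape[1], targets.shape[1]
    params = [rng.normal(0, 0.5, (n_in, hidden)), np.zeros(hidden),
              rng.normal(0, 0.5, (hidden, n_out)), np.zeros(n_out)]
    for _ in range(epochs):
        W1, b1, W2, b2 = params
        h = np.tanh(features @ W1 + b1)
        err = (h @ W2 + b2) - targets            # MSE gradient
        dW2 = h.T @ err / len(features)
        db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
        dW1 = features.T @ dh / len(features)
        db1 = dh.mean(axis=0)
        for p, g in zip(params, [dW1, db1, dW2, db2]):
            p -= lr * g
    return params

# Synthetic calibration data: features related to screen position by an
# unknown linear map the network has to recover.
features = rng.uniform(-1, 1, (200, 4))
targets = features @ rng.normal(0, 1, (4, 2))
params = train(features, targets)
residual = np.abs(forward(params, features) - targets).mean()
print(residual)   # small after training
```

In a real tracker, the calibration targets would be known on-screen points the user fixates during a setup phase, and the features would come from the eye-tracker’s camera pipeline.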
“On top of that, we also had to simulate the transitional latency of human eyes,” says Corey Manders, a senior research fellow of the Signal Processing Department. “When you move your gaze from a dark part of the room to a bright window, your eyes take a few moments to adjust before you can see clearly what’s outside,” adds Zhiyong Huang, head of the Computer Graphics and Interface Department. “This is our real natural experience, and our work is to reproduce this on-screen.”
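One simple way to model this transitional latency — a sketch only, with a guessed time constant rather than whatever EyeHDR uses — is a first-order lag: the display’s adaptation luminance chases the luminance at the gaze point, so a jump from a dark region to a bright window takes a moment to settle:

```python
import numpy as np

def adapt_step(l_adapt, l_gaze, dt, tau=0.5):
    """One time step of exponential adaptation toward the gazed
    luminance. tau (seconds) is an illustrative guess."""
    alpha = 1.0 - np.exp(-dt / tau)
    return l_adapt + alpha * (l_gaze - l_adapt)

# Gaze jumps from a dark wall (5 cd/m^2) to a bright window (2000 cd/m^2),
# updated at 60 Hz for two seconds.
l_adapt, dt = 5.0, 1.0 / 60.0
trace = []
for frame in range(120):
    l_adapt = adapt_step(l_adapt, 2000.0, dt)
    trace.append(l_adapt)

print(trace[0], trace[-1])   # still far from 2000 at first, close after 2 s
```

The same lag run in reverse (bright to dark) would reproduce the slower dark-adaptation feel; real photoreceptor adaptation is asymmetric and nonlinear, which a production system would need to model more carefully.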
The EyeHDR system calculates the average luminance of the region where the observer is gazing, and adjusts the intensity and contrast to optimal levels with a certain delay, giving the viewer the impression of a real scene. The system also automatically tone-maps the HDR images to low dynamic range (LDR) images in regions outside the viewer’s gaze. Ultimately, the EyeHDR system generates multiple images in response to the viewer’s gaze, which contrasts with previous attempts to achieve HDR through the generation of a single, perfect HDR display image.
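The gaze-keyed exposure step can be sketched minimally as follows. The square averaging window, the key value, and the Reinhard-style global operator are all assumptions for illustration — the article does not specify the EyeHDR tone-mapping pipeline, and the delayed adaptation described above is omitted here:

```python
import numpy as np

def gaze_tone_map(hdr, gaze_xy, radius=50, key=0.18):
    """Tone-map an HDR luminance image into [0, 1], with exposure keyed
    to the mean luminance around the gaze point (illustrative sketch,
    not the published EyeHDR pipeline)."""
    h, w = hdr.shape
    gx, gy = gaze_xy
    # Mean luminance of the gazed region drives the exposure setting.
    y0, y1 = max(0, gy - radius), min(h, gy + radius)
    x0, x1 = max(0, gx - radius), min(w, gx + radius)
    l_gaze = hdr[y0:y1, x0:x1].mean()
    scaled = key * hdr / l_gaze        # expose for the gazed region
    return scaled / (1.0 + scaled)     # compress into displayable range

# Synthetic scene: dark room (1 cd/m^2) with a bright window (1000 cd/m^2).
scene = np.full((200, 200), 1.0)
scene[50:150, 120:180] = 1000.0

indoor = gaze_tone_map(scene, (30, 100))    # gazing at the dark wall
window = gaze_tone_map(scene, (150, 100))   # gazing at the bright window

# Gazing at the wall keeps the room visible and blows out the window;
# gazing at the window recovers its detail and crushes the room.
print(indoor[100, 30], indoor[100, 150])
print(window[100, 30], window[100, 150])
```

Re-rendering the frame whenever the gaze point moves is what produces the “multiple images” behaviour the article describes, as opposed to computing one fixed tone-mapped image for the whole scene.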
The researchers say development of the fundamental technologies for the system is close to complete, and the EyeHDR system’s ability to display HDR images on large LDR screens has been confirmed. But before the system can become commercially available, the eye-tracking devices will need to be made more accurate, robust and easier to use. As the first step toward commercialization, the team demonstrated the EyeHDR system at SIGGRAPH Asia 2009, an annual international conference and exhibition on digital content, held in Yokohama, Japan in December last year.
Although the team’s work is currently focused on static images, they have plans for video. “We would like to apply our technologies to computer gaming and other moving images in the future. We are also looking to reduce the realism gap between real and virtual scenes in emergency response simulation, architecture and science,” Farbiz says. (source)
- Susanto Rahardja, Farzam Farbiz, Corey Manders, Huang Zhiyong, Jamie Ng Suat Ling, Ishtiaq Rasool Khan, Ong Ee Ping, and Song Peng. 2009. Eye HDR: gaze-adaptive system for displaying high-dynamic-range images. In ACM SIGGRAPH ASIA 2009 Art Gallery & Emerging Technologies: Adaptation (SIGGRAPH ASIA '09). ACM, New York, NY, USA, 68. DOI: 10.1145/1665137.1665187. (pdf, it's a one-page poster)