Monday, May 2, 2011

1st International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction

The 1st International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction (PETMEI) will be held during UbiComp 2011 in Beijing this September. The keynote speaker is Jeff B. Pelz, who has considerable experience with eye tracking during natural tasks. The call for papers is out; see details below.
"Recent developments in mobile eye tracking equipment and automated eye movement analysis point the way toward unobtrusive eye-based human-computer interfaces that are pervasively usable in everyday life. We call this new paradigm pervasive eye tracking – continuous eye monitoring and analysis 24/7. The potential applications for the ability to track and analyze eye movements anywhere and anytime call for new research to further develop and understand visual behaviour and eye-based interaction in daily life settings. PETMEI 2011 will focus on pervasive eye tracking as a trailblazer for mobile eye-based interaction and eye-based context-awareness. We provide a forum for researchers from human-computer interaction, context-aware computing, and eye tracking to discuss techniques and applications that go beyond classical eye tracking and stationary eye-based interaction. We want to stimulate and explore the creativity of these communities with respect to the implications, key research challenges, and new applications for pervasive eye tracking in ubiquitous computing. The long-term goal is to create a strong interdisciplinary research community linking these fields together and to establish the workshop as the premier forum for research on pervasive eye tracking."
Important Dates
  • Paper Submission: May 30, 2011
  • Notification of Acceptance: June 27, 2011
  • Camera-ready due: July 11, 2011
  • Workshop: September 18, 2011


Topics
Topics of interest cover computational methods, new applications and use cases, as well as eye tracking technology for pervasive eye tracking and mobile eye-based interaction. Topics of interest include, but are not limited to:


Methods
  • Computer vision tools for face, eye detection and tracking
  • Pattern recognition/machine learning for gaze and eye movement analysis
  • Integration of pervasive eye tracking and context-aware computing
  • Real-time multi-modality sensor fusion
  • Techniques for eye tracking on portable devices
  • Methods for long-term gaze and eye movement monitoring and analysis
  • Gaze modeling for development of conversational agents
  • Evaluation of context-aware systems and interfaces
  • User studies on impact of and user experience with pervasive eye tracking
  • Visual and non-visual feedback for eye-based interfaces
  • Interaction techniques including multimodal approaches
  • Analysis and interpretation of attention in HCI
  • Dual and group eye tracking
Applications
  • Mobile eye-based interaction with public displays, tabletops, and smart environments
  • Eye-based activity and context recognition
  • Pervasive healthcare, e.g. mental health monitoring or rehabilitation
  • Autism research
  • Daily life usability studies and market research
  • Mobile attentive user interfaces
  • Security and privacy for pervasive eye tracking systems
  • Eye tracking in automotive research
  • Eye tracking in multimedia research
  • Assistive systems, e.g. mobile eye-based text entry
  • Mobile eye tracking and interaction for augmented and virtual reality
  • Eye-based human-robot and human-agent interaction
  • Cognition-aware systems and user interfaces
  • Human factors in mobile eye-based interaction
  • Eye movement measures in affective computing
Technologies
  • New devices for portable and wearable eye tracking
  • Extension of existing systems for mobile interaction
See the submission details for more information. 

Friday, April 29, 2011

GazeGroup's Henrik Skovsgaard wins "Stars with brains" competition

During the Danish Research Day 2011, Henrik Skovsgaard, PhD candidate at ITU Copenhagen, won the competition "Stars with Brains" (Stjerner med hjerner). Several high-profile individuals (the stars) were present, including the Minister of Science, Princess Marie and Mayor Frank Jensen. The competition featured eight doctoral students (the brains) from universities across Denmark who presented their research in layman's terms. The audience voted for their favorite candidate via SMS while a panel of judges evaluated the participants. Later in the day Henrik was invited to an interview on the Aftenshow on national TV. Henrik's research at the IT University of Copenhagen focuses primarily on gaze-based interaction as a communication tool for people with disabilities, and he has participated in the development of the Gazegroup.org software. A big congrats to Henrik for the award, excellent public outreach and associated stardom!

PhD student Henrik Skovsgaard won the "Stars with brains". Photo: Tariq Mikkel Khan (source)

From right: Mayor Frank Jensen, HRH Princess Marie and Minister of Science Charlotte Sahl-Madsen. Photo: Tariq Mikkel Khan (source)



Wednesday, April 27, 2011

Specs for SMI GazeWear released

The specifications for the SMI GazeWear have just been announced. The head-mounted tracker takes the shape of a pair of glasses and has an impressive set of features. It offers 30Hz binocular tracking (both eyes) at 0.5 deg accuracy with automatic parallax compensation for accurate gaze estimation over distances above 40cm. The dark pupil, corneal reflection based system has a tracking range of 70° horizontal / 55° vertical. SMI has managed to squeeze an HD scene camera into the center of the frame, offering 1280x960 resolution at 30 frames per second. However, the viewing angle is slightly smaller than the tracking range at 63° horizontal and 41° vertical. The weight of the device is specified at 75 grams with dimensions of 173x58x168mm (w/h/d), and it is estimated to fit subjects above age 7.
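The parallax compensation matters because the scene camera sits a few centimetres away from the eye, so a gaze mapping calibrated at one distance drifts when the wearer fixates at another. A minimal sketch of that geometry, assuming a made-up 3 cm eye-to-camera offset (this only illustrates the size of the effect, it is not SMI's algorithm):

```python
import math

def parallax_error_deg(eye_cam_offset_m, calib_dist_m, fixation_dist_m):
    """Apparent angular error in the scene video when a gaze mapping
    calibrated at calib_dist_m is applied to a fixation at fixation_dist_m,
    for a scene camera mounted eye_cam_offset_m away from the eye."""
    angle_at_calib = math.atan2(eye_cam_offset_m, calib_dist_m)
    angle_at_fixation = math.atan2(eye_cam_offset_m, fixation_dist_m)
    return math.degrees(angle_at_fixation - angle_at_calib)

# Example: 3 cm eye-to-camera offset (assumed), calibration done at 1 m,
# subject then fixates an object at 40 cm.
print(parallax_error_deg(0.03, 1.0, 0.4))  # roughly 2.6 degrees of error
```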

SMI GazeWear
A mobile recording unit is offered which stores data on an SD card, weighs 420 grams, and provides a minimum of 40 minutes of recording time. However, a subnotebook can be used to extend recording time to around two hours.

With the new tracker SMI seriously improves its offering in the head-mounted segment, with a form factor that certainly appears more attractive for a wide range of applications. The specs stand up well against the Tobii Glasses, which have a similar form but are limited to monocular tracking and a lower-resolution scene camera. No details on availability are provided other than "coming soon", something we have heard since late December. Once they are out, the game is on.

The flyer may be downloaded as a PDF.

Tuesday, April 26, 2011

Development of a head-mounted, eye-tracking system for dogs (Williams et al, 2011)

Fiona Williams, Daniel Mills and Kun Guo at the University of Lincoln have developed a head-mounted eye tracking system for our four-legged friends. Using a special mount based on a head strap and a muzzle, the device is fitted to the dog's head, where a dichroic mirror placed in front of one of the eyes reflects the IR image back to the camera.


The device was adapted from a VisionTrack system by ISCAN/Polhemus and contains two miniature cameras, one for the eye and one for the scene, which are connected to a host workstation. When used with human subjects, such a setup provides 0.3 deg of accuracy according to the manufacturer. Williams et al. obtained an accuracy of 2-3 deg from a single dog using a special calibration method with five points located on a cross mounted at the tip of the muzzle. Using positive reinforcement the dog was gradually trained to wear the device and fixate the targets, which I'm sure wasn't an easy task.
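For context, the accuracy figure in degrees relates to linear error on the calibration cross through the standard visual-angle formula. A quick sketch (the 30 cm target distance below is an assumption for illustration, not a figure from the paper):

```python
import math

def visual_angle_deg(linear_offset_cm, viewing_distance_cm):
    """Visual angle subtended by a linear offset at a given viewing distance."""
    return math.degrees(2 * math.atan(linear_offset_cm / (2 * viewing_distance_cm)))

# Example: a 1.3 cm error on a target 30 cm from the eye (assumed distance)
print(visual_angle_deg(1.3, 30))  # ~2.5 degrees
```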


Abstract:
Growing interest in canine cognition and visual perception has promoted research into the allocation of visual attention during free-viewing tasks in the dog. The techniques currently available to study this (i.e. preferential looking) have, however, lacked spatial accuracy, permitting only gross judgements of the location of the dog’s point of gaze and are limited to a laboratory setting. Here we describe a mobile, head-mounted, video-based, eye-tracking system and a procedure for achieving standardised calibration allowing an output with accuracy of 2–3◦. The setup allows free movement of dogs; in addition the procedure does not involve extensive training skills, and is completely non-invasive. This apparatus has the potential to allow the study of gaze patterns in a variety of research applications and could enhance the study of areas such as canine vision, cognition and social interactions.

  • Fiona J. Williams, Daniel S. Mills, Kun Guo, Development of a head-mounted, eye-tracking system for dogs, Journal of Neuroscience Methods, Volume 194, Issue 2, 15 January 2011, Pages 259-265, ISSN 0165-0270, DOI: 10.1016/j.jneumeth.2010.10.022. (available from ScienceDirect)

Wednesday, April 20, 2011

Fraunhofer CMOS-OLED Headmounted display with integrated eye tracker

"The Fraunhofer IPMS works on the integration of sensors and microdisplays on CMOS backplane for several years now. For example the researchers have developed a bidirectional microdisplay, which could be used in Head-Mounted Displays (HMD) for gaze triggered augmented-reality (AR) aplications. The chips contain both an active OLED matrix and therein integrated photodetectors. The combination of both matrixes in one chip is an essential possibility for system integrators to design smaller, lightweight and portable systems with both functionalities." (Press release)
"Rigo Herold, PhD student at Fraunhofer IPMS and participant of the development team, declares: This unique device enables the design of a new generation of small AR-HMDs with advanced functionality. The OLED microdisplay based Eyetracking HMD enables the user on the one hand to overlay the view of the real world with virtual contents, for example to watch videos at jog. And on the other hand the user can select the next video triggered only by his gaze without using his hands." (Press release)

Sensor integrates both OLED display and CMOS imaging sensor. 

Rigo Herold will present the system at the SID 2011 exhibitor forum on May 17, 2011 at 4:00 p.m. ("Eyecatcher: The Bi-Directional OLED Microdisplay"), with the following specs:
  • Monochrome 
  • Special Eyetracking-Algorithm for HMDs based on bidirectional microdisplays
  • Front brightness: > 1500 cd/m²

The poster was presented at the ISSCC 2011 Industry Demonstration Session (IDS).

In addition there is a paper titled "Bidirectional OLED microdisplay: Combining display and image sensor functionality into a monolithic CMOS chip", published with the following abstract:

"Microdisplays based on organic light-emitting diodes (OLEDs) achieve high optical performance with excellent contrast ratio and large dynamic range at low power consumption. The direct light emission from the OLED enables small devices without additional backlight, making them suitable for mobile near-to-eye (NTE) applications such as viewfinders or head-mounted displays (HMD). In these applications the microdisplay acts typically as a purely unidirectional output device [1–3]. With the integration of an additional image sensor, the functionality of the microdisplay can be extended to a bidirectional optical input/output device. The major aim is the implementation of eye-tracking capabilities in see-through HMD applications to achieve gaze-based human-display-interaction." Available at IEEE Xplore

Monday, April 18, 2011

AutomotiveUI'11 - 3rd International Conference On Automotive User Interfaces and Interactive Vehicular Applications

"In-car interactive technology is becoming ubiquitous and cars are increasingly connected to the outside world. Drivers and passengers use this technology because it provides valuable services. Some technology, such as collision warning systems, assists drivers in performing their primary in-vehicle task (driving). Other technology provides information on myriad subjects or offers entertainment to the driver and passengers.

The challenge that arises from the proliferation of in-car devices is that they may distract drivers from the primary task of driving, with possibly disastrous results. Thus, one of the major goals of this conference is to explore ways in which in-car user interfaces can be designed so as to lessen driver distraction while still enabling valuable services. This is challenging, especially given that the design of in-car devices, which was historically the responsibility of car manufacturers and their parts suppliers, is now a responsibility shared among a large and ever-changing group of parties. These parties include car OEMs, Tier 1 and Tier 2 suppliers of factory-installed electronics, as well as the manufacturers of hardware and software that is brought into the car, for example on personal navigation devices, smartphones, and tablets.

As we consider driving safety, our focus in designing in-car user interfaces should not be purely on eliminating distractions. In-car user interfaces also offer the opportunity to improve the driver's performance, for example by increasing her awareness of upcoming hazards. They can also enhance the experience of all kinds of passengers in the car. To this end, a further goal of AutomotiveUI 2011 is the exploration of in-car interfaces that address the varying needs of different types of users (including disabled drivers, elderly drivers or passengers, and the users of rear-seat entertainment systems). Overall our goal is to advance the state of the art in vehicular user experiences, in order to make cars both safer and more enjoyable places to spend time." http://www.auto-ui.org



Topics include, but are not limited to:
* new concepts for in-car user interfaces
* multimodal in-car user interfaces
* in-car speech and audio user interfaces
* text input and output while driving
* multimedia interfaces for in-car entertainment
* evaluation and benchmarking of in-car user interfaces
* assistive technology in the vehicular context
* methods and tools for automotive user interface research
* development methods and tools for automotive user interfaces
* automotive user interface frameworks and toolkits
* detecting and estimating user intentions
* detecting/measuring driver distraction and estimating cognitive load
* biometrics and physiological sensors as a user interface component
* sensors and context for interactive experiences in the car
* user interfaces for information access (search, browsing, etc.) while driving
* user interfaces for navigation or route guidance
* applications and user interfaces for inter-vehicle communication
* in-car gaming and entertainment
* different user groups and user group characteristics
* in-situ studies of automotive user interface approaches
* general automotive user experience research
* driving safety research using real vehicles and simulators
* subliminal techniques for workload reduction



SUBMISSIONS
AutomotiveUI 2011 invites submissions in the following categories:

* Papers (Submission Deadline: July 11th, 2011)
* Workshops (Submission Deadline: July 25th, 2011)
* Posters & Interactive Demos (Submission Deadline: Oct. 10th, 2011)
* Industrial Showcase (Submission Deadline:  Oct. 10th, 2011)

For more information on the submission categories please check http://www.auto-ui.org/11/submit.php

Thursday, April 7, 2011

FaceAPI signs licence deal with Chinese SuperD

Remember the glasses-free 3D displays demonstrated earlier this year at CES2011? Seeing Machines recently announced a production licence deal with Shenzhen Super Perfect Optics Limited (SuperD) of China. The two companies have been working together for the last 12 months, and the first consumer products are expected to be available during the summer. The ambition is big: millions of devices, including laptops, monitors and all-in-one PCs, from big-name manufacturers. An interesting development, as they know eye tracking too; please make that happen. Press release available here.

SMI iView X SDK 3.0 released

SMI has just released version 3.0 of their Software Development Kit (SDK), which contains low- and high-level functions, documentation and sample code (MATLAB, E-Prime, C/C++, Python and C#). The SDK supports Windows XP, Vista and 7 (both 32- and 64-bit) and is available for free to existing customers. Good news for developers, especially the 64-bit version for Windows 7. Releasing extensive and well-documented SDKs for free is a trend that has by now been adopted by most manufacturers; it just makes perfect sense.

Monday, March 14, 2011

Mirametrix acquired by TandemLaunch Technologies

MONTREAL (Quebec), February 18, 2011 – TandemLaunch Technologies today announced that it has completed the acquisition of all assets and staff of Vancouver-based Mirametrix Research Inc., a privately held provider of gaze tracking technology. Mirametrix is a technology company offering affordable gaze tracking systems for application in vision research and content analytics. The technology acquired through Mirametrix complements TandemLaunch’s consumer gaze tracking portfolio. Terms of the transaction were not disclosed.
“Mirametrix is an innovative small company that has successfully introduced gaze tracking solutions for cost-competitive applications. TandemLaunch offers the resources to scale the Mirametrix business and ultimately bring gaze tracking into the consumer market” said Helge Seetzen, CEO of TandemLaunch.
The website has been updated, revealing the new executive team; the product offering appears to remain the same for the time being. Helge Seetzen (blog) is an entrepreneur who sold his previous company, BrightSide, to Dolby Laboratories for ~$30 million, which he invested into the TandemLaunch incubator. The incubator focuses on the early stages of technology development, with the aim of bringing in industry partners to acquire the technology for further commercialization (interview).

Congrats to Craig Hennessey, founder of Mirametrix, who is now well on his way to commercializing his PhD research (1, 2) on remote eye tracking based on a single-camera setup with bright pupil and corneal reflections. It will be interesting to see how the additional resources backing the operation will affect the industry, and what role an affordable but perhaps less accurate system has to play. The latter can be improved upon, but what about the market: will an affordable system expand existing segments or create new ones? Off the top of my head, yes. Time will tell.

Thursday, March 3, 2011

CUShop concept @ Clemson University

"Clemson University architecture students are working with the packaging science department in designing an eye tracking lab to be a fully immersive grocery store shopping experience. This concept explores the entrance into the lab through a vestibule space created by two sliding glass doors, mimicking the space found in many grocery stores."





Wednesday, March 2, 2011

Accurate eye center localisation for low-cost eye tracking

Fabian Timm from the Institute for Neuro- and Bioinformatics at the University of Lübeck demonstrates a "novel approach for accurate localisation of the eye centres (pupil) in real time. In contrast to other approaches, we neither employ any kind of machine learning nor a model scheme - we just compute dot products! Our method computes very accurate estimations and can therefore be used in real world applications such as eye (gaze) tracking." Sounds great; any ideas on gaze estimation and accuracy?
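For the curious, the dot-product idea can be sketched in a few lines of NumPy: for every candidate centre, take the squared dot product between the normalized image gradients and the normalized displacement vectors from the candidate to each pixel, and keep the centre that maximizes the mean. The brute-force version below is my own simplified reading of the approach (no intensity prior, no post-processing, OpenCV assumed for loading and gradients) and is only practical on small eye patches:

```python
import numpy as np
import cv2  # used only to compute gradients and load a grayscale eye patch

def eye_center_by_gradients(eye_gray):
    """Return (x, y) of the pixel maximizing the mean squared dot product
    between normalized gradients and normalized displacement vectors
    (simplified dot-product objective, brute force over all candidates)."""
    eye = eye_gray.astype(np.float64)
    gx = cv2.Sobel(eye, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(eye, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    mask = mag > np.mean(mag)                       # keep only strong gradients
    gx = np.where(mask, gx / (mag + 1e-9), 0.0)
    gy = np.where(mask, gy / (mag + 1e-9), 0.0)

    h, w = eye.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best_score, best_c = -1.0, (0, 0)
    for cy in range(h):
        for cx in range(w):
            dx, dy = xs - cx, ys - cy               # displacement from candidate centre
            norm = np.hypot(dx, dy) + 1e-9
            dot = (dx / norm) * gx + (dy / norm) * gy
            score = np.mean(np.maximum(dot, 0) ** 2)
            if score > best_score:
                best_score, best_c = score, (cx, cy)
    return best_c

# Usage: pass a small cropped grayscale eye region, e.g.
# center = eye_center_by_gradients(cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE))
```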

Head-mounted eye-tracking application for driving

Nicolas Schneider has modified the ITU Gaze Tracker for eye tracking in an automotive setting as part of his master's thesis. The modification adds a scene camera, along with software that calibrates it and integrates it into the platform. The project was carried out at the Schepens Eye Research Institute at Harvard, and there is a good chance it will be released as open source. A fine piece of work and an awesome addition to the framework; we're impressed by the results. More info to follow; for now, enjoy this video.
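For head-mounted setups like this, a common way to tie the eye camera to the scene camera is to have the subject fixate a handful of known points and fit a low-order polynomial regression from pupil coordinates to scene-camera pixels. A hedged sketch of that generic mapping (not necessarily what Schneider's code does):

```python
import numpy as np

def fit_gaze_mapping(pupil_pts, scene_pts):
    """Fit a second-order polynomial mapping from pupil (eye-camera)
    coordinates to scene-camera coordinates using calibration fixations.
    pupil_pts, scene_pts: arrays of shape (N, 2), N >= 6."""
    px, py = pupil_pts[:, 0], pupil_pts[:, 1]
    # Design matrix with constant, linear, cross and quadratic terms.
    A = np.column_stack([np.ones_like(px), px, py, px * py, px ** 2, py ** 2])
    coef_x, *_ = np.linalg.lstsq(A, scene_pts[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, scene_pts[:, 1], rcond=None)
    return coef_x, coef_y

def map_gaze(pupil_xy, coef_x, coef_y):
    """Map one pupil position into scene-camera pixel coordinates."""
    x, y = pupil_xy
    feats = np.array([1.0, x, y, x * y, x ** 2, y ** 2])
    return float(feats @ coef_x), float(feats @ coef_y)
```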



  • Nicolas Schneider, Peter Bex, Erhardt Barth, and Michael Dorr. 2011. An open-source low-cost eye-tracking system for portable real-time and offline tracking. In Proceedings of the 1st Conference on Novel Gaze-Controlled Applications (NGCA '11). ACM, New York, NY, USA, Article 8, 4 pages. (Full text: PDF Online)


Wednesday, February 16, 2011

A self-calibrating, camera-based eye tracker for the recording of rodent eye movements (Zoccolan et al, 2010)

Came across an interesting methods article in Frontiers in Neuroscience, published in late November last year, which describes the development of a fully automated eye tracking system that is calibrated without requiring co-operation from the subject. This is done by fixing the location of the eye and moving the camera to establish a geometric model (also see Stahl et al., 2000, 2004). Apparently the authors first attempted to use a commercial EyeLink II device but found it unsuitable for rodent eye tracking due to its thresholding implementation, the illumination conditions, and corneal reflection tracking failing when the rodent was chewing. So the authors built their own solution using a Prosilica camera and a set of algorithms (depicted below). Read the paper for implementation details. I find it to be a wonderful piece of work, different from human eye tracking for sure but still relevant and fascinating.
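The self-calibration trick (following Stahl et al.) is, roughly, to swing the camera by a known angle about the stationary eye, measure how far the pupil image shifts, and from that recover the pupil's radius of rotation; the gaze angle then follows from the pupil's offset relative to the centre of rotation. A one-axis sketch of the arithmetic, ignoring corneal refraction and using made-up numbers:

```python
import math

def radius_of_rotation(pupil_shift_mm, camera_swing_deg):
    """Estimate Rp, the pupil's radius of rotation, from the apparent pupil
    shift measured while the camera swings by a known angle about the eye."""
    return pupil_shift_mm / math.sin(math.radians(camera_swing_deg))

def gaze_angle_deg(pupil_pos_mm, rotation_center_mm, rp_mm):
    """Horizontal gaze angle from the pupil's offset relative to the
    centre of rotation, given the calibrated radius of rotation."""
    return math.degrees(math.asin((pupil_pos_mm - rotation_center_mm) / rp_mm))

# Example: a 10 degree camera swing moves the pupil image by 0.20 mm,
# giving Rp of about 1.15 mm; a 0.10 mm pupil offset then corresponds
# to roughly 5 degrees of gaze.
rp = radius_of_rotation(0.20, 10.0)
print(rp, gaze_angle_deg(0.10, 0.0, rp))
```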

Schematic diagram of the eye-tracking system

Illustration of the algorithm to track the eye’s pupil and corneal reflection spot.

Eye coordinate system and measurements


Horizontal and vertical alignment of the eye with the center of the camera’s sensor.


Abstract:

"Much of neurophysiology and vision science relies on careful measurement of a human or animal subject’s gaze direction. Video-based eye trackers have emerged as an especially popular option for gaze tracking, because they are easy to use and are completely non-invasive. However, video eye trackers typically require a calibration procedure in which the subject must look at a series of points at known gaze angles. While it is possible to rely on innate orienting behaviors for calibration in some non-human species, other species, such as rodents, do not reliably saccade to visual targets, making this form of calibration impossible. To overcome this problem, we developed a fully automated infrared video eye-tracking system that is able to quickly and accurately calibrate itself without requiring co-operation from the subject. This technique relies on the optical geometry of the cornea and uses computer-controlled motorized stages to rapidly estimate the geometry of the eye relative to the camera. The accuracy and precision of our system was carefully measured using an artificial eye, and its capability to monitor the gaze of rodents was verified by tracking spontaneous saccades and evoked oculomotor reflexes in head-fixed rats (in both cases, we obtained measurements that are consistent with those found in the literature). Overall, given its fully automated nature and its intrinsic robustness against operator errors, we believe that our eye-tracking system enhances the utility of existing approaches to gaze-tracking in rodents and represents a valid tool for rodent vision studies."


  • Zoccolan DF, Graham BJ, Cox DD (2010) A self-calibrating, camera-based eye tracker for the recording of rodent eye movements. Frontiers in Neuroscience. doi:10.3389/fnins.2010.00193 [link]

Thursday, February 3, 2011

EyeTech Digital Systems

Arizona-based EyeTech Digital Systems offers several interesting eye trackers; the new V1 caught my attention with its extended track-box of 25 x 18 x 50cm. The rather large depth range is provided through a custom auto-focus mechanism developed in cooperation with the Brigham Young University Dept. of Mechanical Engineering. This makes the device particularly suitable for larger displays such as public displays and digital signage. Still, I'd imagine the calibration procedure remains; ideally you'd want to walk up and interact or collect data automatically, without any wizards or intervention. In any case, a larger track-box is always welcome and it certainly opens up new opportunities; EyeTech's V1 offers 20cm more depth than most.





Wednesday, February 2, 2011

Spring eye tracker in action

More videos of the Spring eye tracker are available at the company website.

Thursday, January 13, 2011

Taiwanese Utechzone, the Spring gaze interaction system

Utechzone, a Taiwanese company, has launched the Spring gaze interaction system for individuals with ALS or similar conditions. It provides basic functionality including text entry, email, web browsing, media playback etc. in a format reminiscent of the MyTobii software. The tracker can be mounted in various ways, including on wheelchairs and desks, using the available accessories. A nice feature is the built-in TV tuner which is accessible through the gaze interface. The performance of the actual tracking system and the accuracy of the gaze estimation are unknown; the interface is only specified as a 7x4 grid. The track-box is specified as 17cm x 10cm x 15cm with a working range of 55-70 cm.

The system runs Windows XP on a computer equipped with an Intel dual-core CPU, 2GB RAM, a 500GB hard drive, and a 17" monitor.
Supported languages are Traditional Chinese, Simplified Chinese, English and Japanese, all languages with pretty big markets. The price is unknown, but probably less than a Tobii. Get the product brochure (pdf).



Call for papers: UBICOMM 2011

"The goal of the International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies, UBICOMM 2011, is to bring together researchers from the academia and practitioners from the industry in order to address fundamentals of ubiquitous systems and the new applications related to them. The conference will provide a forum where researchers shall be able to present recent research results and new research problems and directions related to them. The conference seeks contributions presenting novel research in all aspects of ubiquitous techniques and technologies applied to advanced mobile applications."   All tracks/topics are open to both research and industry contributions. More info.
Tracks:
  • Fundamentals
  • Mobility
  • Information Ubiquity
  • Ubiquitous Multimedia Systems and Processing
  • Wireless Technologies
  • Web Services
  • Ubiquitous networks
  • Ubiquitous devices and operative systems
  • Ubiquitous mobile services and protocols
  • Ubiquitous software and security
  • Collaborative ubiquitous systems
  • User and applications
Deadlines:
  • Submission (full paper) June 20, 2011
  • Notification July 31, 2011
  • Registration August 15, 2011
  • Camera ready August 20, 2011

Face tracking for 3D displays without glasses.

A number of manufacturers and research institutes have presented 3D display systems that utilize real-time face and eye-region tracking in order to adjust the stereoscopic display in real time. This means that viewers don't have to wear any funky glasses to see the 3D content, which has been a limiting factor for these displays. Some prototypes and OEM solutions were introduced at CEBIT last year. At CES2011 Toshiba presented a 3D-equipped laptop that uses the built-in webcam to track the position of the user's face (it appears to be built around Seeing Machines' faceAPI). It's an interesting development; we're seeing more and more computer vision applications in the consumer space. Recently Microsoft announced that they've sold 8 million Kinect devices in the first 60 days, while Sony shipped 4.1 million PlayStation Move controllers in the first two months.


3D displays sans glasses at CEBIT2010


Toshiba's 3D laptop sans glasses at CES2011.

Obviously, these systems differ from eye tracking systems but still share many concepts. So what's the limiting factor for consumer eye tracking then? 1) Lack of applications: there isn't a clear, compelling reason for most consumers to get an eye tracker. It has to provide a new experience with a clear advantage and value, doing something faster, easier or in a way that couldn't be done before. 2) Expensive hardware: these are professional devices manufactured in low volume using high-quality, expensive components. 3) No guarantees: it doesn't work for all customers in all environments. How do you sell something that only works under specific conditions for, say, 90% of the customers?

Eye HDR: gaze-adaptive system for displaying high-dynamic-range images (Rahardja et al)

"How can high dynamic range (HDR) images like those captured by human vision be most effectively reproduced? Susanto Rahardja, head of the Signal Processing Department at the A*STAR Institute for Infocomm Research (I2R), hit upon the idea of simulating the human brain’s mechanism for HDR vision. “We thought about developing a dynamic display system that could naturally and interactively adapt as the user’s eyes move around a scene, just as the human visual system changes as our eyes move around a real scene,” he says.
Two years ago, Rahardja initiated a program on HDR display bringing together researchers with a variety of backgrounds. “We held a lot of brainstorming sessions to discuss how the human visual system perceives various scenes with different levels of brightness,” says Farzam Farbiz, a senior research fellow of the Signal Processing Department. They also read many books on cerebral physiology to understand how receptors in the retina respond to light and convert the data into electric signals, which are then transmitted to retinal ganglion cells and other neural cells through complex pathways in the visual cortex.
The EyeHDR system employs a commercial eye-tracker device that follows the viewer’s eyes and records the eyes’ reflection patterns. Using this data, the system calculates and determines the exact point of the viewer’s gaze on the screen using special ‘neural network’ algorithms the team has developed.


“On top of that, we also had to simulate the transitional latency of human eyes,” says Corey Manders, a senior research fellow of the Signal Processing Department. “When you move your gaze from a dark part of the room to a bright window, our eyes take a few moments to adjust before we can see clearly what’s outside,” adds Zhiyong Huang, head of the Computer Graphics and Interface Department. “This is our real natural experience, and our work is to reproduce this on-screen.”

The EyeHDR system calculates the average luminance of the region where the observer is gazing, and adjusts the intensity and contrast to optimal levels with a certain delay, giving the viewer the impression of a real scene. The system also automatically tone-maps the HDR images to low dynamic range (LDR) images in regions outside of the viewer’s gaze. Ultimately, the EyeHDR system generates multiple images in response to the viewer’s gaze, which contrasts with previous attempts to achieve HDR through the generation of a single, perfect HDR display image.


The researchers say development of the fundamental technologies for the system is close to complete, and the EyeHDR system’s ability to display HDR images on large LDR screens has been confirmed. But before the system can become commercially available, the eye-tracking devices will need to be made more accurate, robust and easier to use. As the first step toward commercialization, the team demonstrated the EyeHDR system at SIGGRAPH Asia 2009, an annual international conference and exhibition on digital content, held in Yokohama, Japan in December last year.
Although the team’s work is currently focused on static images, they have plans for video. “We would like to apply our technologies for computer gaming and other moving images in the future. We are also looking to reduce the realism gap between real and virtual scenes in emergency response simulation, architecture and science,” Farbiz says". (source)
  • Susanto Rahardja, Farzam Farbiz, Corey Manders, Huang Zhiyong, Jamie Ng Suat Ling, Ishtiaq Rasool Khan, Ong Ee Ping, and Song Peng. 2009. Eye HDR: gaze-adaptive system for displaying high-dynamic-range images. In ACM SIGGRAPH ASIA 2009 Art Gallery & Emerging Technologies: Adaptation (SIGGRAPH ASIA '09). ACM, New York, NY, USA, 68-68. DOI=10.1145/1665137.1665187. (pdf, it's a one page poster)
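The behaviour described above, averaging the luminance around the gaze point and easing the exposure toward it with a short delay before tone-mapping, can be approximated in a few lines. This is a toy sketch under my own assumptions (a fixed window size, exponential adaptation, a simple Reinhard curve), not the team's neural-network pipeline:

```python
import numpy as np

class GazeAdaptiveToneMapper:
    """Toy gaze-adaptive exposure: adapts to the mean luminance of a window
    around the gaze point with an exponential delay, then tone-maps globally."""
    def __init__(self, window=64, adaptation_rate=0.1):
        self.window = window
        self.rate = adaptation_rate        # fraction of the gap closed each frame
        self.adapted_lum = None

    def tonemap(self, hdr_rgb, gaze_xy):
        # hdr_rgb: float array (H, W, 3) in linear units; gaze_xy: integer pixel coords.
        lum = hdr_rgb @ np.array([0.2126, 0.7152, 0.0722])
        x, y = gaze_xy
        r = self.window // 2
        region = lum[max(0, y - r): y + r, max(0, x - r): x + r]
        target = float(np.mean(region)) + 1e-6
        if self.adapted_lum is None:
            self.adapted_lum = target
        # Ease toward the gazed region's luminance (simulated adaptation latency).
        self.adapted_lum += self.rate * (target - self.adapted_lum)
        scaled = lum / self.adapted_lum
        ldr_lum = scaled / (1.0 + scaled)              # simple Reinhard curve
        ratio = ldr_lum / np.maximum(lum, 1e-6)
        return np.clip(hdr_rgb * ratio[..., None], 0.0, 1.0)
```

Feeding it a fresh gaze point from the tracker every frame produces the gradual brightening and darkening the article describes as the viewer's eyes move between dark and bright regions.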

Monday, January 10, 2011

Call for papers: ACIVS 2011

Acivs 2011 is a conference focusing on techniques for building adaptive, intelligent, safe and secure imaging systems. Acivs 2011 consists of four days of lecture sessions, featuring both regular (25 min) and invited presentations, as well as poster sessions. The conference will take place in Het Pand, Ghent, Belgium on Aug. 22-25, 2011.

Topics

  • Vision systems, including multi-camera systems
  • Image and Video Processing (linear/non-linear filtering and enhancement, restoration, segmentation, wavelets and multiresolution, Markovian techniques, color processing, modeling, analysis, interpolation and spatial transforms, motion, fractals and multifractals, structure from motion, information geometry)
  • Pattern Analysis (shape analysis, data and image fusion, pattern matching, neural nets, learning, grammatical techniques) and Content-Based Image Retrieval
  • Remote Sensing (techniques for filtering, enhancing, compressing, displaying and analyzing optical, infrared, radar, multi- and hyperspectral airborne and spaceborne images)
  • Still Image and Video Coding and Transmission (still image/video coding, model-based coding, synthetic/natural hybrid coding, quality metrics, image and video protection, image and video databases, image search and sorting, video indexing, multimedia applications)
  • System Architecture and Performance Evaluation (implementation of algorithms, GPU implementation, benchmarking, evaluation criteria, algorithmic evaluation)
Proceedings
The proceedings of Acivs 2011 will be published by Springer Verlag in the Lecture Notes in Computer Science series. LNCS is published, in parallel to the printed books, in full-text electronic form via Springer Verlag's internet platform.

Deadlines
  • February 11, 2011: Full paper submission
  • April 15, 2011: Notification of acceptance
  • May 15, 2011: Camera-ready papers due
  • May 15, 2011: Registration deadline for authors of accepted papers
  • June 30, 2011: Early registration deadline
  • Aug. 22-25, 2011: Acivs 2011