Thursday, January 13, 2011

Call for papers: UBICOMM 2011

"The goal of the International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies, UBICOMM 2011, is to bring together researchers from the academia and practitioners from the industry in order to address fundamentals of ubiquitous systems and the new applications related to them. The conference will provide a forum where researchers shall be able to present recent research results and new research problems and directions related to them. The conference seeks contributions presenting novel research in all aspects of ubiquitous techniques and technologies applied to advanced mobile applications."   All tracks/topics are open to both research and industry contributions. More info.
Tracks:
  • Fundamentals
  • Mobility
  • Information Ubiquity
  • Ubiquitous Multimedia Systems and Processing
  • Wireless Technologies
  • Web Services
  • Ubiquitous Networks
  • Ubiquitous Devices and Operating Systems
  • Ubiquitous Mobile Services and Protocols
  • Ubiquitous Software and Security
  • Collaborative Ubiquitous Systems
  • Users and Applications
Deadlines:
  • Submission (full paper) June 20, 2011
  • Notification July 31, 2011
  • Registration August 15, 2011
  • Camera ready August 20, 2011

Face tracking for 3D displays without glasses.

A number of manufacturers and research institutes have presented 3D display systems that use real-time face and eye-region tracking to adjust the stereoscopic display on the fly. This means viewers don't have to wear any funky glasses to see the 3D content, which has been a limiting factor for these displays. Some prototypes and OEM solutions were introduced at CeBIT last year. At CES 2011, Toshiba presented a 3D-equipped laptop that uses the built-in webcam to track the position of the user's face (it appears to be built around Seeing Machines' faceAPI). It's an interesting development; we're seeing more and more computer vision applications in the consumer space. Microsoft recently announced that it sold 8 million Kinect devices in the first 60 days, while Sony shipped 4.1 million PlayStation Move controllers in the first two months.
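As a rough illustration of the tracking side, here is a minimal sketch using OpenCV's stock Haar cascade face detector, not Toshiba's or Seeing Machines' pipeline; adjust_view_zones() is a hypothetical stand-in for whatever steering API an autostereoscopic display would actually expose.

```python
# Minimal sketch of webcam-based face tracking for an autostereoscopic
# display (OpenCV's stock Haar cascade; not Toshiba's or Seeing
# Machines' pipeline). adjust_view_zones() is a hypothetical stand-in
# for the display's steering API.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def adjust_view_zones(x_norm):
    """Hypothetical display hook: steer the left/right view zones
    toward the viewer's normalized horizontal position (0..1)."""
    print(f"steering view zones to x = {x_norm:.2f}")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces):
        # Track the largest detected face; its center approximates
        # the viewer's eye position.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        adjust_view_zones((x + w / 2) / frame.shape[1])
    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```

A real system would also estimate the viewing distance (for example from the detected face size) to steer the parallax barrier or lenticular optics correctly, but the tracking loop is essentially this.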


3D displays sans glasses at CeBIT 2010.


Toshiba's 3D laptop sans glasses at CES 2011.

Obviously, these systems differ from eye tracking systems but still share many concepts. So what's the limiting factor for consumer eye tracking, then?
  1. Lack of applications. There isn't a clear, compelling reason for most consumers to get an eye tracker. It has to provide a new experience with a clear advantage and value: doing something faster, easier, or in a way that couldn't be done before.
  2. Expensive hardware. These are professional devices manufactured in low volume using high-quality, expensive components.
  3. No guarantees. The technology doesn't work for all customers in all environments. How do you sell something that only works under specific conditions for, say, 90% of customers?

Eye HDR: gaze-adaptive system for displaying high-dynamic-range images (Rahardja et al)

"How can high dynamic range (HDR) images like those captured by human vision be most effectively reproduced? Susanto Rahardja, head of the Signal Processing Department at the A*STAR Institute for Infocomm Research (I2R), hit upon the idea of simulating the human brain’s mechanism for HDR vision. “We thought about developing a dynamic display system that could naturally and interactively adapt as the user’s eyes move around a scene, just as the human visual system changes as our eyes move around a real scene,” he says.
Two years ago, Rahardja initiated a program on HDR display, bringing together researchers with a variety of backgrounds. “We held a lot of brainstorming sessions to discuss how the human visual system perceives various scenes with different levels of brightness,” says Farzam Farbiz, a senior research fellow of the Signal Processing Department. They also read many books on cerebral physiology to understand how receptors in the retina respond to light and convert the data into electric signals, which are then transmitted to retinal ganglion cells and other neural cells through complex pathways in the visual cortex.
The EyeHDR system employs a commercial eye-tracker device that follows the viewer’s eyes and records the eyes’ reflection patterns. Using this data, the system calculates and determines the exact point of the viewer’s gaze on the screen using special ‘neural network’ algorithms the team has developed.


“On top of that, we also had to simulate the transitional latency of human eyes,” says Corey Manders, a senior research fellow of the Signal Processing Department. “When you move your gaze from a dark part of the room to a bright window, our eyes take a few moments to adjust before we can see clearly what’s outside,” adds Zhiyong Huang, head of the Computer Graphics and Interface Department. “This is our real natural experience, and our work is to reproduce this on-screen.”

The EyeHDR system calculates the average luminance of the region where the observer is gazing, and adjusts the intensity and contrast to optimal levels with a certain delay, giving the viewer the impression of a real scene. The system also automatically tone-maps the HDR images to low dynamic range (LDR) images in regions outside of the viewer’s gaze. Ultimately, the EyeHDR system generates multiple images in response to the viewer’s gaze, which contrasts with previous attempts to achieve HDR through the generation of a single, perfect HDR display image.


The researchers say development of the fundamental technologies for the system is close to complete, and the EyeHDR system’s ability to display HDR images on large LDR screens has been confirmed. But before the system can become commercially available, the eye-tracking devices will need to be made more accurate, robust and easier to use. As the first step toward commercialization, the team demonstrated the EyeHDR system at SIGGRAPH Asia 2009, an annual international conference and exhibition on digital content, held in Yokohama, Japan in December last year.
Although the team’s work is currently focused on static images, they have plans for video. “We would like to apply our technologies to computer gaming and other moving images in the future. We are also looking to reduce the realism gap between real and virtual scenes in emergency response simulation, architecture and science,” Farbiz says." (source)
  • Susanto Rahardja, Farzam Farbiz, Corey Manders, Huang Zhiyong, Jamie Ng Suat Ling, Ishtiaq Rasool Khan, Ong Ee Ping, and Song Peng. 2009. Eye HDR: gaze-adaptive system for displaying high-dynamic-range images. In ACM SIGGRAPH ASIA 2009 Art Gallery & Emerging Technologies: Adaptation (SIGGRAPH ASIA '09). ACM, New York, NY, USA, 68-68. DOI=10.1145/1665137.1665187. (pdf, it's a one page poster)
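To make the mechanism concrete, here is a minimal sketch in Python/NumPy, not the team's code: average the luminance in a window around the gaze point, ease the adaptation level toward it over time to mimic the eye's latency, and tone-map each frame relative to that level. The window size, time constant, and the Reinhard-style operator are my assumptions, not EyeHDR's.

```python
# Minimal sketch of gaze-adaptive HDR display (not the EyeHDR code):
# exposure follows the luminance around the gaze point with a lag that
# mimics the eye's adaptation latency. Window size and time constant
# are illustrative assumptions.
import numpy as np

class GazeAdaptiveToneMapper:
    def __init__(self, window=64, time_constant=0.5):
        self.window = window      # half-size of the gaze region, pixels
        self.tau = time_constant  # adaptation lag, seconds (assumed)
        self.adapt_lum = None     # luminance level we are adapted to

    def map(self, hdr_lum, gaze_xy, dt):
        """hdr_lum: 2-D array of scene luminance; gaze_xy: (x, y) in
        pixels; dt: seconds since the last frame. Returns [0, 1] values."""
        x, y = gaze_xy
        h, w = hdr_lum.shape
        region = hdr_lum[max(0, y - self.window):min(h, y + self.window),
                         max(0, x - self.window):min(w, x + self.window)]
        target = region.mean()
        if self.adapt_lum is None:
            self.adapt_lum = target
        # First-order lag: drift toward the gazed luminance over ~tau
        # seconds, mimicking the eye's adaptation latency.
        alpha = 1.0 - np.exp(-dt / self.tau)
        self.adapt_lum += alpha * (target - self.adapt_lum)
        # Global Reinhard-style compression keyed to the adapted level.
        scaled = hdr_lum / (self.adapt_lum + 1e-6)
        return scaled / (1.0 + scaled)
```

Fed per-frame gaze samples, the displayed exposure would lag the gaze by roughly the time constant, reproducing the dark-room-to-bright-window effect the researchers describe.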

Monday, January 10, 2011

Call for papers: ACIVS 2011

Acivs 2011 is a conference focusing on techniques for building adaptive, intelligent, safe and secure imaging systems. Acivs 2011 consists of four days of lecture sessions, with both regular (25 min) and invited presentations, as well as poster sessions. The conference will take place at Het Pand, Ghent, Belgium, on Aug. 22-25, 2011.

Topics

  • Vision systems, including multi-camera systems
  • Image and Video Processing (linear/non-linear filtering and enhancement, restoration, segmentation, wavelets and multiresolution, Markovian techniques, color processing, modeling, analysis, interpolation and spatial transforms, motion, fractals and multifractals, structure from motion, information geometry)
  • Pattern Analysis (shape analysis, data and image fusion, pattern matching, neural nets, learning, grammatical techniques) and Content-Based Image Retrieval
  • Remote Sensing (techniques for filtering, enhancing, compressing, displaying and analyzing optical, infrared, radar, multi- and hyperspectral airborne and spaceborne images)
  • Still Image and Video Coding and Transmission (still image/video coding, model-based coding, synthetic/natural hybrid coding, quality metrics, image and video protection, image and video databases, image search and sorting, video indexing, multimedia applications)
  • System Architecture and Performance Evaluation (implementation of algorithms, GPU implementation, benchmarking, evaluation criteria, algorithmic evaluation)
Proceedings
The proceedings of Acivs 2011 will be published by Springer Verlag in the Lecture Notes in Computer Science series. LNCS is published, in parallel to the printed books, in full-text electronic form via Springer Verlag's internet platform.

Deadlines
  • Full paper submission: February 11, 2011
  • Notification of acceptance: April 15, 2011
  • Camera-ready papers due: May 15, 2011
  • Registration deadline (authors of accepted papers): May 15, 2011
  • Early registration deadline: June 30, 2011
  • Acivs 2011: Aug. 22-25, 2011

Wednesday, December 22, 2010

Santa's been spotted - Introducing the SMI Glasses

What a year it has been in the commercial eye tracking domain. In June, Tobii released the Tobii Glasses, their entry into the head-mounted market, which created some buzz online. This was followed by a high-speed remote system, the Tobii TX300, introduced in November. Both products competed directly with the offerings from SMI, which countered with the RED500 remote tracker, surpassing the Tobii system by 200 samples per second. Today it's my pleasure to introduce the SMI Glasses, which bring the competition up a couple of notches. Comparable to the Tobii Glasses in their neat, unobtrusive form factor, they provide binocular tracking with a direct view of both eyes.
Rendered image of the upcoming SMI Glasses.
The small scene camera is located in the center of the glasses, which gives minimal parallax. Although the hard specs have yet to be released, it is rumored to have a high-resolution scene camera, long battery life, and an advanced IR AOA marker detection system that enables automatic mapping of gaze data to real-world objects. Furthermore, they can be used not only as a black-box system but may also be integrated with SMI's current head-mounted devices, including live view, an open interface for co-registration, etc. Availability is projected for the first half of 2011.

Thanks for all the hard work, inspiration and feedback throughout 2010, it's been an amazing year. By the looks of it 2011 appears to be a really interesting year for eye tracking. I'd like to wish everyone a Merry Christmas and a Happy New Year.

Tuesday, December 14, 2010

Method for Automatic Mapping of Eye Tracker Data to Hypermedia Content

Came across United States Patent Application 20100295774, filed by Craig Hennessey of Mirametrix. Essentially, the system creates regions of interest based on the HTML code (div tags) to perform an automatic mapping between gaze X/Y coordinates and the locations of page elements. This is done by accessing the Microsoft Document Object Model of an Internet Explorer browser page to establish the "content tracker", a piece of software that generates a list of areas, their sizes, and their on-screen locations, which are then tagged with keywords (e.g. logo, ad, etc.). This software also keeps track of several browser windows, their positions, and their interaction states.
"A system for automatic mapping of eye-gaze data to hypermedia content utilizes high-level content-of-interest tags to identify regions of content-of-interest in hypermedia pages. User's computers are equipped with eye-gaze tracker equipment that is capable of determining the user's point-of-gaze on a displayed hypermedia page. A content tracker identifies the location of the content using the content-of-interest tags and a point-of-gaze to content-of-interest linker directly maps the user's point-of-gaze to the displayed content-of-interest. A visible-browser-identifier determines which browser window is being displayed and identifies which portions of the page are being displayed. Test data from plural users viewing test pages is collected, analyzed and reported."
To conclude, the idea is to have multiple clients equipped with eye trackers that communicate with a server. The central machine coordinates studies and stores the gaze data from each session (in the cloud?). Overall, a strategy that makes perfect sense if your differentiating factor is low cost.
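The core mapping step is easy to illustrate. Below is a minimal sketch in Python of the hit-testing idea, not Mirametrix's implementation: gaze samples are tested against content regions whose screen rectangles are assumed to have already been extracted from the DOM (the Region fields and tag names are illustrative).

```python
# Minimal sketch of the gaze-to-content mapping step (not Mirametrix's
# code): hit-test gaze samples against content regions whose screen
# rectangles were extracted from the page's DOM upstream.
from dataclasses import dataclass

@dataclass
class Region:
    tag: str          # content-of-interest label, e.g. "logo", "ad"
    left: int
    top: int
    width: int
    height: int

    def contains(self, x, y):
        return (self.left <= x < self.left + self.width and
                self.top <= y < self.top + self.height)

def map_gaze_to_content(gaze_samples, regions):
    """Return per-region sample counts for a list of (x, y) gaze points."""
    dwell = {r.tag: 0 for r in regions}
    for x, y in gaze_samples:
        for r in regions:        # first hit wins; regions assumed disjoint
            if r.contains(x, y):
                dwell[r.tag] += 1
                break
    return dwell

# Example: two div-derived regions and a handful of gaze samples
regions = [Region("logo", 0, 0, 200, 80), Region("ad", 800, 0, 160, 600)]
print(map_gaze_to_content([(50, 40), (850, 300), (500, 400)], regions))
```

A real content tracker would also have to refresh the geometry on scroll and resize, which is presumably where the visible-browser-identifier described above comes in.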

Monday, November 15, 2010

SMI RED500

Just days after the Tobii TX300 was launched, SMI counters with the introduction of the world's first 500 Hz remote binocular eye tracker. SMI seriously ramps up the competition in high-speed remote systems, surpassing the Tobii TX by a hefty 200 Hz. The RED500 has an operating distance of 60-80 cm with a 40x40 cm trackbox at 70 cm and a reported accuracy of <0.4 degrees under typical (optimal?) settings. Real-world performance evaluation by an independent third party remains to be seen. Not resting on their laurels, SMI regains the king-of-the-hill position with an impressive achievement that demonstrates how competitive the field has become. See the technical specs for more information.

Exploring the potential of context-sensitive CADe in screening mammography (Tourassi et al, 2010)

Georgia D. Tourassi, Maciej A. Mazurowski, and Brian P. Harrawood at Duke University's Ravin Advanced Imaging Laboratories, in collaboration with Elizabeth A. Krupinski, present a novel method of combining eye-gaze data with computer-assisted detection (CADe) algorithms to improve detection rates for malignant masses in mammography. This contextualized method holds potential for personalized diagnostic support.

Purpose: Conventional computer-assisted detection (CADe) systems in screening mammography provide the same decision support to all users. The aim of this study was to investigate the potential of a context-sensitive CADe system, which provides decision support guided by each user’s focus of attention during visual search and reporting patterns for a specific case.

Methods: An observer study for the detection of malignant masses in screening mammograms was conducted in which six radiologists evaluated 20 mammograms while wearing an eye-tracking device. Eye-position data and diagnostic decisions were collected for each radiologist and case they reviewed. These cases were subsequently analyzed with an in-house knowledge-based CADe system using two different modes: conventional mode with a globally fixed decision threshold and context-sensitive mode with a location-variable decision threshold based on the radiologists’ eye-dwelling data and reporting information.

Results: The CADe system operating in conventional mode had 85.7% per-image malignant mass sensitivity at 3.15 false positives per image (FPsI). The same system operating in context-sensitive mode provided personalized decision support at 85.7%-100% sensitivity and 0.35-0.40 FPsI to all six radiologists. Furthermore, the context-sensitive CADe system could improve the radiologists’ sensitivity and reduce their performance gap more effectively than conventional CADe.



Conclusions: Context-sensitive CADe support shows promise in delineating and reducing the radiologists’ perceptual and cognitive errors in the diagnostic interpretation of screening mammograms more effectively than conventional CADe.
  • G. D. Tourassi, M. A. Mazurowski, B. P. Harrawood and E. A. Krupinski, "Exploring the potential of context-sensitive CADe in screening mammography," Medical Physics 37, 5728-5736 (2010). Online, PDF
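To make the difference between the two modes concrete, here is a conceptual Python sketch of location-variable thresholding; this is my paraphrase of the idea rather than the authors' system, and all cutoffs and the dwell criterion are illustrative.

```python
# Conceptual sketch of context-sensitive CADe thresholding (my
# paraphrase, not the Duke system). Conventional CADe applies one
# global threshold; here the cutoff at each candidate location depends
# on the reader's dwell time and report. All numbers are illustrative.
def decision_threshold(dwell_ms, reported, base=0.5):
    """Return the CADe score cutoff for one candidate location."""
    if reported:
        return 1.1          # already flagged by the reader: never prompt
    if dwell_ms >= 1000:
        return base + 0.2   # inspected but not reported: raise the bar
    return base - 0.2       # never inspected: lower the bar so a
                            # possible search error still gets prompted

def prompts(candidates):
    """candidates: iterable of (cad_score, dwell_ms, reported) tuples.
    Returns the indices of candidates the system would prompt."""
    return [i for i, (score, dwell, rep) in enumerate(candidates)
            if score >= decision_threshold(dwell, rep)]

print(prompts([(0.45, 0, False),      # unexamined -> prompted
               (0.60, 1500, False),   # dwelled on, dismissed -> silent
               (0.90, 2000, True)]))  # reported -> silent; prints [0]
```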

Monday, November 8, 2010

GazeCom and SMI demonstrate automotive guidance system



"In order to determine the effectiveness of gaze guidance, within the project, SMI developed an experimental driving simulator with integrated eye tracking technology.  A driving safety study in a city was set up and testing in that environment has shown that the number of accidents was significantly lower with gaze guidance than without, while most of the drivers didn’t consciously notice the guiding visual cues."
Christian Villwock, Director for Eye and Gaze Tracking Systems at SMI: “We have shown that visual performance can significantly be improved by gaze contingent gaze guidance. This introduces huge potential in applications where expert knowledge has to be transferred or safety is critical, for example for radiological image analysis.” 
"Within the GazeCom project, funded by the EU within the Future and Emerging Technologies (FET) program, the impact of gaze guidance on what is perceived and communicated effectively has been determined in a broad range of tasks of varying complexity. This included basic research in the understanding of visual perception and brain function up to the level where the guidance of gaze becomes feasible." (source)
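The mechanics of gaze-contingent guidance can be sketched in a few lines. The loop below is assumed logic rather than the GazeCom/SMI implementation: a subtle cue is flashed at a hazard only while the driver's gaze is far from it, and removed as soon as the gaze approaches, which is how such cues can stay below the threshold of conscious notice. Distances and durations are made up.

```python
# Assumed logic for a gaze-contingent guidance loop (not the actual
# GazeCom/SMI implementation). A brief, subtle cue is rendered at a
# hazard only while gaze is far away, and removed the moment gaze
# moves toward it. The distance and duration constants are invented.
import math

CUE_DISTANCE_DEG = 10.0  # cue only when gaze is at least this far away
CUE_DURATION_S = 0.12    # keep the flash brief, below conscious notice

def angular_distance(a, b):
    """Rough angular separation of two (x, y) positions, both given
    in degrees of visual angle."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def update_cue(gaze, hazard, cue_age):
    """One frame of the cueing loop. cue_age is the time in seconds the
    cue has been visible, or None if it is off. Returns the new cue_age."""
    if angular_distance(gaze, hazard) <= CUE_DISTANCE_DEG:
        return None                    # gaze arrived: remove the cue
    if cue_age is None:
        return 0.0                     # gaze is far away: start the cue
    if cue_age > CUE_DURATION_S:
        return None                    # flash expired: switch it off
    return cue_age                     # cue stays on (caller adds dt)

# Example: gaze at the screen center, hazard off to the right
print(update_cue((0.0, 0.0), (15.0, 2.0), None))   # 0.0 -> cue comes on
print(update_cue((14.0, 2.0), (15.0, 2.0), 0.05))  # None -> gaze is close
```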

Wednesday, November 3, 2010

SMI Releases iViewX 2.5

Today SensoMotoric Instruments released a new version of their iView X software, which offers a number of fixes and improvements. Download here.

Improvements
- NEW device: MEG 250
- RED5: improved tracking stability
- RED: improved pupil diameter calculation
- RED: improved distance measurement
- RED: improved 2 and 5 point calibration model
- file transfer server is installed with iView X now
- added configurable parallel port address

Fixes
- RED5 camera drop outs in 60Hz mode on Clevo Laptop
- initializes LPT_IO and PIODIO on startup correctly
- RED standalone mode can be used with all calibration methods via remote commands
- lateral offset in RED5 head position visualization
- HED: Use TimeStamp in [ms] as Scene Video Overlay
- improved rejection parameters for NNL Devices
- crash when using ET_CAL in standalone mode
- strange behaviour with ET_REM and eT_REM. Look up in command list is now case-insensitive.
- RED5: Default speed is 60Hz for RED and 250Hz for RED250
- and many more small fixes and improvements