Showing posts with label attentive interface.
Friday, June 8, 2012
Eyecatcher - A 3D prototype combining Eyetracking with a Gestural Camera
Eyecatcher is a prototype combining eye tracking with a gestural camera on a dual-screen setup. Created for the oil rig process industry, this project was a collaborative exploration between ABB Corporate Research and Interactive Institute Umeå (blog).
Thursday, January 13, 2011
Eye HDR: gaze-adaptive system for displaying high-dynamic-range images (Rahardja et al)
"How can high dynamic range (HDR) images like those captured by human vision be most effectively reproduced? Susanto Rahardja, head of the Signal Processing Department at the A*STAR Institute for Infocomm Research (I2R), hit upon the idea of simulating the human brain’s mechanism for HDR vision. “We thought about developing a dynamic display system that could naturally and interactively adapt as the user’s eyes move around a scene, just as the human visual system changes as our eyes move around a real scene,” he says.
Two years ago, Rahardja initiated a program on HDR display bringing together researchers with a variety of backgrounds. “We held a lot of brainstorming sessions to discuss how the human visual system perceives various scenes with different levels of brightness,” says Farzam Farbiz, a senior research fellow of the Signal Processing Department. They also read many books on cerebral physiology to understand how receptors in the retina respond to light and convert the data into electric signals, which are then transmitted to retinal ganglion cells and other neural cells through complex pathways in the visual cortex.
The EyeHDR system employs a commercial eye-tracker device that follows the viewer’s eyes and records the eyes’ reflection patterns. Using this data, the system calculates and determines the exact point of the viewer’s gaze on the screen using special ‘neural network’ algorithms the team has developed.
“On top of that, we also had to simulate the transitional latency of human eyes,” says Corey Manders, a senior research fellow of the Signal Processing Department. “When you move your gaze from a dark part of the room to a bright window, our eyes take a few moments to adjust before we can see clearly what’s outside,” adds Zhiyong Huang, head of the Computer Graphics and Interface Department. “This is our real natural experience, and our work is to reproduce this on-screen.”
The EyeHDR system calculates the average luminance of the region where the observer is gazing, and adjusts the intensity and contrast to optimal levels with a certain delay, giving the viewer the impression of a real scene. The system also automatically tone-maps the HDR images to low dynamic range (LDR) images in regions outside of the viewer’s gaze. Ultimately, the EyeHDR system generates multiple images in response to the viewer’s gaze, which contrasts with previous attempts to achieve HDR through the generation of a single, perfect HDR display image.
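The description boils down to a simple adaptation loop: find the gaze point, average the luminance around it, and ease the display exposure toward that value over time to mimic the eye's delayed adaptation. Below is a minimal sketch of that idea, not the authors' actual implementation; the window size, adaptation rate and the Reinhard-style tone curve are assumptions made for illustration.

```python
import numpy as np

def gaze_adaptive_tonemap(hdr, gaze_xy, exposure_state, window=64, adapt_rate=0.1):
    """Tone-map an HDR frame around the current gaze point.

    hdr            -- float32 array (H, W, 3) of linear radiance values
    gaze_xy        -- (x, y) gaze position in pixel coordinates
    exposure_state -- adapted log-average luminance carried between frames
    window         -- half-size of the square gaze region in pixels (assumed)
    adapt_rate     -- fraction of the gap closed per frame, simulating the
                      eye's transitional latency (assumed)
    """
    h, w, _ = hdr.shape
    x, y = int(gaze_xy[0]), int(gaze_xy[1])

    # Average luminance of the region the viewer is looking at.
    region = hdr[max(0, y - window):min(h, y + window),
                 max(0, x - window):min(w, x + window)]
    luminance = (0.2126 * region[..., 0] +
                 0.7152 * region[..., 1] +
                 0.0722 * region[..., 2])
    target = np.exp(np.mean(np.log(luminance + 1e-6)))  # log-average luminance

    # Ease the adaptation level toward the target with a delay.
    exposure_state += adapt_rate * (target - exposure_state)

    # Simple global tone curve keyed to the adapted level.
    scaled = hdr * (0.18 / exposure_state)
    ldr = scaled / (1.0 + scaled)
    return np.clip(ldr, 0.0, 1.0), exposure_state
```

Calling this once per gaze sample and redrawing the screen would give the delayed brightening and darkening effect the quote describes.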
The researchers say development of the fundamental technologies for the system is close to complete, and the EyeHDR system’s ability to display HDR images on large LDR screens has been confirmed. But before the system can become commercially available, the eye-tracking devices will need to be made more accurate, robust and easier to use. As the first step toward commercialization, the team demonstrated the EyeHDR system at SIGGRAPH Asia 2009, an annual international conference and exhibition on digital content, held in Yokohama, Japan in December last year.
Although the team’s work is currently focused on static images, they have plans for video. “We would like to apply our technologies for computer gaming and other moving images in the future. We are also looking to reduce the realism gap between real and virtual scenes in emergency response simulation, architecture and science,” Farbiz says". (source)
- Susanto Rahardja, Farzam Farbiz, Corey Manders, Huang Zhiyong, Jamie Ng Suat Ling, Ishtiaq Rasool Khan, Ong Ee Ping, and Song Peng. 2009. Eye HDR: gaze-adaptive system for displaying high-dynamic-range images. In ACM SIGGRAPH ASIA 2009 Art Gallery & Emerging Technologies: Adaptation (SIGGRAPH ASIA '09). ACM, New York, NY, USA, 68-68. DOI=10.1145/1665137.1665187. (pdf, it's a one page poster)
Tuesday, August 10, 2010
Eye control for PTZ cameras in video surveillance
Bartosz Kunka, a PhD student at the Gdańsk University of Technology, has employed a remote gaze-tracking system called Cyber-Eye to control PTZ cameras in video surveillance and video-conference systems. The video below was prepared for the system's presentation at the Research Challenge at SIGGRAPH 2010 in Los Angeles.
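Details of the Cyber-Eye implementation are not given in the clip, but the basic idea of steering a PTZ camera with gaze can be sketched as follows: gaze offsets from the screen centre are turned into pan and tilt commands, with a dead zone so the camera stays still while the operator watches the middle of the image. The gain, dead zone and command interface below are assumptions, not the Gdańsk system.

```python
def gaze_to_ptz(gaze_x, gaze_y, screen_w, screen_h,
                dead_zone=0.1, gain_deg=10.0):
    """Map a gaze position on the monitor to pan/tilt offsets in degrees.

    Gaze near the screen centre leaves the camera still (dead zone);
    gaze toward the edges pans/tilts proportionally. All constants assumed.
    """
    # Normalise to [-1, 1] with (0, 0) at the screen centre.
    nx = (gaze_x / screen_w) * 2.0 - 1.0
    ny = (gaze_y / screen_h) * 2.0 - 1.0

    pan = gain_deg * nx if abs(nx) > dead_zone else 0.0
    tilt = -gain_deg * ny if abs(ny) > dead_zone else 0.0  # screen y grows downward
    return pan, tilt
```

A control loop would poll the eye tracker, call this mapping, and forward the offsets to the camera over whatever protocol it speaks (ONVIF or a vendor SDK, for example).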
Monday, March 29, 2010
Text 2.0 gaze assisted reading
From the German Research Center for Artificial Intelligence comes a new demonstration of a gaze-based reading system, Text 2.0, which utilizes eye tracking to make the reading experience more dynamic and interactive. For example, the system can display images relevant to what you're reading about, or filter out less relevant information if you're skimming through the content. The research is funded through the Stiftung Rheinland-Pfalz für Innovation. On the group's website you can also find an interesting project called PEEP, which allows developers to connect eye trackers to Processing and enables aesthetically stunning visualizations; PEEP forms the core of the Text 2.0 platform. Check out the videos.
More information:
Zdf.de: Wenn das Auge die Seite umblättert? (When the eye turns the page?)
Wired: Eye-Tracking Tablets and the Promise of Text 2.0
More demos at the group's website
Monday, September 14, 2009
GaZIR: Gaze-based Zooming Interface for Image Retrieval (Kozma L., Klami A., Kaski S., 2009)
From the Helsinki Institute for Information Technology, Finland, comes a research prototype called GaZIR for gaze-based image retrieval, built by Laszlo Kozma, Arto Klami and Samuel Kaski. The GaZIR prototype uses a light-weight logistic regression model to predict relevance from eye movement data (such as viewing time, revisit counts and fixation length), all computed on-line in real time. The system is built around the PicSOM (paper) retrieval engine, which is based on tree-structured self-organizing maps (TS-SOMs). When provided with a set of reference images, the PicSOM engine goes online to download a set of similar images (based on color, texture or shape).
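The relevance predictor is essentially a logistic regression over per-image gaze features. A minimal sketch of that component is below; the feature names and weights are illustrative assumptions, not the values learned in the paper.

```python
import numpy as np

# Per-image gaze features of the kind listed above (names are assumed):
FEATURES = ["view_time_s", "revisit_count", "mean_fixation_s"]

def predict_relevance(gaze_features, weights, bias):
    """Logistic-regression relevance score in [0, 1] for one image.

    gaze_features -- dict mapping feature name to its measured value
    weights, bias -- parameters learned offline in a separate training phase
    """
    x = np.array([gaze_features[name] for name in FEATURES])
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

# Example: images viewed longer and revisited more score higher
# (weights are illustrative only).
w = np.array([0.8, 0.5, 1.2])
score = predict_relevance(
    {"view_time_s": 1.4, "revisit_count": 2, "mean_fixation_s": 0.35},
    w, bias=-1.0)
```

In GaZIR these scores would then feed back into the retrieval engine in place of explicit point-and-click relevance judgements.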
Abstract
"We introduce GaZIR, a gaze-based interface for browsing and searching for images. The system computes on-line predictions of relevance of images based on implicit feedback, and when the user zooms in, the images predicted to be the most relevant are brought out. The key novelty is that the relevance feedback is inferred from implicit cues obtained in real-time from the gaze pattern, using an estimator learned during a separate training phase. The natural zooming interface can be connected to any content-based information retrieval engine operating on user feedback. We show with experiments on one engine that there is sufficient amount of information in the gaze patterns to make the estimated relevance feedback a viable choice to complement or even replace explicit feedback by pointing-and-clicking."
Fig1. "Screenshot of the GaZIR interface. Relevance feedback gathered from outer rings influences the images retrieved for the inner rings, and the user can zoom in to reveal more rings."
Fig2. "Precision-recall and ROC curves for user-independent relevance prediction model. The predictions (solid line) are clearly above the baseline of random ranking (dash-dotted line), showing that relevance of images can be predicted from eye movements. The retrieval accuracy is also above the baseline provided by a naive model making a binary relevance judgement based on whether the image was viewed or not (dashed line), demonstrating the gain from more advanced gaze modeling."
Fig 3. "Retrieval performance in real user experiments. The bars indicate the proportion of relevant images shown during the search in six different search tasks for three different feedback methods. Explicit denotes the standard point-and-click feedback, predicted means implicit feedback inferred from gaze, and random is the baseline of providing random feedback. In all cases both actual feedback types outperform the baseline, but the relative performance of explicit and implicit feedback depends on the search task."
- László Kozma, Arto Klami, and Samuel Kaski: GaZIR: Gaze-based Zooming Interface for Image Retrieval. To appear in Proceedings of the 11th Conference on Multimodal Interfaces and The Sixth Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI), Boston, MA, USA, November 2-6, 2009. (abstract, pdf)
Monday, November 17, 2008
Wearable Augmented Reality System using Gaze Interaction (Park et al., 2008)
Hyung Min Park, Seok Han Lee and Jong Soo Choi from the Graduate School of Advanced Imaging Science, Multimedia & Film at Chung-Ang University, Korea, presented a paper on their Wearable Augmented Reality System (WARS) at the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality. They use a half-blink mode (called "aging") for selection, which is detected by their custom eye tracking algorithms. See the end of the video.
Abstract
Undisturbed interaction is essential to provide immersive AR environments. There have been a lot of approaches to interact with VEs (virtual environments) so far, especially in hand metaphor. When the user's hands are being used for hand-based work such as maintenance and repair, necessity of alternative interaction technique has arisen. In recent research, hands-free gaze information is adopted to AR to perform original actions in concurrence with interaction [3, 4]. There has been little progress on that research, still at a pilot study in a laboratory setting. In this paper, we introduce such a simple WARS (wearable augmented reality system) equipped with an HMD, scene camera, eye tracker. We propose 'Aging' technique improving traditional dwell-time selection, demonstrate AR gallery, a dynamic exhibition space with wearable system.
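The abstract does not spell out how 'Aging' refines dwell-time selection, but plain dwell selection with a decaying activation can be sketched like this; the charge and decay rates, the threshold, and the idea of pausing rather than resetting on a glance away are assumptions made for illustration.

```python
def update_dwell(activation, looking_at_target, dt,
                 charge_rate=1.0, decay_rate=2.0, threshold=1.0):
    """Accumulate 'age' while the gaze rests on a target; decay it otherwise.

    activation        -- current activation level of the target
    looking_at_target -- True if the current gaze sample hits the target
    dt                -- time since the previous gaze sample, in seconds
    Returns (new_activation, selected).
    """
    if looking_at_target:
        activation += charge_rate * dt
    else:
        activation = max(0.0, activation - decay_rate * dt)
    return activation, activation >= threshold
```

Compared with a hard dwell timer, the decay term lets brief glances away slow the selection down without resetting it entirely, which is roughly the kind of improvement an aging scheme is after.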
Download paper as PDF.
Thursday, August 28, 2008
Mixed reality systems for technical maintenance and gaze-controlled interaction (Gustafsson et al)
To follow up on the wearable display with an integrated eye tracker, one possible application is in the domain of mixed reality. This allows interfaces to be projected on top of a video stream (i.e. the "world view"), thus blending the physical and virtual worlds. The paper below investigates how this could be used to assist technical maintenance of advanced systems such as fighter jets. It's an early prototype, but the field is very promising, especially when an eye tracker is involved.
Abstract:
"The purpose of this project is to build up knowledge about how future Mixed Reality (MR) systems should be designed concerning technical solutions, aspects of Human-Machine-Interaction (HMI) and logistics. The report describes the work performed in phase2. Regarding hardware a hand-held MR-unit, a wearable MR-system and a gaze-controlled MR-unit have been developed. The work regarding software has continued with the same software architecture and MR-tool as in the former phase 1. A number of improvements, extensions and minor changes have been conducted as well as a general update. The work also includes experiments with two test case applications, "Turn-Round af Gripen (JAS) and "Starting Up Diathermy Apparatus" Comprehensive literature searches and surveys of knowledge of HMI aspects have been conducted, especially regarding gaze-controlled interaction. The report also includes a brief overview of ohter projects withing the area of Mixed Reality."
- Gustafsson, T., Carleberg, P., Svensson, P., Nilsson, S., Le Duc, M., Sivertun, Å., Mixed Reality Systems for Technical Maintenance and Gaze-Controlled Interaction. Progress Report Phase 2 to FMV., 2005. Download paper as PDF
Saturday, February 23, 2008
Talk: Sensing user attention (R. Vertegaal)
Stumbled upon a talk by Roel Vertegaal at a Google TechTalk describing various projects at the Queen's University Human Media Lab, many of which use eye tracking technology. In general, the work applies knowledge from cognitive science on attention and communication to practical Human-Computer Interaction applications. Overall a nice 40-minute talk. Enjoy.
Abstract
Over the past few years, our work has centered around the development of computing technologies that are sensitive to what is perhaps the most important contextual cue for interacting with humans that exists: the fabric of their attention. Our research group has studied how humans communicate attention to navigate complex scenarios, such as group decision making. In the process, we developed many different prototypes of user interfaces that sense the users' attention, so as to be respectful players that share this most important resource with others. One of the most immediate methods for sensing human attention is to detect what object the eyes look at. The eye contact sensors our company has developed for this purpose work at long range, with great head movement tolerance, and many eyes. They do not require any personal calibration or coordinate system to function. Today I will announce Xuuk's first product, EyeBox2, a viewing statistics sensor that works at up to 10 meters. EyeBox2 allows the deployment of algorithms similar to Google's PageRank in the real world, where anything can now be ranked according to the attention it receives. This allows us, for example, to track mass consumer interest in products or ambient product advertisements. I will also illustrate how EyeBox2 ties into our laboratory's research on interactive technologies, showing prototypes of attention sensitive telephones, attentive video blogging glasses, speech recognition appliances as well as the world's first attentive hearing aid.
Roel Vertegaal is the director of the Human Media Lab at the Queen's University in Kingston, Canada. Roel is the founder of Xuuk which offers the EyeBox2, a remote eye tracker that works on up to 10 meters distance (currently $1500) and associated analysis software.
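The "PageRank for the real world" idea in the abstract reduces, in its simplest form, to ranking objects by the eye-contact events (or viewing time) they accumulate. A toy sketch under that assumption follows; EyeBox2's actual output format and API are not documented here, so the input is a hypothetical event log.

```python
from collections import Counter

def rank_by_attention(eye_contact_events):
    """Rank objects by the number of eye-contact events attributed to them.

    eye_contact_events -- iterable of object identifiers, one per detected
                          eye-contact event (assumed input format)
    Returns a list of (object_id, count) pairs, most attended first.
    """
    return Counter(eye_contact_events).most_common()

# Example: a shelf-mounted sensor logging which product shoppers looked at.
ranking = rank_by_attention(["shampoo", "soap", "shampoo", "toothpaste", "shampoo"])
# -> [('shampoo', 3), ('soap', 1), ('toothpaste', 1)]
```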