Sunday, October 26, 2008
Low cost open source eye tracking from Argentina
The project is running for another three weeks and the outcome will be very interesting. Check out the development blog at http://www.eyegazetracking.com/
Thursday, September 18, 2008
The Inspection of Very Large Images by Eye-gaze Control
"The researchers presented novel methods for navigating and inspecting extremely large images solely or primarily using eye gaze control. The need to inspect large images occurs in, for example, mapping, medicine, astronomy and surveillance, and this project considered the inspection of very large aerial images, held in Google Earth. Comparative search and navigation tasks suggest that, while gaze methods are effective for image navigation, they lag behind more conventional methods, so interaction designers might consider combining these techniques for greatest effect." (BCS Interaction)
Abstract
The increasing availability and accuracy of eye gaze detection equipment has encouraged its use for both investigation and control. In this paper we present novel methods for navigating and inspecting extremely large images solely or primarily using eye gaze control. We investigate the relative advantages and comparative properties of four related methods: Stare-to-Zoom (STZ), in which control of the image position and resolution level is determined solely by the user's gaze position on the screen; Head-to-Zoom (HTZ) and Dual-to-Zoom (DTZ), in which gaze control is augmented by head or mouse actions; and Mouse-to-Zoom (MTZ), using conventional mouse input as an experimental control.
The need to inspect large images occurs in many disciplines, such as mapping, medicine, astronomy and surveillance. Here we consider the inspection of very large aerial images, of which Google Earth is both an example and the one employed in our study. We perform comparative search and navigation tasks with each of the methods described, and record user opinions using the Swedish User-Viewer Presence Questionnaire. We conclude that, while gaze methods are effective for image navigation, they, as yet, lag behind more conventional methods and interaction designers may well consider combining these techniques for greatest effect.
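The paper names the four methods but the post quotes no implementation details, so the following is only a plausible reading of Stare-to-Zoom: a steady gaze in the central region zooms the view in toward the gaze point, gaze near the screen edges pans, and looking elsewhere leaves the view alone. A minimal sketch in Python, where get_gaze(), the viewport model, and all rates are my own assumptions rather than anything from the paper:

```python
# Minimal Stare-to-Zoom (STZ) sketch. get_gaze(), the rates and the
# viewport model are illustrative assumptions, not the authors' code.
import math
import time

ZOOM_RATE = 1.5      # zoom-in factor per second of steady central gaze
PAN_MARGIN = 0.15    # gaze this close to an edge pans instead of zooming
DWELL_RADIUS = 0.05  # gaze jitter (normalized) still counted as a "stare"

class Viewport:
    """Viewing window into a very large image, in normalized coordinates."""
    def __init__(self):
        self.cx, self.cy, self.scale = 0.5, 0.5, 1.0

    def zoom_toward(self, gx, gy, factor):
        # Zoom while drifting the view centre toward the gaze point, so
        # the stared-at detail migrates to the middle of the screen.
        self.cx += (gx - self.cx) * (1.0 - 1.0 / factor)
        self.cy += (gy - self.cy) * (1.0 - 1.0 / factor)
        self.scale *= factor

def stare_to_zoom(viewport, get_gaze, dt=1 / 60):
    last = None
    while True:
        gx, gy = get_gaze()  # normalized screen position in [0, 1]
        steady = last is not None and math.hypot(gx - last[0], gy - last[1]) < DWELL_RADIUS
        if steady:
            if PAN_MARGIN < gx < 1 - PAN_MARGIN and PAN_MARGIN < gy < 1 - PAN_MARGIN:
                viewport.zoom_toward(gx, gy, ZOOM_RATE ** dt)    # stare: zoom in
            else:
                viewport.cx += (gx - 0.5) * dt / viewport.scale  # edge gaze: pan
                viewport.cy += (gy - 0.5) * dt / viewport.scale
        last = (gx, gy)
        time.sleep(dt)
```

The head- and mouse-augmented variants (HTZ, DTZ) would presumably replace the dwell test with an explicit head or mouse trigger, which sidesteps the "Midas touch" of zooming on every steady look.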
Monday, September 15, 2008
Apple develops gaze assisted interaction?
From the patent document:
"There are many possible applications that would benefit from the temporal fusion of gaze vectors with multi-touch movement data. For the purpose of example, one simple application will be discussed here: Consider a typical computer screen, which has several windows displayed. Assume that the user wishes to bring forward the window in the lower left corner, which is currently underneath two other windows. Without gaze vector fusion there are two means to do this, and both involve movement of the hand to another position. The first means is to move the mouse pointer over the window of interest and click the mouse button. The second means is to use a hot-key combination to cycle through the screen windows until the one of interest is brought forward. Voice input could also be used but it would be less efficient than the other means. With gaze vector fusion, the task is greatly simplified. For example, the user directs his gaze to the window of interest and then taps a specific chord on the multi-touch surface. The operation requires no translation of the hands and is very fast to perform."
"For another example, assume the user wishes to resize and reposition an iTunes window positioned in the upper left of a display screen. This can be accomplished using a multi-touch system by moving the mouse pointer into the iTunes window and executing a resize and reposition gesture. While this means is already an improvement over using just a mouse its efficiency can be further improved by the temporal fusion of gaze vector data. "
TeleGaze (Hemin, 2008)
Associated publications:
- Hemin Omer Latif, Nasser Sherkat and Ahmad Lotfi, "TeleGaze: Teleoperation through Eye Gaze", 7th IEEE International Conference on Cybernetic Intelligent Systems 2008, London, UK. Conference website: www.cybernetic.org.uk/cis2008
- Hemin Omer Latif, Nasser Sherkat and Ahmad Lotfi, "Remote Control of Mobile Robots through Human Eye Gaze: The Design and Evaluation of an Interface", SPIE Europe Security and Defence 2008, Cardiff, UK. Conference website: http://spie.org/security-defence-europe.xml
COGAIN 2008 Proceedings now online
Contents
Overcoming Technical Challenges in Mobile and Other Systems
- Off-the-Shelf Mobile Gaze Interaction
J. San Agustin and J. P. Hansen, IT University of Copenhagen, Denmark
- Fast and Easy Calibration for a Head-Mounted Eye Tracker
C. Cudel, S. Bernet, and M. Basset, University of Haute Alsace, France
- Magic Environment
L. Figueiredo, T. Nunes, F. Caetano, and A. Gomes, ESTG/IPG, Portugal
- AI Support for a Gaze-Controlled Wheelchair
P. Novák, T. Krajník, L. Přeučil, M. Fejtová, and O. Štěpánková, Czech Technical University, Czech Republic
- A Comparison of Pupil Centre Estimation Algorithms
D. Droege, C. Schmidt, and D. Paulus, University of Koblenz-Landau, Germany
- User Performance of Gaze-Based Interaction with On-line Virtual Communities
H. Istance, De Montfort University, UK, A. Hyrskykari, University of Tampere, Finland, S. Vickers, De Montfort University, UK, and N. Ali, University of Tampere, Finland
- Multimodal Gaze Interaction in 3D Virtual Environments
E. Castellina and F. Corno, Politecnico di Torino, Italy
- How Can Tiny Buttons Be Hit Using Gaze Only?
H. Skovsgaard and J. P. Hansen, IT University of Copenhagen, Denmark, and J. Mateo, Wright State University, Ohio, US
- Gesturing with Gaze
H. Heikkilä, University of Tampere, Finland
- NeoVisus: Gaze Driven Interface Components
M. Tall, Sweden
- Evaluations of Interactive Guideboard with Gaze-Communicative Stuffed-Toy Robot
T. Yonezawa, H. Yamazoe, A. Utsumi, and S. Abe, ATR Intelligent Robotics and Communications Laboratories, Japan
- Gaze-Contingent Passwords at the ATM
P. Dunphy, A. Fitch, and P. Olivier, Newcastle University, UK
- Scrollable Keyboards for Eye Typing
O. Špakov and P. Majaranta, University of Tampere, Finland
- The Use of Eye-Gaze Data in the Evaluation of Assistive Technology Software for Older People.
S. Judge, Barnsley District Hospital Foundation, UK and S. Blackburn, Sheffield University, UK
- A Case Study Describing Development of an Eye Gaze Setup for a Patient with 'Locked-in Syndrome' to Facilitate Communication, Environmental Control and Computer Access.
Z. Robertson and M. Friday, Barnsley General Hospital, UK
Friday, September 12, 2008
COGAIN 2008 Video
Thursday, August 28, 2008
Mixed reality systems for technical maintenance and gaze-controlled interaction (Gustafsson et al)
Reference:
- Gustafsson, T., Carleberg, P., Svensson, P., Nilsson, S., Le Duc, M., & Sivertun, Å. (2005). Mixed Reality Systems for Technical Maintenance and Gaze-Controlled Interaction. Progress Report Phase 2 to FMV. Download paper as PDF
Sunday, August 24, 2008
Nokia Research: Near Eye Display with integrated eye tracker
- T. Järvenpää, V. Aaltonen (2008) Compact near-to-eye display with integrated gaze tracker. (SPIE Proceedings paper)
"Near-to-Eye Display (NED) offers a big screen experience to the user anywhere, anytime. It provides a way to perceive a larger image than the physical device itself is. Commercially available NEDs tend to be quite bulky and uncomfortable to wear. However, by using very thin plastic light guides with diffractive structures on the surfaces, many of the known deficiencies can be notably reduced. These Exit Pupil Expander (EPE) light guides enable a thin, light, user friendly and high performing see-through NED, which we have demonstrated. To be able to interact with the displayed UI efficiently, we have also integrated a video-based gaze tracker into the NED. The narrow light beam of an infrared light source is divided and expanded inside the same EPEs to produce wide collimated beams out from the EPE towards the eyes. Miniature video camera images the cornea and eye gaze direction is accurately calculated by locating the pupil and the glints of the infrared beams. After a simple and robust per-user calibration, the data from the highly integrated gaze tracker reflects the user focus point in the displayed image which can be used as an input device for the NED system. Realizable applications go from eye typing to playing games, and far beyond."
GaCIT in Tampere, day 5.
The ability to manipulate independent variables and to enforce consistency and control are important concerns. For example, running a web-site test against the live site may produce faulty data, since the content of the site may change between visits. This is referred to as stimulus sensitivity: keeping the stimuli fixed increases between-subjects power, since all subjects are exposed to exactly the same stimuli. Another issue is task sensitivity. The task must reflect what the results are supposed to illustrate (e.g. simply reading a text contains no element of manipulation). People are in general very task oriented; instructed to read, they will ignore certain elements (e.g. banners).
A couple of real-world examples were introduced, including the Fluent UI (Office 2007), Phlat, and Search Engine Results Pages (SERPs).
The Fluent UI is the new interface used in Office 2007. It represents a big change from the traditional Office interface: it is task- and context-dependent, compared to the rather static traditional setup of menu bars and icons cluttering the screen.
At Microsoft it is common to work with personas in multiple categories. These are abstract representations of user groups that help to illustrate the lives and needs of "typical" users. For example, Nicolas is a tech-savvy IT professional, while Jennifer is a young, hip girl who spends a lot of time on YouTube or hangs around town with her shiny iPod (err.. Zune, that is).
More information on the use of personas as a design method:
- J. Grudin, J. Pruitt (2002) Personas, Participatory Design and Product Development: An Infrastructure for Engagement (Microsoft Research) Download as Word doc.
- J. Grudin (2006) Why Personas Work: The Psychological Evidence (Microsoft Research) Download as Word doc.
- Cutrell, E., Robbins, D.C., Dumais, S.T. & Sarin, R. (2006). Fast, flexible filtering with Phlat - Personal search and organization made easy. In Proceedings of CHI'06, Human Factors in Computing Systems, (Montréal, April 2006), ACM press, 261-270. Try Phlat!
Further reading:
- Cutrell, E. & Guan, Z. (2007). What are you looking for? An eye-tracking study of information usage in Web Search. In Proceedings of CHI'07, Human Factors in Computing Systems, (San José), ACM press, 407-416.
- Guan, Z. & Cutrell, E. (2007). An eye-tracking study of the effect of target rank on Web search. In Proceedings of CHI'07, Human Factors in Computing Systems, (San José), ACM press, 417-420.
GaCIT 2008: B. Velichkovsky associated literature
- Pannasch, S., Dornhoefer, S.M., Unema, P.J.A. & Velichkovsky, B.M. (2001). The omnipresent prolongation of visual fixations: saccades are inhibited by changes in situation and in subject's activity. Vision Research. 41(25-26), 3345-51. Download Full Article [PDF]
- Velichkovsky, B.M., Dornhoefer, S.M. , Kopf, M., Helmert, J. & Joos, M. (2002). Change detection and occlusion modes in static and dynamic road-traffic scenarios. Transportation Research, Part F. 5(2), 99-109. Download Full Article [PDF]
- Velichkovsky, B.M., Rothert, A., Kopf, M., Dornhoefer, S.M. & Joos, M. (2002). Towards an express diagnostics for level of processing and hazard perception. Transportation Research, Part F. 5(2), 145-156. Download Full Article [PDF]
- Unema, P., Pannasch, S., Joos, M. & Velichkovsky, B.M. (2005). Time-course of information processing during scene perception: The relationship between saccade amplitude and fixation duration. Visual Cognition, 12(3), 473-494. Download Full Article [PDF]
- Velichkovsky, B.M., Joos, M., Helmert, J.R., & Pannasch, S. (2005). Two visual systems and their eye movements: evidence from static and dynamic scene perception. CogSci 2005: Proceedings of the XXVII Conference of the Cognitive Science Society. July 21-23 Stresa, Italy, pp. 2283-2288. Download Full Article [PDF]
- Velichkovsky, B.M. (2005). Modularity of cognitive organization: Why it is so appealing and why it is wrong. In W. Callebaut & D. Rasskin-Gutman (Eds.), Modularity: Understanding the development and evolution of natural complex systems. Cambridge, MA: MIT Press.
- Graupner, S. T., Velichkovsky, B. M., Pannasch, S., & Marx, J. (2007). Surprise, surprise: Two distinct components in the visually evoked distractor effect. Psychophysiology, 44(2), 251-261. Download Full Article [PDF]
- Velichkovsky, B.M. (2007) Towards an Evolutionary Framework for Human Cognitive Neuroscience. Theoretical Biology, 2(1), 3-6. Download Full Article [PDF]
- Velichkovsky, B.M. (2002). Heterarchy of cognition: The depths and the highs of a framework for memory research. Memory, 10(5/6), 405-419 (Special Issue on Levels-of-Processing Approach).