Monday, November 17, 2008
Wearable Augmented Reality System using Gaze Interaction (Park et al., 2008)
Tuesday, November 11, 2008
Gaze vs. Mouse in Games: The Effects on User Experience (Gowases, T., Bednarik, R., and Tukiainen, M.)
"We did a simple questionnaire-based analysis. The results of the analysis show some promises for implementing gaze-augmented problem-solving interfaces. Users of gaze-augmented interaction felt more immersed than the users of other two modes - dwell-time based and computer mouse. Immersion, engagement, and user-experience in general are important aspects in educational interfaces; learners engage in completing the tasks and, for example, when facing a difficult task they do not give up that easily. We also did analysis of the strategies, and we will report on those soon. We could not attend the conference, but didn’t want to disappoint eventual audience. We thus decided to send a video instead of us. " (from Romans blog)
Some of this research has also been presented within the COGAIN association, see:
- Gowases, Tersia (2007). Gaze vs. Mouse: An evaluation of user experience and planning in problem solving games. Master's thesis, May 2, 2007. Department of Computer Science, University of Joensuu, Finland. Download as PDF
Monday, November 3, 2008
The Conductor Interaction Method (Rachovides et al., 2007)
"This article proposes an alternative interaction method, the conductor interaction method (CIM), which aims to provide a more natural and easier-to-learn interaction technique. This novel interaction method extends existing HCI methods by drawing upon techniques found in human-human interaction. It is argued that the use of a two-phased multimodal interaction mechanism, using gaze for selection and gesture for manipulation, incorporated within a metaphor-based environment, can provide a viable alternative for interacting with a computer (especially for novice users). Both the model and an implementation of the CIM within a system are presented in this article. This system formed the basis of a number of user studies that have been performed to assess the effectiveness of the CIM, the findings of which are discussed in this work.
More specifically the CIM aims to provide the following.
—A More Natural Interface. The CIM will have an interface that utilizes gaze and gestures, but is nevertheless capable of supporting sophisticated activities. The CIM provides an interaction technique that is as natural as possible and is close to the human-human interaction methods with which users are already familiar. The combination of gaze and gestures allows the user to perform not only simple interactions with a computer, but also more complex interactions such as the selecting, editing, and placing of media objects.
—A Metaphor Supported Interface. In order to help the user understand and exploit the gaze and gesture interface, two metaphors have been developed. An orchestra metaphor is used to provide the environment in which the user interacts. A conductor metaphor is used for interacting within this environment. These two metaphors are discussed next.
—A Two-Phased Interaction Method. The CIM uses an interaction process where each modality is specific and has a particular function. The interaction between user and interface can be seen as a dialog composed of two phases. In the first phase, the user selects the on-screen object by gazing at it. In the second phase, the user manipulates the selected object with the gesture interface. These distinct functions of gaze and gesture aim to increase system usability, as they are based on human-human interaction techniques, and also help to overcome issues such as the Midas Touch problem that is often experienced by look-and-dwell systems. As the dialog combines two modalities in sequence, the gaze interface can be disabled after the first phase. This minimizes the possibility of accidentally selecting objects through the gaze interface. The Midas Touch problem can also be further addressed by ensuring that there is ample dead space between media objects. [A minimal sketch of this two-phase dialog follows the reference below.]
—Significantly Reduced Learning Overhead. The CIM aims to reduce the overhead of learning to use the system by encouraging the use of gestures that users can easily associate with activities they perform in their everyday life. This transfer of experience can lead to a smaller learning overhead [Borchers 1997], allowing users to make the most of the system's features in a shorter time."
- Rachovides, D., Walkerdine, J., and Phillips, P. 2007. The conductor interaction method. ACM Trans. Multimedia Comput. Commun. Appl. 3, 4 (Dec. 2007), 1-23. DOI= http://doi.acm.org/10.1145/1314303.1314312
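The two-phased dialog described above maps naturally onto a small state machine: gaze events drive selection, gesture events drive manipulation, and gaze is locked out in between. Below is a minimal Python sketch of that sequencing, not the authors' implementation; all class, method, and gesture names are hypothetical stand-ins.

```python
"""Minimal sketch of the CIM two-phased dialog (illustrative only; all
names here are hypothetical, not the authors' implementation)."""

class MediaObject:
    def __init__(self, name, rect):
        self.name, self.rect = name, rect            # rect = (x, y, w, h)

    def contains(self, point):
        x, y, w, h = self.rect
        return x <= point[0] <= x + w and y <= point[1] <= y + h

class ConductorInteraction:
    """Phase 1: gaze selects an object. Phase 2: gestures manipulate it,
    with gaze disabled to avoid the Midas Touch problem."""

    def __init__(self, objects):
        self.objects = objects
        self.selected = None
        self.gaze_enabled = True

    def on_gaze(self, point):
        if not self.gaze_enabled:
            return
        for obj in self.objects:
            if obj.contains(point):          # dead space between objects
                self.selected = obj          # reduces accidental selection
                self.gaze_enabled = False    # phase 1 done; lock gaze out
                break

    def on_gesture(self, gesture):
        if self.selected is None:
            return
        print(f"{gesture} applied to {self.selected.name}")
        if gesture == "release":             # this gesture ends the dialog
            self.selected = None
            self.gaze_enabled = True         # ready for the next selection

# One complete dialog: look at the photo, then move and release it.
cim = ConductorInteraction([MediaObject("photo", (0, 0, 100, 100))])
cim.on_gaze((50, 50))      # phase 1: selection by gaze
cim.on_gesture("move")     # phase 2: manipulation by gesture
cim.on_gesture("release")  # dialog complete, gaze re-enabled
```

Running the example walks through one dialog: the gaze sample selects the photo, the gestures manipulate it, and the "release" gesture closes the dialog and re-enables gaze for the next selection.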
Gaze and Voice Based Game Interaction (Wilcox et al., 2008)
Their work was presented at the ACM SIGGRAPH 2008 with the associated poster:
- Wilcox, T., Evans, M., Pearce, C., Pollard, N., and Sundstedt, V. 2008. Gaze and voice based game interaction: the revenge of the killer penguins. In ACM SIGGRAPH 2008 Posters (Los Angeles, California, August 11 - 15, 2008).
Sunday, October 26, 2008
Low cost open source eye tracking from Argentina
The project runs for another three weeks, and the outcome should be very interesting. Check out the development blog at http://www.eyegazetracking.com/
Thursday, September 18, 2008
The Inspection of Very Large Images by Eye-gaze Control
"The researchers presented novel methods for navigating and inspecting extremely large images solely or primarily using eye gaze control. The need to inspect large images occurs in, for example, mapping, medicine, astronomy and surveillance, and this project considered the inspection of very large aerial images, held in Google Earth. Comparative search and navigation tasks suggest that, while gaze methods are effective for image navigation, they lag behind more conventional methods, so interaction designers might consider combining these techniques for greatest effect." (BCS Interaction)
Abstract
The increasing availability and accuracy of eye gaze detection equipment has encouraged its use for both investigation and control. In this paper we present novel methods for navigating and inspecting extremely large images solely or primarily using eye gaze control. We investigate the relative advantages and comparative properties of four related methods: Stare-to-Zoom (STZ), in which control of the image position and resolution level is determined solely by the user's gaze position on the screen; Head-to-Zoom (HTZ) and Dual-to-Zoom (DTZ), in which gaze control is augmented by head or mouse actions; and Mouse-to-Zoom (MTZ), using conventional mouse input as an experimental control.
The need to inspect large images occurs in many disciplines, such as mapping, medicine, astronomy and surveillance. Here we consider the inspection of very large aerial images, of which Google Earth is both an example and the one employed in our study. We perform comparative search and navigation tasks with each of the methods described, and record user opinions using the Swedish User-Viewer Presence Questionnaire. We conclude that, while gaze methods are effective for image navigation, they, as yet, lag behind more conventional methods, and interaction designers may well consider combining these techniques for greatest effect.
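The Stare-to-Zoom condition suggests a simple dwell-based control loop: when gaze stays within a small radius for long enough, the view re-centres on the fixation and the resolution level steps up. The Python sketch below illustrates one way such a loop could work, simplified to zoom-in only; the 60 Hz sampling rate, dwell threshold, fixation radius, and zoom step are invented for the example rather than taken from the paper.

```python
"""Illustrative Stare-to-Zoom (STZ) style control loop. Assumes a 60 Hz
gaze sample stream; thresholds are invented, not the paper's values."""

import math

DWELL_SAMPLES = 30        # ~0.5 s of steady gaze at 60 Hz (assumed)
FIXATION_RADIUS = 40.0    # pixels within which gaze counts as "staring"

class StareToZoom:
    def __init__(self):
        self.center = (0.0, 0.0)   # image point under inspection
        self.zoom = 1.0            # current resolution level
        self.anchor = None         # where the current stare began
        self.count = 0             # samples accumulated in this stare

    def on_gaze_sample(self, x, y):
        if self.anchor is not None and math.dist(self.anchor, (x, y)) < FIXATION_RADIUS:
            self.count += 1
            if self.count >= DWELL_SAMPLES:
                self.center = self.anchor   # re-centre on the stared-at point
                self.zoom *= 1.5            # and step up the resolution
                self.count = 0
        else:
            self.anchor, self.count = (x, y), 1   # stare broken; start over

stz = StareToZoom()
for _ in range(60):                 # one second of steady gaze at (300, 200)
    stz.on_gaze_sample(300.0, 200.0)
print(stz.center, stz.zoom)         # -> (300.0, 200.0) 2.25 (two zoom steps)
```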
Monday, September 15, 2008
Apple develops gaze-assisted interaction?
From the patent document:
"There are many possible applications that would benefit from the temporal fusion of gaze vectors with multi-touch movement data. For the purpose of example, one simple application will be discussed here: Consider a typical computer screen, which has several windows displayed. Assume that the user wishes to bring forward the window in the lower left corner, which is currently underneath two other windows. Without gaze vector fusion there are two means to do this, and both involve movement of the hand to another position. The first means is to move the mouse pointer over the window of interest and click the mouse button. The second means is to use a hot-key combination to cycle through the screen windows until the one of interest is brought forward. Voice input could also be used but it would be less efficient than the other means. With gaze vector fusion, the task is greatly simplified. For example, the user directs his gaze to the window of interest and then taps a specific chord on the multi-touch surface. The operation requires no translation of the hands and is very fast to perform."
"For another example, assume the user wishes to resize and reposition an iTunes window positioned in the upper left of a display screen. This can be accomplished using a multi-touch system by moving the mouse pointer into the iTunes window and executing a resize and reposition gesture. While this means is already an improvement over using just a mouse its efficiency can be further improved by the temporal fusion of gaze vector data. "
TeleGaze (Hemin, 2008)
Associated publications:
- Hemin Omer Latif, Nasser Sherkat and Ahmad Lotfi, "TeleGaze: Teleoperation through Eye Gaze", 7th IEEE International Conference on Cybernetic Intelligent Systems 2008, London, UK. Conference website: www.cybernetic.org.uk/cis2008
- Hemin Omer Latif, Nasser Sherkat and Ahmad Lotfi, "Remote Control of Mobile Robots through Human Eye Gaze: The Design and Evaluation of an Interface", SPIE Europe Security and Defence 2008, Cardiff, UK. Conference website: http://spie.org/security-defence-europe.xml
COGAIN 2008 Proceedings now online
Contents
Overcoming Technical Challenges in Mobile and Other Systems
- Off-the-Shelf Mobile Gaze Interaction
J. San Agustin and J. P. Hansen, IT University of Copenhagen, Denmark
- Fast and Easy Calibration for a Head-Mounted Eye Tracker
C. Cudel, S. Bernet, and M. Basset, University of Haute Alsace, France
- Magic Environment
L. Figueiredo, T. Nunes, F. Caetano, and A. Gomes, ESTG/IPG, Portugal
- AI Support for a Gaze-Controlled Wheelchair
P. Novák, T. Krajník, L. Přeučil, M. Fejtová, and O. Štěpánková, Czech Technical University, Czech Republic
- A Comparison of Pupil Centre Estimation Algorithms
D. Droege, C. Schmidt, and D. Paulus, University of Koblenz-Landau, Germany
- User Performance of Gaze-Based Interaction with On-line Virtual Communities
H. Istance, De Montfort University, UK, A. Hyrskykari, University of Tampere, Finland, S. Vickers, De Montfort University, UK, and N. Ali, University of Tampere, Finland
- Multimodal Gaze Interaction in 3D Virtual Environments
E. Castellina and F. Corno, Politecnico di Torino, Italy
- How Can Tiny Buttons Be Hit Using Gaze Only?
H. Skovsgaard and J. P. Hansen, IT University of Copenhagen, Denmark, and J. Mateo, Wright State University, Ohio, US
- Gesturing with Gaze
H. Heikkilä, University of Tampere, Finland
- NeoVisus: Gaze Driven Interface Components
M. Tall, Sweden
- Evaluations of Interactive Guideboard with Gaze-Communicative Stuffed-Toy Robot
T. Yonezawa, H. Yamazoe, A. Utsumi, and S. Abe, ATR Intelligent Robotics and Communications Laboratories, Japan
- Gaze-Contingent Passwords at the ATM
P. Dunphy, A. Fitch, and P. Olivier, Newcastle University, UK
- Scrollable Keyboards for Eye Typing
O. Špakov and P. Majaranta, University of Tampere, Finland
- The Use of Eye-Gaze Data in the Evaluation of Assistive Technology Software for Older People
S. Judge, Barnsley District Hospital Foundation, UK and S. Blackburn, Sheffield University, UK
- A Case Study Describing Development of an Eye Gaze Setup for a Patient with 'Locked-in Syndrome' to Facilitate Communication, Environmental Control and Computer Access
Z. Robertson and M. Friday, Barnsley General Hospital, UK