Friday, July 8, 2011
Gliding and Saccadic Gaze Gesture Recognition in Real Time (Rozado, 2011)
Monday, April 18, 2011
AutomotiveUI'11 - 3rd International Conference On Automotive User Interfaces and Interactive Vehicular Applications
The challenge that arises from the proliferation of in-car devices is that they may distract drivers from the primary task of driving, with possibly disastrous results. Thus, one of the major goals of this conference is to explore ways in which in-car user interfaces can be designed so as to lessen driver distraction while still enabling valuable services. This is challenging, especially given that the design of in-car devices, which was historically the responsibility of car manufacturers and their parts suppliers, is now a responsibility shared among a large and ever-changing group of parties. These parties include car OEMs, Tier 1 and Tier 2 suppliers of factory-installed electronics, as well as the manufacturers of hardware and software that is brought into the car, for example on personal navigation devices, smartphones, and tablets.
As we consider driving safety, our focus in designing in-car user interfaces should not be purely on eliminating distractions. In-car user interfaces also offer the opportunity to improve the driver's performance, for example by increasing her awareness of upcoming hazards. They can also enhance the experience of all kinds of passengers in the car. To this end, a further goal of AutomotiveUI 2011 is the exploration of in-car interfaces that address the varying needs of different types of users (including disabled drivers, elderly drivers or passengers, and the users of rear-seat entertainment systems). Overall our goal is to advance the state of the art in vehicular user experiences, in order to make cars both safer and more enjoyable places to spend time. (http://www.auto-ui.org)
Topics include, but are not limited to:
* new concepts for in-car user interfaces
* multimodal in-car user interfaces
* in-car speech and audio user interfaces
* text input and output while driving
* multimedia interfaces for in-car entertainment
* evaluation and benchmarking of in-car user interfaces
* assistive technology in the vehicular context
* methods and tools for automotive user interface research
* development methods and tools for automotive user interfaces
* automotive user interface frameworks and toolkits
* detecting and estimating user intentions
* detecting/measuring driver distraction and estimating cognitive load
* biometrics and physiological sensors as a user interface component
* sensors and context for interactive experiences in the car
* user interfaces for information access (search, browsing, etc.) while driving
* user interfaces for navigation or route guidance
* applications and user interfaces for inter-vehicle communication
* in-car gaming and entertainment
* different user groups and user group characteristics
* in-situ studies of automotive user interface approaches
* general automotive user experience research
* driving safety research using real vehicles and simulators
* subliminal techniques for workload reduction
SUBMISSIONS
AutomotiveUI 2011 invites submissions in the following categories:
* Papers (Submission Deadline: July 11th, 2011)
* Workshops (Submission Deadline: July 25th, 2011)
* Posters & Interactive Demos (Submission Deadline: Oct. 10th, 2011)
* Industrial Showcase (Submission Deadline: Oct. 10th, 2011)
For more information on the submission categories please check http://www.auto-ui.org/11/submit.php
Thursday, January 13, 2011
Eye HDR: gaze-adaptive system for displaying high-dynamic-range images (Rahardja et al., 2009)
- Susanto Rahardja, Farzam Farbiz, Corey Manders, Huang Zhiyong, Jamie Ng Suat Ling, Ishtiaq Rasool Khan, Ong Ee Ping, and Song Peng. 2009. Eye HDR: gaze-adaptive system for displaying high-dynamic-range images. In ACM SIGGRAPH ASIA 2009 Art Gallery & Emerging Technologies: Adaptation (SIGGRAPH ASIA '09). ACM, New York, NY, USA, 68-68. DOI=10.1145/1665137.1665187. (pdf, it's a one page poster)
Monday, August 16, 2010
Call for Papers: ACM Transactions Special Issue on Eye Gaze
Special Issue on Eye Gaze in Intelligent Human-Machine Interaction
Aims and Scope
Partly because of the increasing availability of nonintrusive and high-performance eye tracking devices, recent years have seen a growing interest in incorporating human eye gaze in intelligent user interfaces. Eye gaze has been used as a pointing mechanism in direct manipulation interfaces, for example, to assist users with “locked-in syndrome”. It has also been used as a reflection of information needs in web search and as a basis for tailoring information presentation. Detection of joint attention as indicated by eye gaze has been used to facilitate computer-supported human-human communication. In conversational interfaces, eye gaze has been used to improve language understanding and intention recognition. On the output side, eye gaze has been incorporated into the multimodal behavior of embodied conversational agents. Recent work on human-robot interaction has explored eye gaze in incremental language processing, visual scene processing, and conversation engagement and grounding.
This special issue will report on state-of-the-art computational models, systems, and studies that concern eye gaze in intelligent and natural human-machine communication. The nonexhaustive list of topics below indicates the range of appropriate topics; in case of doubt, please contact the guest editors. Papers that focus mainly on eye tracking hardware and software as such will be relevant (only) if they make it clear how the advances reported open up new possibilities for the use of eye gaze in at least one of the ways listed above.
Topics
- Empirical studies of eye gaze in human-human communication that provide new insight into the role of eye gaze and suggest implications for the use of eye gaze in intelligent systems. Examples include new empirical findings concerning eye gaze in human language processing, in human-vision processing, and in conversation management.
- Algorithms and systems that incorporate eye gaze for human-computer interaction and human-robot interaction. Examples include gaze-based feedback to information systems; gaze-based attention modeling; exploiting gaze in automated language processing; and controlling the gaze behavior of embodied conversational agents or robots to enable grounding, turn-taking, and engagement.
- Applications that demonstrate the value of incorporating eye gaze in practical systems to enable intelligent human-machine communication.
Guest Editors
- Elisabeth André, University of Augsburg, Germany (contact: andre[at]informatik[dot]uni-augsburg.de)
- Joyce Chai, Michigan State University, USA
Important Dates
- By December 15th, 2010: Submission of manuscripts
- By March 23rd, 2011: Notification about decisions on initial submissions
- By June 23rd, 2011: Submission of revised manuscripts
- By August 25th, 2011: Notification about decisions on revised manuscripts
- By September 15th, 2011: Submission of manuscripts with final minor changes
- Starting October, 2011: Publication of the special issue on the TiiS website and subsequently in the ACM Digital Library and as a printed issue
Tuesday, August 10, 2010
Eye control for PTZ cameras in video surveillance
Monday, May 24, 2010
EyePhone - Mobile gaze interaction from Dartmouth College
Abstract
As smartphones evolve researchers are studying new techniques to ease the human-mobile interaction. We propose EyePhone, a novel "hands free" interfacing system capable of driving mobile applications/functions using only the user's eyes movement and actions (e.g., wink). EyePhone tracks the user's eye movement across the phone's display using the camera mounted on the front of the phone; more specifically, machine learning algorithms are used to: i) track the eye and infer its position on the mobile phone display as a user views a particular application; and ii) detect eye blinks that emulate mouse clicks to activate the target application under view. We present a prototype implementation of EyePhone on a Nokia N810, which is capable of tracking the position of the eye on the display, mapping these positions to a function that is activated by a wink. At no time does the user have to physically touch the phone display.
- Emiliano Miluzzo, Tianyu Wang, Andrew T. Campbell, EyePhone: Activating Mobile Phones With Your Eyes. To appear in Proc. of The Second ACM SIGCOMM Workshop on Networking, Systems, and Applications on Mobile Handhelds (MobiHeld'10), New Delhi, India, August 30, 2010. [pdf] [video]
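The interaction EyePhone describes, a coarse eye position selecting an on-screen target and a blink acting as the click, is easy to mock up. Below is a minimal, hypothetical sketch (not the authors' code): it assumes some external vision pipeline already delivers a normalized eye position and a blink flag, and simply maps them onto a 3x3 grid of applications.

```python
# Hypothetical sketch of EyePhone-style interaction: a 3x3 grid of targets,
# eye position selects a cell, a blink "clicks" it. The gaze/blink source is
# a stub; a real system would use the front camera and a vision pipeline.
import random

GRID = [["Mail", "Maps", "Music"],
        ["Phone", "Camera", "Clock"],
        ["Notes", "Web", "Settings"]]

def read_eye_state():
    """Stub for a tracker: returns (x, y) in [0, 1) and a blink flag."""
    return random.random(), random.random(), random.random() < 0.1

def cell_for(x, y):
    col = min(int(x * 3), 2)
    row = min(int(y * 3), 2)
    return row, col

for _ in range(20):                       # poll a few "frames"
    x, y, blink = read_eye_state()
    row, col = cell_for(x, y)
    target = GRID[row][col]
    if blink:
        print(f"blink over '{target}' -> launch {target}")
    else:
        print(f"hovering '{target}'")
```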
Monday, March 29, 2010
Text 2.0 gaze assisted reading
More information:
Zdf.de: "Wenn das Auge die Seite umblättert?" (When the eye turns the page?)
Wired: Eye-Tracking Tablets and the Promise of Text 2.0
More demos at the groups website
Thursday, October 8, 2009
DoCoMo EOG update
Thanks Roman for the links!
Monday, September 28, 2009
Wearable Augmented Reality System using Gaze Interaction (Park, Lee & Choi)
Abstract
"Undisturbed interaction is essential to provide immersive AR environments. There have been a lot of approaches to interact with VEs (virtual environments) so far, especially in hand metaphor. When the user‟s hands are being used for hand-based work such as maintenance and repair, necessity of alternative interaction technique has arisen. In recent research, hands-free gaze information is adopted to AR to perform original actions in concurrence with interaction. [3, 4]. There has been little progress on that research, still at a pilot study in a laboratory setting. In this paper, we introduce such a simple WARS(wearable augmented reality system) equipped with an HMD, scene camera, eye tracker. We propose „Aging‟ technique improving traditional dwell-time selection, demonstrate AR gallery – dynamic exhibition space with wearable system."
- Park, H. M., Seok Han Lee, and Jong Soo Choi 2008. Wearable augmented reality system using gaze interaction. In Proceedings of the 2008 7th IEEE/ACM international Symposium on Mixed and Augmented Reality - Volume 00 (September 15 - 18, 2008). Symposium on Mixed and Augmented Reality. IEEE Computer Society, Washington, DC, 175-176. DOI= http://dx.doi.org/10.1109/ISMAR.2008.4637353
Monday, September 14, 2009
GaZIR: Gaze-based Zooming Interface for Image Retrieval (Kozma L., Klami A., Kaski S., 2009)
Abstract
"We introduce GaZIR, a gaze-based interface for browsing and searching for images. The system computes on-line predictions of relevance of images based on implicit feedback, and when the user zooms in, the images predicted to be the most relevant are brought out. The key novelty is that the relevance feedback is inferred from implicit cues obtained in real-time from the gaze pattern, using an estimator learned during a separate training phase. The natural zooming interface can be connected to any content-based information retrieval engine operating on user feedback. We show with experiments on one engine that there is sufficient amount of information in the gaze patterns to make the estimated relevance feedback a viable choice to complement or even replace explicit feedback by pointing-and-clicking."
- László Kozma, Arto Klami, and Samuel Kaski: GaZIR: Gaze-based Zooming Interface for Image Retrieval. To appear in Proceedings of the 11th Conference on Multimodal Interfaces and The Sixth Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI), Boston, MA, USA, November 2-6, 2009. (abstract, pdf)
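The core idea, inferring relevance from implicit gaze cues rather than clicks, can be illustrated with a toy scoring function. The sketch below is not GaZIR's learned estimator; it assumes two made-up per-image gaze features (total fixation time and number of revisits) and combines them with invented weights standing in for the training phase.

```python
# Toy illustration of implicit relevance feedback from gaze (not GaZIR's
# estimator): score each image from simple gaze features using weights
# that a real system would learn during a separate training phase.
import math
from dataclasses import dataclass

@dataclass
class GazeFeatures:
    fixation_time_ms: float   # total time spent fixating the thumbnail
    revisits: int             # how many times gaze returned to it

# Hypothetical weights; GaZIR learns its estimator from training data.
W_TIME, W_REVISIT, BIAS = 0.004, 0.5, -1.0

def relevance(f: GazeFeatures) -> float:
    """Logistic score in (0, 1): higher means 'probably relevant'."""
    z = W_TIME * f.fixation_time_ms + W_REVISIT * f.revisits + BIAS
    return 1.0 / (1.0 + math.exp(-z))

images = {
    "img_01": GazeFeatures(fixation_time_ms=900, revisits=3),
    "img_02": GazeFeatures(fixation_time_ms=120, revisits=0),
}
ranked = sorted(images, key=lambda name: relevance(images[name]), reverse=True)
print(ranked)   # images predicted most relevant would be zoomed in first
```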
Tuesday, August 11, 2009
ALS Society of British Columbia announces Engineering Design Awards (Canadian students only)
Project ideas:
- Low-cost eye tracker
- Issue: Current commercial eye-gaze tracking systems cost thousands to tens of thousands of dollars. The high cost of eye-gaze trackers prevents potential users from accessing eye-gaze tracking tools. The hardware components required for eye-gaze tracking do not justify the price, and a lower-cost alternative is desirable. Webcams may be used for low-cost imaging, along with simple infrared diodes for system lighting. Alternatively, visible-light systems may also be investigated. Open-source eye-gaze tracking software is also available. (ed: ITU GazeTracker, OpenEyes, Track Eye, OpenGazer and MyEye (free, no source))
- Goal: The goal of this design project is to develop a low-cost and usable eye-gaze tracking system based on simple commercial off-the-shelf hardware.
- Deliverables: A working prototype of a functional, low-cost (< $200) eye-gaze tracking system.
- Eye-glasses compensation
- Issue: The use of eye-glasses can cause considerable problems in eye-gaze tracking. The issue stems from reflections off the eye-glasses due to the controlled infrared lighting (on- and off-axis light sources) used to highlight features of the face. The key features of interest are the pupils and glints (reflections off the surface of the cornea). Incorrectly identifying the pupils and glints then results in invalid estimation of the point-of-gaze.
- Goal: The goal of this design project is to develop techniques for either: 1) avoiding image corruption with eye-glasses on a commercial eye-gaze tracker, or 2) developing a controlled lighting scheme to ensure valid pupil and glint identification in the presence of eye-glasses.
- Deliverables: Two forms of deliverables are possible: 1) A working prototype illustrating functional eye-gaze tracking in the presence of eye-glasses with a commercial eye-gaze tracker, or 2) A working prototype illustrating accurate real-time identification of the pupil and glints using controlled infrared lighting (on and off axis light sources) in the presence of eye-glasses.
- Innovative selection with ALS and eye gaze
- Issue: As mobility steadily decreases in the more advanced stages of ALS, alternative techniques for selection are required. Current solutions include head switches, sip-and-puff switches, and dwell-time activation, depending on the degree of mobility loss, to name a few. The use of dwell time requires no mobility other than eye motion; however, this technique suffers from 'lag', in that the user must wait out the dwell duration for each selection, as well as from the 'Midas touch' problem, in which unintended selections occur if the gaze point is stationary for too long (a minimal dwell-selection loop is sketched after this list).
- Goal: The goal of this design project is to develop a technique for improved selection with eye-gaze for individuals with only eye motion available. Possible solutions may involve novel HCI designs for interaction, including various adaptive and predictive technologies, the consideration of contextual cues, and the introduction of ancillary inputs such as EMG or EEG.
- Deliverables: A working prototype illustrating eye-motion only selection with a commercial eye-gaze tracking system.
- Novel and valuable eye-gaze tracking applications and application enhancements
- Issue: To date, relatively few gaze-tracking applications have been developed. These include relatively simplistic applications such as the tedious typing of words, and even in such systems, little is done to ease the effort required, e.g., systems typically do not allow for the saving and reuse of words and sentences.
- Goal: The goal of this design project is to develop one or more novel applications or application enhancements that take gaze as input, and that provide new efficiencies or capabilities that could significantly improve the quality of life of those living with ALS.
- Deliverables: A working prototype illustrating one or more novel applications that take eye motion as an input. The prototype must be developed and implemented to the extent that the potential efficiencies and/or reductions in effort can be evaluated by persons living with ALS and others on an evaluation panel.
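For reference, the dwell-time mechanism that the selection project aims to improve on can be captured in a few lines. This is a minimal, hypothetical sketch with a stubbed gaze source, not code for any of the trackers listed above; the two constants make the 'lag' versus Midas-touch trade-off explicit.

```python
# Minimal dwell-time selection sketch: fire a selection once gaze stays within
# DWELL_RADIUS pixels of where the dwell started for DWELL_TIME seconds.
# Gaze samples come from a stub; a real system would read an eye tracker.
import math, random, time

DWELL_TIME = 0.8      # seconds to hold gaze before selecting (the "lag")
DWELL_RADIUS = 40.0   # pixels of tolerated jitter

def gaze_sample():
    """Stub gaze source: wanders around a fixed point with some jitter."""
    return 400 + random.gauss(0, 10), 300 + random.gauss(0, 10)

def dwell_select():
    anchor = gaze_sample()
    start = time.time()
    while True:
        x, y = gaze_sample()
        if math.hypot(x - anchor[0], y - anchor[1]) > DWELL_RADIUS:
            anchor, start = (x, y), time.time()   # gaze moved: restart dwell
        elif time.time() - start >= DWELL_TIME:
            return anchor                          # dwell completed: select here
        time.sleep(0.02)                           # ~50 Hz polling

print("selected at", dwell_select())
```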
See the Project Ideas for more information. For contact information see page two of the announcement.
Wednesday, July 22, 2009
Gaze Interaction in Immersive Virtual Reality - 3D Eye Tracking in Virtual Worlds
Publications
- Pfeiffer, T. (2008). Towards Gaze Interaction in Immersive Virtual Reality: Evaluation of a Monocular Eye Tracking Set-Up. In Virtuelle und Erweiterte Realität - Fünfter Workshop der GI-Fachgruppe VR/AR, 81-92. Aachen: Shaker Verlag GmbH. [Abstract] [BibTeX] [PDF]
- Pfeiffer, T., Latoschik, M.E. & Wachsmuth, I. (2008). Evaluation of Binocular Eye Trackers and Algorithms for 3D Gaze Interaction in Virtual Reality Environments. Journal of Virtual Reality and Broadcasting, 5 (16), dec. [Abstract] [BibTeX] [URL] [PDF]
- Pfeiffer, T., Donner, M., Latoschik, M.E. & Wachsmuth, I. (2007). 3D fixations in real and virtual scenarios. Journal of Eye Movement Research, Special issue: Abstracts of the ECEM 2007, 13.
- Pfeiffer, T., Donner, M., Latoschik, M.E. & Wachsmuth, I. (2007). Blickfixationstiefe in stereoskopischen VR-Umgebungen: Eine vergleichende Studie. In Vierter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR, 113-124. Aachen: Shaker. [Abstract] [BibTeX] [PDF]
List of all publications available here.
Tuesday, May 5, 2009
Gaze-Augmented Manual Interaction (Bieg, H.J, 2009)
"This project will demonstrate a new approach to employing users’ gaze in the context of human-computer interaction. This new approach uses gaze passively in order to improve the speed and precision of manually controlled pointing techniques. Designing such gaze augmented manual techniques requires an understanding of the principles that govern the coordination of hand and eye. This coordination is influenced by situational parameters (task complexity, input device used, etc.), which this project will explore in controlled experiments."
- Bieg, H. 2009. Gaze-augmented manual interaction. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 3121-3124. DOI= http://doi.acm.org/10.1145/1520340.1520442
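Using gaze only passively to speed up manual pointing is often done MAGIC-style: warp the cursor to the neighbourhood of the gaze point when a manual movement starts, then let the mouse handle the fine positioning. The sketch below illustrates that general idea, not necessarily Bieg's exact design; the gaze coordinates and mouse deltas are assumed to come from elsewhere.

```python
# Hedged sketch of gaze-augmented pointing (MAGIC-style cursor warping, used
# here only to illustrate the idea): when the user starts moving the mouse,
# jump the cursor near the current gaze point, then apply mouse deltas normally.
WARP_THRESHOLD = 200.0   # only warp if the cursor is far from gaze (pixels)

class GazeAugmentedCursor:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y
        self.moving = False

    def on_mouse_delta(self, dx, dy, gaze):
        gx, gy = gaze
        if not self.moving:                      # first delta of a movement burst
            far = ((self.x - gx) ** 2 + (self.y - gy) ** 2) ** 0.5 > WARP_THRESHOLD
            if far:
                self.x, self.y = gx, gy          # coarse jump to the gaze point
            self.moving = True
        self.x += dx                             # fine positioning stays manual
        self.y += dy
        return self.x, self.y

    def on_mouse_idle(self):
        self.moving = False

cur = GazeAugmentedCursor(100, 100)
print(cur.on_mouse_delta(5, -2, gaze=(800, 400)))   # warps near gaze, then refines
```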
Friday, November 21, 2008
Eye movement control of remote robot
The virgin tour around the ITU office corridor (on YouTube)
Available on YouTube
Thursday, September 18, 2008
The Inspection of Very Large Images by Eye-gaze Control
"The researchers presented novel methods for navigating and inspecting extremely large images solely or primarily using eye gaze control. The need to inspect large images occurs in, for example, mapping, medicine, astronomy and surveillance, and this project considered the inspection of very large aerial images, held in Google Earth. Comparative search and navigation tasks suggest that, while gaze methods are effective for image navigation, they lag behind more conventional methods, so interaction designers might consider combining these techniques for greatest effect." (BCS Interaction)
Abstract
The increasing availability and accuracy of eye gaze detection equipment has encouraged its use for both investigation and control. In this paper we present novel methods for navigating and inspecting extremely large images solely or primarily using eye gaze control. We investigate the relative advantages and comparative properties of four related methods: Stare-to-Zoom (STZ), in which control of the image position and resolution level is determined solely by the user's gaze position on the screen; Head-to-Zoom (HTZ) and Dual-to-Zoom (DTZ), in which gaze control is augmented by head or mouse actions; and Mouse-to-Zoom (MTZ), using conventional mouse input as an experimental control.
The need to inspect large images occurs in many disciplines, such as mapping, medicine, astronomy and surveillance. Here we consider the inspection of very large aerial images, of which Google Earth is both an example and the one employed in our study. We perform comparative search and navigation tasks with each of the methods described, and record user opinions using the Swedish User-Viewer Presence Questionnaire. We conclude that, while gaze methods are effective for image navigation, they, as yet, lag behind more conventional methods and interaction designers may well consider combining these techniques for greatest effect.
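The Stare-to-Zoom condition can be approximated with a very small control loop: zoom in while gaze sits near the current point of interest, pan toward off-centre gaze, and back off when gaze leaves the viewport. The following is a hypothetical simplification of such a loop, not the implementation used in the study.

```python
# Hypothetical Stare-to-Zoom control loop (a simplification, not the paper's
# code): gaze near the centre zooms in; off-centre gaze pans the view; gaze
# outside the viewport zooms back out. Coordinates are normalized to [0, 1].
CENTER_RADIUS = 0.15   # how close to the centre counts as "staring"
ZOOM_IN, ZOOM_OUT, PAN_GAIN = 1.02, 0.98, 0.05

def step(view, gaze):
    """view = dict(cx, cy, zoom); gaze = (x, y) in screen coordinates [0, 1]."""
    gx, gy = gaze
    if not (0.0 <= gx <= 1.0 and 0.0 <= gy <= 1.0):
        view["zoom"] *= ZOOM_OUT                    # looked away: back off
        return view
    dx, dy = gx - 0.5, gy - 0.5
    if (dx * dx + dy * dy) ** 0.5 < CENTER_RADIUS:
        view["zoom"] *= ZOOM_IN                     # steady stare: zoom in
    else:
        view["cx"] += PAN_GAIN * dx / view["zoom"]  # drift the view toward gaze
        view["cy"] += PAN_GAIN * dy / view["zoom"]
    return view

view = {"cx": 0.5, "cy": 0.5, "zoom": 1.0}
for g in [(0.52, 0.5), (0.9, 0.5), (0.5, 0.48), (1.2, 0.5)]:
    view = step(view, g)
print(view)
```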
Saturday, August 23, 2008
GaCIT in Tampere, day 3.
Games
This is an area of gaze interaction with high potential, and since the gaming industry has grown into a huge industry it may help make eye trackers accessible/affordable. The development would also be beneficial for users with motor impairments. A couple of example implementations were then introduced. The first one was a first-person shooter running on an Xbox 360:
The experimental evaluation contained 10 repeated trials to look at learning (6 subjects). Three different configurations were used: 1) gamepad controller for moving and aiming (no gaze), 2) gamepad controller for moving and gaze for aiming, and 3) gamepad controller for moving forward only, with gaze for aiming and steering of the movement.
Results:
However, twice as many missed shots were fired in the gaze condition, which can be described as a "machine gun" approach. Noteworthy is that no filtering was applied to the gaze position.
Howell has conducted an analysis of common tasks in gaming; below is a representation of the number of actions in the game Guild Wars. The two bars indicate 1) novices and 2) experienced users.
Controlling all of these different actions requires switching of task mode. This is very challenging considering only one input modality (gaze) with no method of "clicking".
There are several ways a gaze interface can be constructed, going from the bottom up. First, the position of gaze can be used to emulate the mouse cursor (at the system level). Second, a transparent overlay can be placed on top of the application. Third, a specific gaze interface can be developed (which has been my own approach); this requires a modification of the original application, which is not always possible.
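For the first, bottom-up option, a minimal recipe is to smooth the raw gaze samples and hand the result to whatever cursor API the platform provides. The sketch below is hypothetical: the gaze source is a stub and the cursor move is just printed (a library call such as pyautogui.moveTo could be substituted on a desktop system).

```python
# Hypothetical system-level gaze-to-cursor emulation: exponentially smooth the
# raw gaze samples to suppress jitter, then hand the result to the platform's
# cursor API (printed here; swap in a real call for your OS).
import random

ALPHA = 0.3   # smoothing factor: lower = smoother but laggier cursor

def gaze_sample():
    """Stub gaze source in screen pixels; replace with a real tracker feed."""
    return 960 + random.gauss(0, 25), 540 + random.gauss(0, 25)

def move_cursor(x, y):
    print(f"cursor -> ({x:.0f}, {y:.0f})")   # e.g. pyautogui.moveTo(x, y)

sx, sy = gaze_sample()
for _ in range(10):
    gx, gy = gaze_sample()
    sx = ALPHA * gx + (1 - ALPHA) * sx       # exponential moving average
    sy = ALPHA * gy + (1 - ALPHA) * sy
    move_cursor(sx, sy)
```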
The Snap/Clutch interaction method, developed by Stephen Vickers who is working with Howell, operates at the system level to emulate the mouse. This allows specific gaze gestures to be interpreted and used to switch mode. For example, a quick glance to the left of the screen will activate a left-mouse-button-click mode. When an eye fixation is then detected in a specific region, a left mouse click is issued at that location.
When this is applied to games such as World of Warcraft (demo), specific regions of the screen can be used to issue movement actions in that direction. The image below illustrates these regions overlaid on the screen. When a fixation occurs in the A region, an action to move in that direction is issued to the game itself.
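The glance-to-switch-mode plus fixation-in-region behaviour described above can be sketched as a tiny state machine. This is a hypothetical reconstruction for illustration, not Vickers' Snap/Clutch code; the screen width and the single region "A" are made up.

```python
# Hypothetical Snap/Clutch-style sketch (not the original implementation):
# a quick glance off the left edge of the screen switches to "left-click" mode;
# a subsequent fixation inside a named region issues that mode's action there.
SCREEN_W = 1920                            # assumed screen width
REGIONS = {"A": (0, 0, SCREEN_W, 200)}     # made-up region: top strip of the screen

class SnapClutch:
    def __init__(self):
        self.mode = "idle"

    def on_glance(self, x, y):
        """Called for fast saccade samples: off-screen left switches mode."""
        if x < 0:
            self.mode = "left_click"
            print("mode -> left_click")

    def on_fixation(self, x, y):
        """Called when a stable fixation is detected: act per mode and region."""
        for name, (rx, ry, rw, rh) in REGIONS.items():
            if rx <= x < rx + rw and ry <= y < ry + rh:
                print(f"{self.mode} issued in region {name} at ({x}, {y})")
                return
        print(f"{self.mode} at ({x}, {y}), outside any region")

sc = SnapClutch()
sc.on_glance(-10, 500)       # glance off the left edge: switch mode
sc.on_fixation(900, 100)     # fixation inside region A: issue the action there
```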
After lunch we had a hands-on session with the Snap/Clutch interaction method, where eight Tobii eye trackers were used for a multiplayer round of WoW! It is very different from a traditional mouse/keyboard setup and takes some time to get used to.
- Istance, H.O.,Bates, R., Hyrskykari, A. and Vickers, S. Snap Clutch, a Moded Approach to Solving the Midas Touch Problem. Proceedings of the 2008 symposium on Eye Tracking Research & Applications; ETRA 2008. Savannah, GA. 26th-28th March 2008. Download
- Bates, R., Istance, H.O., and Vickers, S. Gaze Interaction with Virtual On-Line Communities: Levelling the Playing Field for Disabled Users. Proceedings of the 4th Cambridge Workshop on Universal Access and Assistive Technology; CWUAAT 2008. University of Cambridge, 13th-16th April 2008. Download
The second part of the lecture concerned gaze interaction for mobile phones. This allows for ubiquitous computing where the eye tracker is integrated with a wearable display. As a new field it comes with certain issues (stability, processing power, variation in lighting, etc.), but all of which will be solved over time. The big question is what the "killer application" will be (entertainment?). A researcher from Nokia attended the lecture and introduced a prototype system. Luckily I had the chance to visit their research department the following day to get hands-on with their head-mounted display with an integrated eye tracker (more on this in another post).
The third part was about stereoscopic displays, which add a third dimension (depth) to the traditional X and Y axes. There are several projects around the world working towards making this an everyday reality. However, tracking the depth of a gaze fixation is limited: vergence eye movements (as seen in the distance between the two pupils) are hard to measure once the distance to the object grows beyond two meters.
Calculating convergence angles (assuming a half inter-pupillary distance of 3.3 cm):
- d = 100 cm: tan θ = 3.3 / 100, so θ ≈ 1.89 deg.
- d = 200 cm: tan θ = 3.3 / 200, so θ ≈ 0.95 deg.
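The worked values above follow from θ = arctan(3.3 / d), taking 3.3 cm as half the inter-pupillary distance. A few lines of Python reproduce them and show how flat the curve becomes past a metre or two, which is why estimating gaze depth from vergence gets unreliable at range:

```python
# Convergence half-angle for a half inter-pupillary distance of 3.3 cm:
# theta = arctan(3.3 / d). Beyond a couple of metres the angle changes very
# little, which makes gaze depth from vergence hard to measure at range.
import math

HALF_IPD_CM = 3.3

for d_cm in (50, 100, 200, 400):
    theta = math.degrees(math.atan(HALF_IPD_CM / d_cm))
    print(f"d = {d_cm:3d} cm -> theta = {theta:.2f} deg")
```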
Related papers on stereoscopic eye tracking:
- Essig, K., Pomplun, M. & Ritter, H. (2006). A neural network for 3D gaze recording with binocular eye trackers. International Journal of Parallel, Emergent, and Distributed Systems, 21 (2), 79-95.
- Y-M Kwon, K-W Jeon, J Ki, Q M. Shahab, S Jo and S-K Kim (2006). 3D Gaze Estimation and Interaction to Stereo Display. The International Journal of Virtual Reality, 5(3):41-45.
Tuesday, July 22, 2008
Eye gestures (Hemmert, 2007)
One example:
"Looking with one eye is a simple action. Seeing the screen with only one eye might therefore be used to switch the view to an alternate perspective on the screen contents: a filter for quick toggling. In this example, closing one eye filters out information on screen to a subset of the original data, such as an overview over the browser page or only the five most recently edited files. It was to see how the users would accept the functionality at the cost of having to close one eye, a not totally natural action." (Source)
Tuesday, April 15, 2008
Gaze Interaction Demo (Powerwall@Konstanz Uni.)
The demonstration can be viewed in better quality (10Mb)
Also make sure to check out the 360 deg. Globorama display demonstration. It does not use eye tracking for interaction but a laser pointer. Nevertheless, it is a really cool immersive experience, especially the Google Earth zoom-in to 360-degree panoramas.
Thursday, March 27, 2008
RApid GAze-Based Interaction Techniques (RAGABITS)
Stephen Vickers at the Computer Human Interaction Research Group at De Montfort University, UK, has developed interaction techniques that allow gaze-based control of several popular online virtual worlds such as World of Warcraft or Second Life. This research will be presented at ETRA 2008 (US) under the title RAGABITS (RApid GAze-Based Interaction Techniques) and is especially intended for users with severe motor impairments.
Selection method seems stable. None of the usual jitter can be seen. Nice!
Quote from http://www.ioct.dmu.ac.uk/projects/eyegaze.html
"Online virtual worlds and games (MMORPG's) have much to offer users with severe motor disabilities. It gives this user group the opportunity as entirely able-bodied to others in the virtual world. if they so wish. The extent to which a user has to reveal their disability becomes a privacy issue. Many of the avatars in Second Life appear as stylized versions of the users that control them and that stylization is the choice of the user. This choice is equally appropriate for disabled users. While the appearance of the user's avatar may not reveal the disability of the person that controls it, the behavior and speed or interaction in the world may do.
Many users with severe motor impairments may not be able to operate a keyboard or hand mouse and may also struggle with speech and head movement. Eye gaze is one method of interaction that has been used successfully in enabling access to desktop environments. However, simply emulating a mouse using eye gaze is not sufficient for interaction in online virtual worlds, and the user's privacy can be exposed unless efficient gaze-based interaction techniques, appropriate to activities in online worlds and games, can be provided."
Monday, March 10, 2008
Inspiration: Dwell-Based Pointing in Applications (Muller-Tomfelde, 2007)
Abstract
"This paper describes exploratory studies and a formal experiment that investigate a particular temporal aspect of human pointing actions. Humans can express their intentions and refer to an external entity by pointing at distant objects with their fingers or a tool. The focus of this research is on the dwell time, the time span that people remain nearly motionless during pointing at objects. We address two questions: Is there a common or natural dwell time in human pointing actions? What implications does this have for Human Computer Interaction? Especially in virtual environments, feedback about the referred object is usually provided to the user to confirm actions such as object selection. A literature review and two studies led to a formal experiment in a hand-immersive virtual environment in search for an appropriate feedback delay time for dwell-based pointing actions. The results and implications for applications for Human Computer Interaction are discussed. "
I find the part about the visual feedback experiment interesting.
Questions asked:
- 1: Do you have the impression that the system feedback happened in a reasonable time according to your action? Answer: confirmation occurred too fast (1), too late (7).
- 2: Did you have the feeling to wait for the feedback to happen? Answer: no I didn’t have to wait (1), yes, I waited (7).
- 3: Did you have the impression that the time delay for the feedback was natural? (i.e., as in a real life communication situation) Answer: time delay is not natural (1), quite natural (7).