Saturday, February 23, 2008

Talk: Sensing user attention (R. Vertegaal)

Stumbled upon a Google Tech Talk by Roel Vertegaal describing various projects at the Queen's University Human Media Lab, many of which use eye tracking technology. In general, it applies knowledge from cognitive science on attention and communication to practical human-computer interaction applications. Overall a nice 40-minute talk. Enjoy.

Abstract
Over the past few years, our work has centered around the development of computing technologies that are sensitive to what is perhaps the most important contextual cue for interacting with humans that exists: the fabric of their attention. Our research group has studied how humans communicate attention to navigate complex scenarios, such as group decision making. In the process, we developed many different prototypes of user interfaces that sense the users' attention, so as to be respectful players that share this most important resource with others. One of the most immediate methods for sensing human attention is to detect what object the eyes look at. The eye contact sensors our company has developed for this purpose work at long range, with great head movement tolerance, and many eyes. They do not require any personal calibration or coordinate system to function. Today I will announce Xuuk's first product, EyeBox2, a viewing statistics sensor that works at up to 10 meters. EyeBox2 allows the deployment of algorithms similar to Google's PageRank in the real world, where anything can now be ranked according to the attention it receives. This allows us, for example, to track mass consumer interest in products or ambient product advertisements. I will also illustrate how EyeBox2 ties into our laboratory's research on interactive technologies, showing prototypes of attention sensitive telephones, attentive video blogging glasses, speech recognition appliances as well as the world's first attentive hearing aid.


Roel Vertegaal is the director of the Human Media Lab at Queen's University in Kingston, Canada. Roel is the founder of Xuuk, which offers the EyeBox2, a remote eye tracker that works at distances of up to 10 meters (currently $1,500), and associated analysis software.

Inspiration: EyeWindows (Fono et al, 2005)

Continuing the zooming style of interaction that has become common within the field of gaze interaction is "EyeWindows: Evaluation of Eye-Controlled Zooming Windows for Focus Selection" (Fono & Vertegaal, 2005). Their paper describes two prototypes: a media browser with dynamic (elastic) allocation of screen real estate, and a second prototype that dynamically resizes desktop windows upon gaze fixation. Overall, great examples presented in a clear, well-structured paper, with an interesting evaluation of selection techniques.



Abstract
In this paper, we present an attentive windowing technique that uses eye tracking, rather than manual pointing, for focus window selection. We evaluated the performance of 4 focus selection techniques: eye tracking with key activation, eye tracking with automatic activation, mouse and hotkeys in a typing task with many open windows. We also evaluated a zooming windowing technique designed specifically for eye-based control, comparing its performance to that of a standard tiled windowing environment. Results indicated that eye tracking with automatic activation was, on average, about twice as fast as mouse and hotkeys. Eye tracking with key activation was about 72% faster than manual conditions, and preferred by most participants. We believe eye input performed well because it allows manual input to be provided in parallel to focus selection tasks. Results also suggested that zooming windows outperform static tiled windows by about 30%. Furthermore, this performance gain scaled with the number of windows used. We conclude that eye-controlled zooming windows with key activation provides an efficient and effective alternative to current focus window selection techniques. Download paper (pdf).

David Fono, Roel Vertegaal and Conner Dickie are researchers at the Human Media Lab at Queen's University in Kingston, Canada.

Friday, February 22, 2008

Inspiration: Fisheye Lens (Ashmore et al. 2005)

In the paper "Efficient Eye Pointing with a Fisheye Lens" (Ashmore et al., 2005) the usage of a fish eye magnification lens is slaved to the foveal region of the users gaze. This is another usage of the zooming style of interaction but compared to the ZoomNavigator (Skovsgaard, 2008) and the EyePointer (Kumar&Winograd, 2007) this is a continuous effect that will magnify what ever the users gaze lands upon. In other words, it is not meant to be a solution for dealing with the low accuracy of eye trackers in typical desktop (windows) interaction. Which makes is suitable for tasks of visual inspection for quality control, medical x-ray examination, satellite images etc. On the downside the nature of the lens distorts the image which breaks the original spatial relationship between items on the display (as demonstrated by the images below)

Abstract
"This paper evaluates refinements to existing eye pointing techniques involving a fisheye lens. We use a fisheye lens and a video-based eye tracker to locally magnify the display at the point of the user’s gaze. Our gaze-contingent fisheye facilitates eye pointing and selection of magnified (expanded) targets. Two novel interaction techniques are evaluated for managing the fisheye, both dependent on real-time analysis of the user’s eye movements. Unlike previous attempts at gaze-contingent fisheye control, our key innovation is to hide the fisheye during visual search, and morph the fisheye into view as soon as the user completes a saccadic eye movement and has begun fixating a target. This style of interaction allows the user to maintain an overview of the desktop during search while selectively zooming in on the foveal region of interest during selection. Comparison of these interaction styles with ones where the fisheye is continuously slaved to the user’s gaze (omnipresent) or is not used to affect target expansion (nonexistent) shows performance benefits in terms of speed and accuracy" Download paper (pdf)

The fisheye lens has been implemented commercially in the products of Idelix Software Inc., which has a set of demonstrations available.

Wednesday, February 20, 2008

Inspiration: GUIDe Project (Kumar&Winograd, 2007)

In the previous post I introduced the ZoomNavigator (Skovsgaard, 2008), which is similar to the EyePoint system (Kumar & Winograd, 2007) developed within the GUIDe project (Gaze-Enhanced User Interface Design), an initiative by the Human-Computer Interaction group at Stanford University. This system relies on both an eye tracker and a keyboard, which excludes users with disabilities (see video below). The aim of GUIDe is to make human-computer interaction as a whole "smarter" (as in more intuitive, faster and less cumbersome). This differs from the COGAIN initiative, which mainly aims at giving people with disabilities a higher quality of life.

Abstract
"The GUIDe (Gaze-enhanced User Interface Design) project in the HCI Group at Stanford University explores how gaze information can be effectively used as an augmented input in addition to keyboard and mouse. We present three practical applications of gaze as an augmented input for pointing and selection, application switching, and scrolling. Our gaze-based interaction techniques do not overload the visual channel and present a natural, universally-accessible and general purpose use of gaze information to facilitate interaction with everyday computing devices." Download paper (pdf)

Demonstration video
"The following video shows a quick 5 minute overview of our work on a practical solution for pointing and selection using gaze and keyboard. Please note, our objective is not to replace the mouse as you may have seen in several articles on the Web. Our objective is to provide an effective interaction technique that makes it possible for eye-gaze to be used as a viable alternative (like the trackpad, trackball, trackpoint or other pointing techniques) for everyday pointing and selection tasks, such as surfing the web, depending on the users' abilities, tasks and preferences."



The use of "focus points" is good design decisions as it provides the users with a fixation point which is much smaller than the actual target. This provides a clear and steady fixation which is easily discriminated by the eye tracker. The idea of displaying something that will "lure" the users fixation to remain still is something I intend to explore in my own project.

As mentioned, the GUIDe project has developed several applications besides EyePoint, such as EyeExpose (application switching), gaze-based password entry, and automatic text scrolling.
More information can be found in the GUIDe Publications.

Make sure to get a copy of Manu Kumar's Ph.D. thesis "Gaze-enhanced User Interface Design", which is a pleasure to read. Additionally, Manu has founded GazeWorks, a company which aims at making the technology accessible to the general public at a lower cost.

Inspiration: ZoomNavigator (Skovsgaard, 2008)

Following up on the StarGazer text entry interface presented in my previous post, another approach to zooming interfaces is employed in the ZoomNavigator (Skovsgaard, 2008). It addresses the well-known issues of using gaze as input on traditional desktop systems, namely inaccuracy and jitter. An interesting solution which relies on dwell-time execution, compared to the EyePoint system (Kumar & Winograd, 2007), which is described in the next post.

Abstract
The goal of this research is to estimate the maximum amount of noise of a pointing device that still makes interaction with a Windows interface possible. This work proposes zoom as an alternative activation method to the more well-known interaction methods (dwell and two-step-dwell activation). We present a magnifier called ZoomNavigator that uses the zoom principle to interact with an interface. Selection by zooming was tested with white noise in a range of 0 to 160 pixels in radius on an eye tracker and a standard mouse. The mouse was found to be more accurate than the eye tracker. The zoom principle applied allowed successful interaction with the smallest targets found in the Windows environment even with noise up to about 80 pixels in radius. The work suggests that the zoom interaction gives the user a possibility to make corrective movement during activation time eliminating the waiting time found in all types of dwell activations. Furthermore zooming can be a promising way to compensate for inaccuracies on low-resolution eye trackers or for instance if people have problems controlling the mouse due to hand tremors.


The sequence of images consists of screenshots from ZoomNavigator, showing
a zoom towards a Windows file called ZoomNavigator.exe.

The principles of ZoomNavigator are shown in the figure above. Zooming is used to focus on the attended object and eventually make a selection (an unambiguous action). ZoomNavigator allows actions similar to those of a conventional mouse (Skovsgaard, 2008). The system is described in a conference paper titled "Estimating acceptable noise-levels on gaze and mouse selection by zooming". Download paper (pdf)

Two-step zoom
The two-step zoom activation is demonstrated in the video below by IT University of Copenhagen (ITU) research director prof. John Paulin Hansen. Notice how the error rate is reduced by the zooming style of interaction, making it suitable for applications that require detailed discrimination. It might be slower, but error rates drop significantly.



"Dwell is the traditional way of making selections by gaze. In the video we compare dwell to magnification and zoom. While the hit-rate is 10 % with dwell on a 12 x 12 pixels target, it is 100 % for both magnification and zoom. Magnification is a two-step process though, while zoom only takes on selection. In the experiment, the initiation of a selection is done by pressing the spacebar. Normally, the gaze tracking system will do this automatically when the gaze remains within a limited area for more than approx. 100 ms"

For more information see the publications of the ITU.

Inspiration: StarGazer (Skovsgaard et al, 2008)

A major area of research for the COGAIN network is enabling communication for the disabled. The Innovative Communications group at the IT University of Copenhagen works continuously on making gaze-based interaction technology more accessible, especially in the field of assistive technology.

The ability to enter text into the system is crucial for communication; without hands or speech this is somewhat problematic. The StarGazer software aims at solving this by introducing a novel 3D approach to text entry. In December I had the opportunity to visit ITU and try StarGazer (among other things) myself; it is astonishingly easy to use. Within just a minute I was typing with my eyes. Rather than describing what it looks like, see the video below.
The associated paper is to be presented at the ETRA08 conference in March.



This introduces an important solution to the problem of eye tracker inaccuracy, namely zooming interfaces. Fixating on a specific region of the screen displays an enlarged version of that area, where objects can be more easily discriminated and selected.

The eyes are incredibly fast but, from the perspective of eye trackers, not really precise. This is due to the physiological properties of our visual system, in particular the foveal region of the eye. This retinal area produces the sharp, detailed part of our visual field, which in practice covers about the size of a thumbnail at arm's length. To bring another area into focus, a saccade takes place that moves the pupil, and thus our gaze; this is what the eye tracker registers. Hence the accuracy of most eye trackers is in the range of 0.5-1 degree of visual angle (in theory, that is).
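To get a feel for what 0.5-1 degree means on screen, here is a small back-of-the-envelope calculation in C#. The 70 cm viewing distance and the 96 DPI display are assumed example values, not measurements from any particular tracker:

    using System;

    class VisualAngleDemo
    {
        // Convert an angular tracker error (degrees) into an on-screen error (pixels),
        // given the eye-to-screen distance and the pixel density of the display.
        static double AngleToPixels(double degrees, double distanceMm, double pixelsPerMm)
        {
            double radians = degrees * Math.PI / 180.0;
            double errorMm = Math.Tan(radians) * distanceMm; // linear error on the screen plane
            return errorMm * pixelsPerMm;
        }

        static void Main()
        {
            const double distanceMm = 700.0;        // assumed ~70 cm viewing distance
            const double pixelsPerMm = 96.0 / 25.4; // assumed 96 DPI display

            foreach (double deg in new[] { 0.5, 1.0 })
                Console.WriteLine("{0} deg ~ {1:F0} px", deg, AngleToPixels(deg, distanceMm, pixelsPerMm));
            // Roughly 23 px at 0.5 degrees and 46 px at 1 degree under these assumptions,
            // which is why small desktop widgets are hard to hit reliably with gaze alone.
        }
    }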

A feasible solution to deal with this limitation in accuracy is to use the display space dynamically and zoom into areas of interest upon glancing. The zooming interaction style solves some of the issues with eye tracker inaccuracy and jitter, but it has to be carefully balanced so that it still provides a quick and responsive interface.

However, to me the novelty in StarGazer is the notion of traveling through a 3D space; the sensation of movement really catches one's attention and streamlines the interaction. Since text entry is inherently linear, character by character, flying through space from character to character is a suitable interaction style. Since the interaction is nowhere near the speed of two-handed keyboard entry, employing linguistic probability algorithms such as those found in cellphones would be very beneficial (i.e. type two or three letters and the most likely words are displayed in a list, as sketched below). Overall, I find the spatial arrangement of gaze interfaces to be a somewhat unexplored area. Our eyes are made to navigate a three-dimensional world, while traditional desktop interfaces mainly contain a flat 2D view. This is something I intend to investigate further.
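As a minimal illustration of the kind of word prediction I have in mind, here is a toy sketch in C#; the word list and frequencies are made-up example data and have nothing to do with StarGazer itself:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class WordPrediction
    {
        // Toy lexicon mapping words to usage frequencies (example data only).
        static readonly Dictionary<string, int> Lexicon = new Dictionary<string, int>
        {
            { "the", 5000 }, { "there", 1200 }, { "then", 900 },
            { "gaze", 300 }, { "game", 450 }, { "gate", 120 }
        };

        // Return the most likely completions for the characters entered so far.
        static IEnumerable<string> Suggest(string prefix, int count)
        {
            return Lexicon
                .Where(kv => kv.Key.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
                .OrderByDescending(kv => kv.Value)
                .Take(count)
                .Select(kv => kv.Key);
        }

        static void Main()
        {
            // After the user has entered "th" by gaze, offer the most probable words.
            Console.WriteLine(string.Join(", ", Suggest("th", 3))); // the, there, then
        }
    }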

Inspiration: COGAIN

Much of the development seen in the field of gaze interaction stems from the assistive technology field, where users who are unable to use regular computer interfaces are provided with tools that empower their everyday life in a wide range of activities such as communication, entertainment, home control, etc. For example, they can use their eyes to type words and sentences, which are then synthesized into spoken language by software, enabling communication beyond blinking. A major improvement in quality of life.

"COGAIN (Communication by Gaze Interaction) integrates cutting-edge expertise on interface technologies for the benefit of users with disabilities. COGAIN belongs to the eInclusion strategic objective of IST. COGAIN focuses on improving the quality of life for those whose life is impaired by motor-control disorders, such as ALS or CP. COGAIN assistive technologies will empower the target group to communicate by using the capabilities they have and by offering compensation for capabilities that are deteriorating. The users will be able to use applications that help them to be in control of the environment, or achieve a completely new level of convenience and speed in gaze-based communication. Using the technology developed in the network, text can be created quickly by eye typing, and it can be rendered with the user's own voice. In addition to this, the network will provide entertainment applications for making the life of the users more enjoyable and more equal. COGAIN believes that assistive technologies serve best by providing applications that are both empowering and fun to use."

A short introduction by Dr Richard Bates, a research fellow at the School of Computing Sciences at De Montfort University in Leicester, can be downloaded either as presentation slides or as a paper.

The COGAIN network is a rich source of information on gaze interaction. A set of software tools developed within the network has been made publicly available for download. Make sure to check out the video demonstrations of various gaze interaction tools.

Participating organizations within the COGAIN network.

Tuesday, February 19, 2008

Inspiration: GazeSpace (Laqua et al. 2007)

In parallel with working on the prototypes, I continuously search for and review papers and theses on gaze interaction methods and techniques, hardware and software development, etc. I will post references to some of these on this blog. A great deal of research and theory on interaction and cognition lies behind the field of gaze interaction.

The paper below was presented last year at a conference held by the British Computer Society specialist group on Human-Computer Interaction. What caught my attention is the focus on providing custom content spaces (canvases), good feedback, and a dynamic dwell-time, something I intend to incorporate into my own gaze GUI components. Additionally, the idea of expanding the content canvas upon a gaze fixation is really nice and something I will attempt to do in .NET/WPF (initial work displays a set of photos that become enlarged upon fixation; a rough sketch of that effect follows below).
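As a rough sketch of how such an enlarge-on-fixation effect could be done in WPF: the idea is simply to animate a ScaleTransform on the photo when a fixation lands on it. The fixation events and the 1.6x scale factor below are my own placeholders, not anything taken from the GazeSpace paper:

    using System;
    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Animation;

    public static class FixationZoom
    {
        // Smoothly scales an element (e.g. a photo thumbnail) up when a fixation
        // lands on it, and back down again when the gaze leaves it.
        public static void Attach(FrameworkElement element)
        {
            var scale = new ScaleTransform(1.0, 1.0);
            element.RenderTransform = scale;
            element.RenderTransformOrigin = new Point(0.5, 0.5); // zoom around the center

            // Placeholder events: in the prototype these would be raised by the
            // eye tracker integration rather than by the mouse.
            element.MouseEnter += (s, e) => AnimateTo(scale, 1.6); // fixation started
            element.MouseLeave += (s, e) => AnimateTo(scale, 1.0); // gaze moved away
        }

        static void AnimateTo(ScaleTransform scale, double target)
        {
            var animation = new DoubleAnimation(target, TimeSpan.FromMilliseconds(300));
            scale.BeginAnimation(ScaleTransform.ScaleXProperty, animation);
            scale.BeginAnimation(ScaleTransform.ScaleYProperty, animation);
        }
    }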

GazeSpace Eye Gaze Controlled Content Spaces (Laqua et al. 2007)

Abstract
In this paper, we introduce GazeSpace, a novel system utilizing eye gaze to browse content spaces. While most existing eye gaze systems are designed for medical contexts, GazeSpace is aimed at able-bodied audiences. As this target group has much higher expectations for quality of interaction and general usability, GazeSpace integrates a contextual user interface, and rich continuous feedback to the user. To cope with real-world information tasks, GazeSpace incorporates novel algorithms using a more dynamic gaze-interest threshold instead of static dwell-times. We have conducted an experiment to evaluate user satisfaction and results show that GazeSpace is easy to use and a “fun experience”. Download paper (PDF)





About the author
Sven Laqua is a PhD student & teaching fellow at the Human Centred Systems Group, part of the Dept. of Computer Science at University College London. Sven has a personal homepage, a university profile and a blog (rather empty at the moment).

Monday, February 11, 2008

GazeMemory v0.1a on its way

The extra time spent on developing the custom controls for Windows Presentation Foundation (WPF) paid off. What previously took days to develop can now be built within hours. Today I put together a gaze version of the classic game Memory, controlled by dwell-time (prolonged fixation). The "table" contains 36 cards, i.e. 18 unique pairs. By fixating on a card, a smooth animation makes the globe on the front of the card light up, and after fixating long enough (500 ms) it shows the symbol (flags in the first version). After selecting the first card, another is fixated and the two are compared. If they contain the same symbol, they are removed from the table; if not, they are turned back over again. The interface provides several feedback mechanisms: upon glancing, the border around a button begins to shine; when fixating long enough, the dwell-time function is activated, illustrated by a white glow that smoothly fades up around the globe. The matching logic itself is sketched below.
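The underlying game rules boil down to a few lines. Here is a simplified sketch of the matching logic, not the actual GazeMemory code; the Card class and the OnCardDwellActivated handler are just illustrative names:

    // Minimal model of one card on the table.
    class Card
    {
        public string Symbol { get; set; } // e.g. a flag identifier
        public bool FaceUp { get; set; }
        public bool Removed { get; set; }
    }

    class MemoryGame
    {
        Card firstPick; // card revealed by the previous dwell activation, if any

        // Called when a card has been fixated long enough (dwell >= 500 ms).
        public void OnCardDwellActivated(Card card)
        {
            if (card.Removed || card.FaceUp) return;
            card.FaceUp = true;

            if (firstPick == null)
            {
                firstPick = card;                        // remember the first selection
            }
            else if (firstPick.Symbol == card.Symbol)
            {
                firstPick.Removed = card.Removed = true; // matching pair leaves the table
                firstPick = null;
            }
            else
            {
                firstPick.FaceUp = card.FaceUp = false;  // no match: turn both back over
                firstPick = null;
            }
        }
    }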

The custom control, which will be named GazeButton, is to be further developed to support more features such as a configurable dwell-time and feedback such as animations, colors, etc. The time spent will be returned tenfold later on. I plan to release these components as open source as soon as they reach better and more stable performance (i.e. production quality with documentation).

Lessons learned so far involve dependency properties, which are very important if you'd like to develop custom controls in WPF, animation control and triggers, and getting more into data binding, which looks very promising so far.
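To illustrate why dependency properties matter here, below is a minimal sketch of how a configurable dwell-time could be exposed on the GazeButton control; the property name and the 500 ms default are my own assumptions about where the control is heading, not its final API:

    using System;
    using System.Windows;
    using System.Windows.Controls;

    // Sketch of a gaze-aware button exposing its dwell-time as a dependency property,
    // so it can be set from XAML, styled, data-bound or animated like built-in properties.
    public class GazeButton : Button
    {
        public static readonly DependencyProperty DwellTimeProperty =
            DependencyProperty.Register(
                "DwellTime",
                typeof(TimeSpan),
                typeof(GazeButton),
                new PropertyMetadata(TimeSpan.FromMilliseconds(500))); // assumed default

        // CLR wrapper used from code-behind.
        public TimeSpan DwellTime
        {
            get { return (TimeSpan)GetValue(DwellTimeProperty); }
            set { SetValue(DwellTimeProperty, value); }
        }
    }

Once the component DLL is referenced, the control can then be dropped into a layout with a single line of XAML, something like <gaze:GazeButton DwellTime="0:0:0.5" Content="Play"/>.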

Links:
Recommended guidelines for WPF custom controls
Three ways to build an image button
Karl on WPF


Screenshot of the second prototype of my GazeMemory game

Thursday, February 7, 2008

Midas touch, Dwell time & WPF Custom controls

How do you distinguish between a user merely glancing at an object and a fixation intended to start a function? This is the well-known problem usually referred to as the "Midas touch" problem. Interfaces that rely on gaze as the only means of input must support this style of interaction and be capable of distinguishing glances, where the user is just looking around, from fixations intended to start a function.

There are some solutions to this. A frequently occurring one is the concept of "dwell-time", where functions are activated simply by a prolonged fixation on an icon/button/image, usually in the range of 400-500 ms or so. This is a common solution when developing gaze interfaces as assistive technology for users suffering from amyotrophic lateral sclerosis (ALS) or other "paralyzing" conditions where no modality other than gaze input can be used. It does come with some issues: the prolonged fixation means that the interaction is slower, since the user has to sit through the dwell-time, but mainly it adds stress to the interaction, since everywhere you look seems to activate a function.

As part of my intention to develop a set of components for the interface, a dwell-based interaction style should be supported. It may not be the best method, but I do wish to have it in the drawer, just to have the opportunity to evaluate it and perform experiments with it.

The solution I've started working on goes something like this: upon a fixation on the object, an event is fired. The event launches a new thread which aims at determining whether the fixation stays within the area long enough to be considered a dwell (the gaze data is noisy). Halfway through, it checks whether the area has received enough fixation samples to continue, and otherwise aborts the thread. At the end it checks whether fixations have resided within the area for more than 70% of the time; in that case, it activates the function. A sketch of this scheme is shown below.
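Here is a minimal sketch of that scheme, assuming gaze samples arrive at a roughly constant rate and that the surrounding control calls RegisterGazeSample and supplies the activation callback; the names, the halfway threshold and the timings are illustrative, not the final implementation:

    using System;
    using System.Threading;

    class DwellDetector
    {
        const int DwellTimeMs = 500; // total dwell required before activation

        int samplesInside; // gaze samples that landed inside the object's area
        int samplesTotal;  // all gaze samples seen since the dwell started

        // Called by the control whenever a new gaze sample arrives.
        public void RegisterGazeSample(bool insideArea)
        {
            Interlocked.Increment(ref samplesTotal);
            if (insideArea) Interlocked.Increment(ref samplesInside);
        }

        // Called when a fixation first lands on the object.
        public void StartDwell(Action onDwellActivated)
        {
            samplesInside = samplesTotal = 0;
            var worker = new Thread(() =>
            {
                // Halfway check: abort if too few samples have hit the area so far.
                Thread.Sleep(DwellTimeMs / 2);
                if (samplesTotal == 0 || (double)samplesInside / samplesTotal < 0.5)
                    return;

                // Second half, then the final 70% criterion.
                Thread.Sleep(DwellTimeMs / 2);
                if (samplesTotal > 0 && (double)samplesInside / samplesTotal > 0.7)
                    onDwellActivated(); // in WPF this would be marshalled to the UI thread via the Dispatcher
            });
            worker.IsBackground = true;
            worker.Start();
        }
    }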

Working with threads can be somewhat tricky, and some time for tweaking remains. In addition, getting the interaction to feel right and providing suitable feedback is important. I'm investigating means of a) providing feedback on which object is selected, b) indicating that the dwell process has started and what state it is in, and c) animations that help the fixation remain in the center of the object.

--------------------------------------------------------------
Coding> Windows Presentation Foundation and Custom Controls

Other progress has been made in learning Windows Presentation Foundation (WPF) user interface development. Microsoft Expression Blend is a tool that enables graphical design of components. By creating generic objects such as gaze buttons, the overall process will benefit in the longer run. Instead of having all of the objects defined in a single file, it is possible to break them into separate projects and later just include the component DLL and use one line of code to add a control to the design.

Additional styling and modification of an object's size can then be performed as if it were a default button. Furthermore, by creating dependency properties in the C# code-behind of each control/component, specific functionality can easily be accessed from the main XAML design layout. It does take a bit longer to develop, but will be worth it tenfold later on. More on this further on.

Microsoft Expression Blend. Good companion for WPF and XAML development.

Windows Presentation Foundation (WPF) has proven to be more flexible and useful than it seemed at first glance. You can really do complex things swiftly with it. The following links have proven to be great resources for learning more about WPF.

Lee Brimelow, an experienced developer who took part in building the new Yahoo Messenger client. Excellent screencasts that take you through the development process.
http://www.contentpresenter.com and http://www.thewpfblog.com

Kirupa Chinnathambi, introduction to WPF, Blend and a nice blog. http://www.kirupa.com/blend_wpf/index.htm and http://blog.kirupa.com/

Microsoft MIX07, 72 hours of talks about the latest tools and tricks. http://sessions.visitmix.com/