Sunday, May 18, 2008

Lund Eye-Tracking Academy (LETA), June 11-13, 2008


SMI and the Lund University Humanities Lab in Lund, Sweden, will hold the first Lund Eye-Tracking Academy (LETA). Hosted by Kenneth Holmqvist and his team, the course takes place on June 11-13, 2008.

"We have decided to start our "eye-tracking academy" to help students, researchers and labs who start with eye-tracking to get up and running, to give them a flying start. Although eye-tracking research can be both fascinating and extremely useful, doing good eye-tracking research requires a certain minimum knowledge. Without having a basic understanding of what can be done with an eye-tracker, how to design an experiment and how to analyse the data, the whole study runs of risk of just producing a lot data which cannot really answer any questions.

Currently, to our knowledge, there is no recurring, open course in eye-tracking in Europe outside Lund; everybody else is trained person-to-person in the existing larger research labs. In Lund, we already hold a 10-week, 7.5 ECTS Master-level course in eye-tracking. Now we offer another course: an intensive 3-day course, open to everyone interested in eye-tracking who wants a flying start and a basic understanding of how to run a scientifically sound eye-tracking experiment and get high-quality data that they know how to analyze.

This training course is open to all researchers and investigators who are about to start using eye-tracking or are in the early phases of doing so, and to users wanting to refresh their knowledge of their system. It is open to attendees from universities as well as from industry. As part of the course, attendees will work hands-on with sample experiments, from experimental design to data analysis. Participants will train on state-of-the-art SMI eye-tracking systems, but the course is largely hardware independent and open to users of other systems."

Scheduled dates
June 11-13, 2008, starting at 09:00 each morning and ending at around 16:00 on the last day.

Course contents
• Pros and cons of head-mounted, remote and contact eye-trackers.
• High sampling speed and detailed precision – who needs them?
• Gaze-overlaid videos vs. data files – what can you do with them?
• How to set up and calibrate a variety of subjects on different eye-trackers?
• Glasses, lenses, mascara and drooping eyelids – what to do?
• How to work with stimulus programs and synchronize them with the eye-tracking recording?
• How to deal with consent forms and ethical issues?
• Short introduction to experimental design: potentials and pitfalls.
• Visualisation of data vs. number crunching.
• Fast data analysis of multi-user experiments.
• Fixation durations, saccadic amplitudes, transition diagrams, group similarity measures and all the other measures – what do they tell us? What are the pitfalls?

Teaching methods
Lectures on selected topics (8 h).
Hands-on work in our lab on pre-specified experiments: receiving and recording a subject (9 h). Hands-on initial data analysis (3 h).

Eye-tracking systems available for this training
2 × SMI HED, 50 Hz, with Polhemus head-tracking
3 × SMI HiSpeed, 240/1250 Hz
SMI RED-X remote, 50 Hz
2 × SMI HED-mobile, 50/200 Hz

Attendance fee
€750, including course material, diploma and lunches, if you register before June 5th.
We will only run the course if we get enough participants. Register online.

The course content is equivalent to 1 ECTS credit at Master's level or above, although we cannot currently provide official registration at Lund University for this credit.

SWAET 2008 videos online

The recorded videos of the presentations from the SWAET 2008 conference, held in Lund a couple of weeks ago, are now online.
Check out the complete list of presentations on the Lund University Humanities Lab website.

Wednesday, May 14, 2008

Modify your webcam to capture infrared light

Found this a while back; it might be useful if you are building a home-brew webcam eye tracker.
The most common method of eye tracking today is to use IR light to create corneal reflections. To do this you need a pair of IR LEDs (perhaps a couple of these) and a camera capable of picking up that part of the spectrum. More or less all camera sensors can do this, but manufacturers usually place a filter in front of the sensor so that it only picks up the visible spectrum. By removing this filter and replacing it with one that only passes IR, you get a camera suitable for corneal-reflection-based eye tracking.

The next step would probably be to locate the head and the eyes in the video sequence. A good starting point for this is the OpenCV library. Martin Wimmer has created the EyeFinder system, which locates the head and eyes in real time. Once the eyes are located, an area around them is defined as a region of interest and passed on to image-processing algorithms. These typically locate the pupil by ellipse fitting, along with the glints/reflections that the IR lights create, and somewhere along those lines it becomes "easier said than done" =) (I have yet to try building my own =)
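For illustration, here is a minimal Python/OpenCV sketch of that idea (it is not Wimmer's EyeFinder): stock Haar cascades locate the face and then the eyes, the eye region becomes the region of interest, and a dark-blob threshold plus ellipse fit gives a rough pupil candidate. The cascade file names are OpenCV's bundled defaults, the threshold value is a guess, and glint detection and actual gaze estimation are left out.

```python
# A minimal sketch (OpenCV 4, Python): find the face, find the eyes, hand the
# eye region to pupil detection. NOT Wimmer's EyeFinder; cascade files are
# OpenCV's bundled defaults and the threshold value (40) is a guess.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # assumes the (IR-modified) webcam is device 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1. Locate the head/face, then search for eyes only inside it.
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
            eye_roi = face_roi[ey:ey + eh, ex:ex + ew]

            # 2. Rough pupil candidate: the pupil is dark, so threshold,
            #    take the largest contour and fit an ellipse (needs >= 5 points).
            _, dark = cv2.threshold(eye_roi, 40, 255, cv2.THRESH_BINARY_INV)
            contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                largest = max(contours, key=cv2.contourArea)
                if len(largest) >= 5:
                    (cx, cy), axes, angle = cv2.fitEllipse(largest)
                    # Draw the ellipse back in full-frame coordinates.
                    cv2.ellipse(frame, ((fx + ex + cx, fy + ey + cy), axes, angle), (0, 255, 0), 1)

    cv2.imshow("pupil sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Glint detection would work on the same region of interest, only looking for small bright blobs instead, and the pupil-to-glint vector is what a corneal-reflection tracker actually maps to gaze position.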

Tuesday, May 13, 2008

IUI Gaze interaction tutorial (2005)

Back in 2005, Kari-Jouko Räihä, Aulikki Hyrskykari and Päivi Majaranta at the Unit for Human-Computer Interaction at the University of Tampere held a tutorial session on gaze-based interaction at the Intelligent User Interfaces (IUI) conference. Luckily they posted the material online with a nice reference list. It is a good starting point for gaze interaction, especially in combination with the COGAIN bibliography. I will follow their example and publish a list of all the papers I've been reading while writing my thesis (which, by the way, is soon to be finalized).

Inspiration: Gaze-Based Interaction for Semi-Automatic Photo Cropping

From the Visualization Lab, part of the Computer Science Division at UC Berkeley, comes an application that records gaze position while viewing images and then uses the data to crop them in a more intelligent way.

Abstract
"We present an interactive method for cropping photographs given minimal information about the location of important content, provided by eye tracking. Cropping is formulated in a general optimization framework that facilitates adding new composition rules, as well as adapting the system to particular applications. Our system uses fixation data to identify important content and compute the best crop for any given aspect ratio or size, enabling applications such as automatic snapshot recomposition, adaptive documents, and thumbnailing. We validate our approach with studies in which users compare our crops to ones produced by hand and by a completely automatic approach. Experiments show that viewers prefer our gaze-based crops to uncropped images and fully automatic crops."

Figure caption: "Original well-composed images (left), adapted to two different aspect ratios using our gaze-based approach. An ADL document (right) using our crops. If eye movements are collected passively during document construction, our approach allows adaptation of images to arbitrary aspect ratios with no explicit user effort."
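As a toy illustration of the underlying idea (this is not the authors' optimization framework, and the fixation data below is made up): slide a candidate crop window of the requested aspect ratio over the image and keep the one that captures the most fixation time.

```python
# Toy gaze-based cropping: pick the crop of a given aspect ratio that covers
# the largest total fixation duration. Not the paper's method, just the gist.

def best_crop(fixations, img_w, img_h, aspect, steps=20):
    """fixations: list of (x, y, duration_ms); aspect = width / height."""
    # Largest crop with the requested aspect ratio that still fits the image.
    crop_w = min(img_w, img_h * aspect)
    crop_h = crop_w / aspect
    best_origin, best_score = (0.0, 0.0), -1.0
    for i in range(steps + 1):
        for j in range(steps + 1):
            x0 = (img_w - crop_w) * i / steps
            y0 = (img_h - crop_h) * j / steps
            score = sum(d for (x, y, d) in fixations
                        if x0 <= x <= x0 + crop_w and y0 <= y <= y0 + crop_h)
            if score > best_score:
                best_origin, best_score = (x0, y0), score
    return best_origin, crop_w, crop_h

# Hypothetical fixations on a 2400 x 1200 image, cropped to a square.
fixations = [(400, 600, 250), (420, 640, 300), (1800, 300, 180)]
print(best_crop(fixations, 2400, 1200, aspect=1.0))
```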


Monday, May 12, 2008

SR Research: EyeLink Remote

"The EyeLink Remote system is designed for areas of eye tracking research where a head rest or head mount is not desirable but high accuracy and resolution are still important.

Head distance is accurately measured at 500 Hz using a small target sticker placed on the participants’ forehead. The use of a distance measure that is independent of the eye allows for head position to be tracked even during blinks, providing an extremely fast 2 msec blink recovery delay. The fast head position sampling rate also allows for head tracking during very quick head movements." (website)

Click image for a video demonstration

A rather impressive 500 Hz sampling rate, which enables online saccade detection and keeps up with fast head movements. The recovery time from blinks is specified at only 2 milliseconds. However, the target sticker on the forehead is a workaround for actual face tracking: fine in the lab, but not feasible in everyday, real-world gaze interaction scenarios. (Besides, my guess is that this tracker does not come cheap, given the high-speed cameras used.)
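As a side note on why the sampling rate matters: the simplest saccade detectors are velocity-threshold (I-VT-style) classifiers, and their velocity estimates get much cleaner at 500 Hz. Below is a rough sketch, assuming gaze coordinates have already been converted to degrees of visual angle; the 30°/s threshold is a common textbook value, not taken from the EyeLink documentation.

```python
# A rough velocity-threshold (I-VT-style) saccade/fixation classifier.
# Assumes gaze positions in degrees of visual angle and a constant sampling
# rate; 30 deg/s is a common textbook threshold, not an EyeLink value.

def classify_samples(xs, ys, rate_hz=500.0, threshold_deg_s=30.0):
    """xs, ys: gaze position in degrees. Returns one label per sample."""
    dt = 1.0 / rate_hz
    labels = ["fixation"]  # no velocity estimate for the very first sample
    for i in range(1, len(xs)):
        dx, dy = xs[i] - xs[i - 1], ys[i] - ys[i - 1]
        velocity = (dx * dx + dy * dy) ** 0.5 / dt  # deg/s
        labels.append("saccade" if velocity > threshold_deg_s else "fixation")
    return labels

# A ~5-degree jump between consecutive samples at 500 Hz is ~2500 deg/s: clearly a saccade.
print(classify_samples([0.0, 0.02, 5.0, 5.02], [0.0, 0.0, 0.0, 0.0]))
```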

Gaze beats mouse: a case study.

Michael Dorr (publications) at the Institute for Neuro- and Bioinformatics at the University of Lübeck modified the open-source game LBreakout to be driven by gaze input using an SMI eye tracker. This was then used in a multiplayer setup where gaze/eye tracking was pitted against the mouse. Source code and more information can be found here.

Click on images for video demonstration.


Related paper:

  • Michael Dorr, Martin Böhme, Thomas Martinetz, and Erhardt Barth. Gaze beats mouse: a case study. In The 3rd Conference on Communication by Gaze Interaction - COGAIN 2007, Leicester, UK, pages 16-19, 2007. [ bib .pdf ]

Sunday, May 11, 2008

Inspiration: Gaze controlled web browser

Craig Hennesy, a Ph.D. candidate in the Electrical and Computer Engineering department at the University of British Columbia, has done some work on gaze interaction. This includes using gaze position to scroll documents as the reader approaches the bottom of the window, a really useful feature that appears in other applications as well.

I'm currently working on my own implementation, which will provide this functionality for a wide range of documents (web, PDF, Word, etc.) in combination with some new approaches to navigating the web using gaze alone.

"This video illustrates the use of eye-gaze tracking integrated with web browsing. The goal of this application is to reduce the use of the mouse when reading by removing the need to scroll up or down with the mouse. The simple scrolling application allows you to scroll down by looking below the article, scroll up by looking above the article, and go Back and Forward in the browser by looking to the left and right respectively.

In this demo the eye is tied to the mouse cursor so you can see where the user is looking; in the real application the motion of the eye stays behind the scenes and the mouse functions as a normal computer mouse."
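The region-to-command mapping described in the quote is small enough to sketch. The margins, window geometry and command names below are my own placeholders, not taken from Hennesy's application; a real implementation would also add dwell times and smoothing.

```python
# A minimal sketch of region-to-command mapping for gaze-driven scrolling and
# navigation. Geometry, margins and command names are illustrative placeholders.

def gaze_command(gx, gy, win_left, win_top, win_width, win_height, margin=60):
    """Map a gaze point (gx, gy) in screen pixels to a browser command."""
    if gy > win_top + win_height + margin:
        return "scroll_down"      # looking below the article
    if gy < win_top - margin:
        return "scroll_up"        # looking above the article
    if gx < win_left - margin:
        return "history_back"     # looking to the left of the window
    if gx > win_left + win_width + margin:
        return "history_forward"  # looking to the right of the window
    return None                   # gaze is inside the article: just keep reading

# Gaze 160 px below a 1000 x 700 window placed at (100, 100) -> scroll down.
print(gaze_command(600, 960, 100, 100, 1000, 700))
```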

Tuesday, May 6, 2008

Gaze interaction hits mainstream news

The New Scientist technology section posted an article on Stephen Vickers' work at De Montfort University on the eye-controlled version of World of Warcraft, which I wrote about two months ago (see post).
Update: The New Scientist article sparked a rather extensive discussion on Slashdot, with more than 140 comments.

Great to see mainstream interest in gaze-driven interaction. Gaming is truly one area with huge potential, but it also depends on more accessible eye trackers. There is a movement towards open-source eye tracking, but robustness for everyday usage still remains an open issue. The system Stephen Vickers has developed uses the Tobii X120 eye tracker, which is clearly out of reach for all but the small group of users who are granted financial support for their much-needed assistive technology.

Have faith
In general, new technology initially comes at a high cost due to intensive research and development, but over time it becomes accessible to the larger population. As an example, a decade or two ago few could have imagined that satellite GPS navigation would become commonplace and really cheap. Today, mass collaboration on the net is happening for real, accelerating the pace of technology development. Make sure to watch the Google TechTalk with Don Tapscott on Wikinomics.

Saturday, May 3, 2008

Interface evaluation procedure

The first evaluation of the prototype is now completed. The procedure was designed to test the individual components as well as the prototype as a whole (for playing music, etc.). Raw data on selection times, error rates, etc. was collected by custom software built on the Microsoft .NET platform. Additional forms were displayed on-screen between the steps of the procedure, together with standardized form-based questionnaires, to gather a rich set of data.

A more subjective approach was taken while the prototype was used as a whole, to capture aspects that cannot be confined to automatic data collection or forms: simply observing participants and asking simple questions in normal spoken language. While perhaps less scientifically rigorous, this type of information is very valuable for understanding how users think and react, and it is crucial for improving the design in the next iteration. There is sometimes a difference between the results gained from verbal comments, questionnaires and measured performance. For example, an interface can be very efficient and fast but at the same time extremely demanding and stressful; measuring just one performance factor would not tell the whole story.

This is why I chose to use several methods, combining raw performance data, form-based questionnaires and unconstrained verbal interviewing, in the hope of covering multiple aspects and gathering a rich set of data.

For the evaluation I used a basic on-screen demographic form gathering age, sex, computer experience, vision conditions, etc. Between the evaluations of the individual components I used the NASA Task Load Index as a quick reflection form, and at the end of the session I handed out both an IBM Computer Usability Satisfaction Questionnaire and a QUIS Generic User Interface Questionnaire. The only modification I made was to remove a few questions that did not apply to my prototype (why ask about help pages when the prototype contains none?).
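For reference, this is how NASA-TLX ratings are typically collapsed into a single workload score (the standard TLX procedure, nothing specific to my prototype; the example values below are made up).

```python
# How NASA-TLX ratings are typically aggregated. Ratings are on a 0-100 scale;
# in the weighted ("classic") variant the weights come from 15 pairwise
# comparisons and must sum to 15. Raw TLX simply averages the six ratings.
# All example values below are made up.

DIMENSIONS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def weighted_tlx(ratings, weights):
    """Overall workload = sum(rating * weight) / 15."""
    assert sum(weights.values()) == 15, "weights must come from the 15 pairwise comparisons"
    return sum(ratings[d] * weights[d] for d in DIMENSIONS) / 15.0

def raw_tlx(ratings):
    """Unweighted ('raw TLX') variant: plain mean of the six ratings."""
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 50}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 2}
print(weighted_tlx(ratings, weights), raw_tlx(ratings))
```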

I've found the 230 Tips and Tricks for Better Usability Testing guide really useful; it should be read by anyone conducting HCI evaluation.

Questionnaires used in my evaluation, in PDF format: