Showing posts with label navigation.

Wednesday, April 20, 2011

Fraunhofer CMOS-OLED head-mounted display with integrated eye tracker

"The Fraunhofer IPMS works on the integration of sensors and microdisplays on CMOS backplane for several years now. For example the researchers have developed a bidirectional microdisplay, which could be used in Head-Mounted Displays (HMD) for gaze triggered augmented-reality (AR) aplications. The chips contain both an active OLED matrix and therein integrated photodetectors. The combination of both matrixes in one chip is an essential possibility for system integrators to design smaller, lightweight and portable systems with both functionalities." (Press release)
"Rigo Herold, PhD student at Fraunhofer IPMS and participant of the development team, declares: This unique device enables the design of a new generation of small AR-HMDs with advanced functionality. The OLED microdisplay based Eyetracking HMD enables the user on the one hand to overlay the view of the real world with virtual contents, for example to watch videos at jog. And on the other hand the user can select the next video triggered only by his gaze without using his hands." (Press release)

The chip integrates both an OLED display and a CMOS image sensor.

Rigo Herold will present the system at the SID 2011 exhibitor forum on May 17, 2011, at 4:00 p.m., in a talk titled "Eyecatcher: The Bi-Directional OLED Microdisplay", with the following specs:
  • Monochrome 
  • Special eye-tracking algorithm for HMDs based on bidirectional microdisplays
  • Front brightness: > 1500 cd/m²

The poster was presented at the ISSCC 2011 Industry Demonstration Session (IDS).

In addition, a paper titled "Bidirectional OLED microdisplay: Combining display and image sensor functionality into a monolithic CMOS chip" has been published with the following abstract:

"Microdisplays based on organic light-emitting diodes (OLEDs) achieve high optical performance with excellent contrast ratio and large dynamic range at low power consumption. The direct light emission from the OLED enables small devices without additional backlight, making them suitable for mobile near-to-eye (NTE) applications such as viewfinders or head-mounted displays (HMD). In these applications the microdisplay acts typically as a purely unidirectional output device [1–3]. With the integration of an additional image sensor, the functionality of the microdisplay can be extended to a bidirectional optical input/output device. The major aim is the implementation of eye-tracking capabilities in see-through HMD applications to achieve gaze-based human-display-interaction." Available at IEEE Xplore

Monday, April 18, 2011

AutomotiveUI'11 - 3rd International Conference On Automotive User Interfaces and Interactive Vehicular Applications

"In-car interactive technology is becoming ubiquitous and cars are increasingly connected to the outside world. Drivers and passengers use this technology because it provides valuable services. Some technology, such as collision warning systems, assists drivers in performing their primary in-vehicle task (driving). Other technology provides information on myriad subjects or offers entertainment to the driver and passengers.

The challenge that arises from the proliferation of in-car devices is that they may distract drivers from the primary task of driving, with possibly disastrous results. Thus, one of the major goals of this conference is to explore ways in which in-car user interfaces can be designed so as to lessen driver distraction while still enabling valuable services. This is challenging, especially given that the design of in-car devices, which was historically the responsibility of car manufacturers and their parts suppliers, is now a responsibility shared among a large and ever-changing group of parties. These parties include car OEMs, Tier 1 and Tier 2 suppliers of factory-installed electronics, as well as the manufacturers of hardware and software that is brought into the car, for example on personal navigation devices, smartphones, and tablets.

As we consider driving safety, our focus in designing in-car user interfaces should not be purely on eliminating distractions. In-car user interfaces also offer the opportunity to improve the driver's performance, for example by increasing her awareness of upcoming hazards. They can also enhance the experience of all kinds of passengers in the car. To this end, a further goal of AutomotiveUI 2011 is the exploration of in-car interfaces that address the varying needs of different types of users (including disabled drivers, elderly drivers or passengers, and the users of rear-seat entertainment systems). Overall our goal is to advance the state of the art in vehicular user experiences, in order to make cars both safer and more enjoyable places to spend time." http://www.auto-ui.org



Topics include, but are not limited to:
* new concepts for in-car user interfaces
* multimodal in-car user interfaces
* in-car speech and audio user interfaces
* text input and output while driving
* multimedia interfaces for in-car entertainment
* evaluation and benchmarking of in-car user interfaces
* assistive technology in the vehicular context
* methods and tools for automotive user interface research
* development methods and tools for automotive user interfaces
* automotive user interface frameworks and toolkits
* detecting and estimating user intentions
* detecting/measuring driver distraction and estimating cognitive load
* biometrics and physiological sensors as a user interface component
* sensors and context for interactive experiences in the car
* user interfaces for information access (search, browsing, etc.) while driving
* user interfaces for navigation or route guidance
* applications and user interfaces for inter-vehicle communication
* in-car gaming and entertainment
* different user groups and user group characteristics
* in-situ studies of automotive user interface approaches
* general automotive user experience research
* driving safety research using real vehicles and simulators
* subliminal techniques for workload reduction



SUBMISSIONS
AutomotiveUI 2011 invites submissions in the following categories:

* Papers (Submission Deadline: July 11th, 2011)
* Workshops (Submission Deadline: July 25th, 2011)
* Posters & Interactive Demos (Submission Deadline: Oct. 10th, 2011)
* Industrial Showcase (Submission Deadline:  Oct. 10th, 2011)

For more information on the submission categories please check http://www.auto-ui.org/11/submit.php

Monday, November 8, 2010

GazeCom and SMI demonstrate automotive guidance system



"In order to determine the effectiveness of gaze guidance, within the project, SMI developed an experimental driving simulator with integrated eye tracking technology.  A driving safety study in a city was set up and testing in that environment has shown that the number of accidents was significantly lower with gaze guidance than without, while most of the drivers didn’t consciously notice the guiding visual cues."
Christian Villwock, Director for Eye and Gaze Tracking Systems at SMI: “We have shown that visual performance can significantly be improved by gaze contingent gaze guidance. This introduces huge potential in applications where expert knowledge has to be transferred or safety is critical, for example for radiological image analysis.” 
"Within the GazeCom project, funded by the EU within the Future and Emerging Technologies (FET) program, the impact of gaze guidance on what is perceived and communicated effectively has been determined in a broad range of tasks of varying complexity. This included basic research in the understanding of visual perception and brain function up to the level where the guidance of gaze becomes feasible." (source)

Tuesday, August 10, 2010

Eye control for PTZ cameras in video surveillance

Bartosz Kunka, a PhD student at the Gdańsk University of Technology, has employed a remote gaze-tracking system called Cyber-Eye to control PTZ cameras in video surveillance and video-conference systems. The video was prepared for the system's presentation at the SIGGRAPH 2010 Research Challenge in Los Angeles.

Monday, April 26, 2010

Freie Universität Berlin presents gaze controlled car

From the Freie Universität in Berlin comes a working prototype for a system that allows direct steering by eye movements alone. The prototype was demonstrated in front of a large group of journalists at the former Berlin Tempelhof Airport. Gaze data from a head-mounted SMI eye tracker is fed into the control system of the Spirit of Berlin, a platform for autonomous navigation. Similar to the gaze-controlled robot we presented at CHI09, the platform couples the turning of the wheels to the gaze coordinate space (e.g. look left and the car steers left). Essentially it is a mapping onto a 2D plane where deviation from the center issues steering commands and the degree of turning is modulated by the distance from the center. This becomes potentially interesting when coupled with other sensors that in combination offer driver support, for example warning about an object in the vehicle's path that the driver has not seen, not to mention scenarios involving individuals with disabilities and/or machine learning. The work has been carried out under the guidance of professor Raúl Rojas as part of the AutoNOMOS project, which has been running since 2006, inspired by the Stanford autonomous car project.
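
To make the mapping concrete, here is a minimal sketch of such a centre-deviation steering rule; the function name, dead zone and gain are illustrative assumptions, not the actual AutoNOMOS code:

    def gaze_to_steering(gaze_x, screen_width, dead_zone=0.1, max_steer_deg=30.0):
        """Map the horizontal gaze position on the scene view to a steering angle."""
        # Normalise the deviation from the screen centre to the range [-1, 1].
        deviation = (gaze_x - screen_width / 2.0) / (screen_width / 2.0)
        if abs(deviation) < dead_zone:
            return 0.0                     # small deviations keep the car straight
        return deviation * max_steer_deg   # negative = steer left, positive = steer right

    # Example: gaze 200 px left of centre on a 1280 px wide view -> gentle left turn.
    print(gaze_to_steering(440, 1280))     # about -9.4 degrees

The dead zone matters in practice: without it, normal fixation jitter around the centre would translate into a constant stream of small steering corrections.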

More info in the press-release.

Thursday, March 18, 2010

GM Automotive heads-up display

General Motors today presented a new automotive head-up display system that has been developed in conjunction with Carnegie Mellon and the University of Southern California. It employs a number of sensors that, coupled with object and pattern recognition, could assist the driver by projecting information directly onto the windshield. For example, the system could assist in navigation by highlighting road signs and emphasizing the lanes/edges of the road in difficult driving conditions (rain, snow, fog). Inside the car the system uses an eye tracking solution provided by the Swedish firm Smart Eye. Their Smart Eye Pro 5.4 employs several cameras (three in the demonstration, up to six) and infrared illumination to provide 6-degrees-of-freedom head tracking and 2D eye tracking, both with a (reported) 0.5 degree accuracy. The firm reports that the system provides "immunity to difficult light conditions, including darkness and rapidly varying sunlight"; however, to what extent this holds for direct sunlight on the face remains to be seen. Still, such technical issues tend to be overcome over time, and it is exciting to see that eye tracking is being considered for everyday applications in a not-so-distant future. They are not the only ones working on this right now.
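
As a rough illustration of the "highlight what the driver has not looked at" idea, the sketch below keeps a per-object timer and flags detections the gaze has not visited recently. The detector, the coordinate mapping and all thresholds are assumptions made for the example, not Smart Eye or GM APIs:

    import time

    GAZE_MARGIN_PX = 60          # how close the gaze must come to count as "seen"
    UNSEEN_ALERT_SECONDS = 2.0   # highlight objects that have been ignored this long

    def gaze_hits(gaze, box, margin=GAZE_MARGIN_PX):
        """True if the gaze point falls inside the object's bounding box (with margin)."""
        x, y = gaze
        left, top, right, bottom = box
        return (left - margin) <= x <= (right + margin) and (top - margin) <= y <= (bottom + margin)

    def objects_to_highlight(detected, gaze, last_seen, now=None):
        """Return the ids of detected objects the HUD should emphasise.

        detected:  dict of object id -> bounding box in windshield coordinates
        gaze:      current 2D gaze point in the same coordinate system
        last_seen: dict of object id -> time of the last gaze hit (updated in place)
        """
        now = time.time() if now is None else now
        highlight = []
        for obj_id, box in detected.items():
            if gaze_hits(gaze, box):
                last_seen[obj_id] = now                      # the driver has seen it
            elif now - last_seen.setdefault(obj_id, now) > UNSEEN_ALERT_SECONDS:
                highlight.append(obj_id)                     # ignored for too long
        return highlight

    # Example: one detected road sign, gaze elsewhere, checked again three seconds later.
    seen = {}
    print(objects_to_highlight({"sign_1": (100, 100, 160, 160)}, (900, 500), seen, now=0.0))  # []
    print(objects_to_highlight({"sign_1": (100, 100, 160, 160)}, (900, 500), seen, now=3.0))  # ['sign_1']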




Sources:
GM Media "GM Reimagines Head-Up Display Technology"
Engadget.com "GM shows off sensor-laden windshield, new heads-up display prototype"
TG Daily "GM develops HUD system for vehicle windshields"

Wednesday, October 21, 2009

Nokia near-eye display gaze interaction update

The Nokia near-eye gaze interaction platform that I tried in Finland last year has been further improved. The cap used to support the weight has been replaced with a sturdy frame, and the overall prototype seems lighter and also incorporates headphones. The new gaze-based navigation interface supports photo browsing based on the Image Space application, allowing location-based access to user-generated content. See the video at the bottom for their futuristic concept, and the Nokia Research website for more information. The prototype will be displayed at the International Symposium on Mixed and Augmented Reality conference in Orlando, October 19-22.






Monday, September 14, 2009

GaZIR: Gaze-based Zooming Interface for Image Retrieval (Kozma L., Klami A., Kaski S., 2009)

From the Helsinki Institute for Information Technology, Finland, comes a research prototype called GaZIR for gaze-based image retrieval, built by Laszlo Kozma, Arto Klami and Samuel Kaski. The GaZIR prototype uses a lightweight logistic regression model to predict relevance from eye movement data (such as viewing time, revisit counts and fixation length), all of which happens online in real time. The system is built around the PicSOM (paper) retrieval engine, which is based on tree-structured self-organizing maps (TS-SOMs). When provided with a set of reference images, the PicSOM engine goes online to download a set of similar images (based on color, texture or shape).
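
The relevance predictor can be pictured as an ordinary logistic regression over a handful of per-image gaze features. The sketch below uses made-up feature names and weights purely to illustrate the idea; it is not the model trained in the paper:

    import math

    # Illustrative weights for a logistic relevance model over gaze features.
    WEIGHTS = {
        "total_viewing_time_s": 1.8,    # longer viewing -> more likely relevant
        "num_revisits": 0.9,            # returning to an image is a strong cue
        "mean_fixation_length_s": 2.4,
        "bias": -3.0,
    }

    def predicted_relevance(features):
        """Return P(image is relevant | gaze features) under a logistic model."""
        z = WEIGHTS["bias"]
        for name, value in features.items():
            z += WEIGHTS.get(name, 0.0) * value
        return 1.0 / (1.0 + math.exp(-z))

    # Example: an image viewed for 1.2 s in total, revisited once, 0.3 s average fixations.
    print(predicted_relevance({
        "total_viewing_time_s": 1.2,
        "num_revisits": 1,
        "mean_fixation_length_s": 0.3,
    }))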

Abstract
"We introduce GaZIR, a gaze-based interface for browsing and searching for images. The system computes on-line predictions of relevance of images based on implicit feedback, and when the user zooms in, the images predicted to be the most relevant are brought out. The key novelty is that the relevance feedback is inferred from implicit cues obtained in real-time from the gaze pattern, using an estimator learned during a separate training phase. The natural zooming interface can be connected to any content-based information retrieval engine operating on user feedback. We show with experiments on one engine that there is sufficient amount of information in the gaze patterns to make the estimated relevance feedback a viable choice to complement or even replace explicit feedback by pointing-and-clicking."


Fig1. "Screenshot of the GaZIR interface. Relevance feedback gathered from outer rings influences the images retrieved for the inner rings, and the user can zoom in to reveal more rings."

Fig2. "Precision-recall and ROC curves for userindependent relevance prediction model. The predictions (solid line) are clearly above the baseline of random ranking (dash-dotted line), showing that relevance of images can be predicted from eye movements. The retrieval accuracy is also above the baseline provided by a naive model making a binary relevance judgement based on whether the image was viewed or not (dashed line), demonstrating the gain from more advanced gaze modeling."

Fig 3. "Retrieval performance in real user experiments. The bars indicate the proportion of relevant images shown during the search in six different search tasks for three different feedback methods. Explicit denotes the standard point-and-click feedback, predicted means implicit feedback inferred from gaze, and random is the baseline of providing random feedback. In all cases both actual feedback types outperform the baseline, but the relative performance of explicit and implicit feedback depends on the search task."
  • László Kozma, Arto Klami, and Samuel Kaski: GaZIR: Gaze-based Zooming Interface for Image Retrieval. To appear in Proceedings of the 11th Conference on Multimodal Interfaces and The Sixth Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI), Boston, MA, USA, November 2-6, 2009. (abstract, pdf)

Wednesday, July 22, 2009

TeleGaze update

Remember the TeleGaze robot developed by Hemin Omer, which I wrote about last September? Today there is a new video available showing an updated interface which appears to be somewhat improved; no further information is available.
Update: The new version includes an automatic "person-following" mode which can be turned on or off through the interface. See the video below.

Tuesday, May 26, 2009

Toshiba eye tracking for automotive applications

Seen this one coming for a while. Wonder how stable it would be in a real-life scenario...
Via Donald Melanson at Engadget:
"We've seen plenty of systems that rely on facial recognition for an interface, but they've so far been a decidedly rarer occurrence when it comes to in-car systems. Toshiba looks set to change that, however, with it now showing off a new system that'll not only let you control the A/C or radio with the glance of your eye, but alert you if you happen to take your eyes off the road for too long. That's done with the aid of a camera mounted above the steering wheel that's used to identify and map out the driver's face, letting the car (or desktop PC in this demonstration) detect everything from head movement and eye direction to eyelid blinks, which Toshiba says could eventually be used to alert drowsy drivers. Unfortunately, Toshiba doesn't have any immediate plans to commercialize the technology, although it apparently busily working to make it more suited for embedded CPUs." (source)

Tuesday, May 12, 2009

BBC News: The future of gadget interaction

Dan Simmons at the BBC reports on future technologies from the Science Beyond Fiction 2009 conference in Prague. The news segment includes a section on the GazeCom project, which won the 2nd prize for its exhibit "Gaze-contingent displays and interaction". Their website hosts additional demonstrations.

"Gaze tracking is well-established and has been used before now by online advertisers who use it to decide the best place to put an advert. A novel use of the system tracks someone's gaze and brings into focus the area of a video being watched by blurring their peripheral vision.In the future, the whole image could also be panned left or right as the gaze approaches the edge of the screen. Film producers are interested in using the system to direct viewers to particular parts within a movie. However, interacting with software through simply looking will require accurate but unobtrusive eye tracking systems that, so far, remain on the drawing board... The European Commission (EC) is planning to put more cash into such projects. In April it said it would increase its investment in this field from 100m to 170m euros (£89m-£152m) by 2013. " (BBC source ) More information about the EC CORDIS : ICT program.

External link. BBC reporter Dan Simmons tests a system designed to use a driver's peripheral vision to flag up potential dangers on the road. It was recorded at the Science Beyond Fiction conference in Prague.

The GazeCom project involves several European partner institutions.

Tuesday, May 5, 2009

Gaze-Augmented Manual Interaction (Bieg, H.J., 2009)

Hans-Joachim Bieg of the HCI Group at the University of Konstanz has investigated gaze-augmented interaction on very large display areas. The prototype runs on the 221" Powerwall using a head-mounted setup and allows users to select and zoom into an item of interest based on gaze position. An earlier video demonstration of the setup can be found here.

"This project will demonstrate a new approach to employing users’ gaze in the context of human-computer interaction. This new approach uses gaze passively in order to improve the speed and precision of manually controlled pointing techniques. Designing such gaze augmented manual techniques requires an understanding of the principles that govern the coordination of hand and eye. This coordination is influenced by situational parameters (task complexity, input device used, etc.), which this project will explore in controlled experiments."

Gaze-augmented interaction on the 221" Powerwall
  • Bieg, H. 2009. Gaze-augmented manual interaction. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 3121-3124. DOI= http://doi.acm.org/10.1145/1520340.1520442

Sunday, May 3, 2009

Laval VRchive @ Tokyo Metropolitan University

Hidenori Watanave at the Tokyo Metropolitan University has released a brief video demonstrating gaze interaction for the Laval VRchive. The VRchive is a virtual reality environment for navigating media content. The setup uses a standalone Tobii 1750 tracker and a projector. The simple interface allows gaze control by looking at the top, bottom, left or right areas of the display, as well as winking to perform clicks. Although it is an early version, the initial experiments were successful; the software is, however, still unstable and needs further improvement.


Friday, May 1, 2009

Gaze Controlled Driving

This is the paper on using eye trackers for remote robot navigation that I had accepted for the CHI09 conference. It has now appeared on the ACM website. Note that the webcam tracker referred to in the paper is the ITU Gaze Tracker in an earlier incarnation. The main issue while using it is that head movements affect the gaze position and create an offset. This is easier to correct and counterbalance on a static background than on a moving image (while driving!).

Abstract
"We investigate if the gaze (point of regard) can control a remote vehicle driving on a racing track. Five different input devices (on-screen buttons, mouse-pointing low-cost webcam eye tracker and two commercial eye tracking systems) provide heading and speed control on the scene view transmitted from the moving robot. Gaze control was found to be similar to mouse control. This suggests that robots and wheelchairs may be controlled ―hands-free‖ through gaze. Low precision gaze tracking and image transmission delays had noticeable effect on performance."

  • Tall, M., Alapetite, A., San Agustin, J., Skovsgaard, H. H., Hansen, J. P., Hansen, D. W., and Møllenbach, E. 2009. Gaze-controlled driving. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 4387-4392. DOI= http://doi.acm.org/10.1145/1520340.1520671

Friday, November 21, 2008

Eye movement control of remote robot

Yesterday we demonstrated our gaze-navigated robot at the Microsoft Robotics event here at ITU Copenhagen. The "robot" transmits a video stream which is displayed on a client computer. By using an eye tracker we can direct the robot towards where the user is looking. The concept allows for human-machine interaction with a direct mapping of the user's intention. The Danish National TV (DR) came by today and recorded a demonstration. It will be shown tonight on the nine o'clock news. Below is a video that John Paulin Hansen recorded yesterday which demonstrates the system. Please note that the frame rate of the video stream was well below average at the time of recording; it worked better today. In the coming week we'll look into alternative solutions (suggestions appreciated). The project has been carried out in collaboration with Alexandre Alapetite from DTU. His low-cost, LEGO-based rapid mobile robot prototype gives interesting possibilities for testing human-computer and human-robot interaction.



The virgin tour around the ITU office corridor (on YouTube)



Available on YouTube

Tuesday, November 11, 2008

Gaze vs. Mouse in Games: The Effects on User Experience (Gowases T, Bednarik R, Tukiainen M)

Tersia Gowases, Roman Bednarik (blog) and Markku Tukiainen at the Department of Computer Science and Statistics, University of Joensuu, Finland, have had a paper published in the proceedings of the 16th International Conference on Computers in Education (ICCE).

"We did a simple questionnaire-based analysis. The results of the analysis show some promises for implementing gaze-augmented problem-solving interfaces. Users of gaze-augmented interaction felt more immersed than the users of other two modes - dwell-time based and computer mouse. Immersion, engagement, and user-experience in general are important aspects in educational interfaces; learners engage in completing the tasks and, for example, when facing a difficult task they do not give up that easily. We also did analysis of the strategies, and we will report on those soon. We could not attend the conference, but didn’t want to disappoint eventual audience. We thus decided to send a video instead of us. " (from Romans blog)




Abstract
"The possibilities of eye-tracking technologies in educational gaming are seemingly endless. The question we need to ask is what the effects of gaze-based interaction on user experience, strategy during learning and problem solving are. In this paper we evaluate the effects of two gaze based input techniques and mouse based interaction on user experience and immersion. In a between-subject study we found that although mouse interaction is the easiest and most natural way to interact during problemsolving, gaze-based interaction brings more subjective immersion. The findings provide a support for gaze interaction methods into computer-based educational environments." Download paper as PDF.


Some of this research has also been presented within the COGAIN association, see:
  • Gowases Tersia (2007) Gaze vs. Mouse: An evaluation of user experience and planning in problem solving games. Master’s thesis May 2, 2007. Department of Computer Science, University of Joensuu, Finland. Download as PDF

Monday, November 3, 2008

The Conductor Interaction Method (Rachovides et al.)

An interesting concept combining gaze input with hand gestures by Dorothy Rachovides at the Digital World Research Centre, together with James Walkerdine and Peter Phillips at the Computing Department, Lancaster University.

"This article proposes an alternative interaction method, the conductor interaction method (CIM), which aims to provide a more natural and easier-to-learn interaction technique. This novel interaction method extends existing HCI methods by drawing upon techniques found in human-human interaction. It is argued that the use of a two-phased multimodal interaction mechanism, using gaze for selection and gesture for manipulation, incorporated within a metaphor-based environment, can provide a viable alternative for interacting with a computer (especially for novice users). Both the model and an implementation of the CIM within a system are presented in this article. This system formed the basis of a number of user studies that have been performed to assess the effectiveness of the CIM, the findings of which are discussed in this work.


More specifically the CIM aims to provide the following.

—A More Natural Interface. The CIM will have an interface that utilizes gaze and gestures, but is nevertheless capable of supporting sophisticated activities. The CIM provides an interaction technique that is as natural as possible and is close to the human-human interaction methods with which users are already familiar. The combination of gaze and gestures allows the user to perform not only simple interactions with a computer, but also more complex interactions such as the selecting, editing, and placing of media objects.



—A Metaphor Supported Interface. In order to help the user understand and exploit the gaze and gesture interface, two metaphors have been developed. An orchestra metaphor is used to provide the environment in which the user interacts. A conductor metaphor is used for interacting within this environment. These two metaphors are discussed next.

—A Two-Phased Interaction Method. The CIM uses an interaction process where each modality is specific and has a particular function. The interaction between user and interface can be seen as a dialog that is comprised of two phases. In the first phase, the user selects the on-screen object by gazing at it. In the second phase, with the gesture interface the user is able to manipulate the selected object. These distinct functions of gaze and gesture aim to increase system usability, as they are based on human-human interaction techniques, and also help to overcome issues such as the Midas Touch problem that is often experienced by look-and-dwell systems. As the dialog combines two modalities in sequence, the gaze interface can be disabled after the first phase. This minimizes the possibility of accidentally selecting objects through the gaze interface. The Midas Touch problem can also be further addressed by ensuring that there is ample dead space between media objects.

—Significantly Reduced Learning Overhead. The CIM aims to reduce the overhead of learning to use the system by encouraging the use of gestures that users can easily associate with activities they perform in their everyday life. This transfer of experience can lead to a smaller learning overhead [Borchers 1997], allowing users to make the most of the system's features in a shorter time."
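
The two-phased interaction method described above boils down to a small state machine: gaze is only consulted while nothing is selected, and gestures are only consulted while something is. Below is a minimal sketch of such a dialog, with hypothetical object ids and gesture names; it is not the CIM implementation:

    class ConductorStyleDialog:
        """Phase 1: gaze selects an object. Phase 2: gestures manipulate it.

        Gaze input is ignored during phase 2, which limits Midas Touch problems:
        looking around cannot change or trigger anything while a manipulation is
        in progress.
        """

        def __init__(self, objects):
            self.objects = objects          # e.g. {"photo_1": ..., "clip_2": ...}
            self.selected = None            # None while in the gaze/selection phase

        def on_gaze(self, object_id):
            if self.selected is None and object_id in self.objects:
                self.selected = object_id   # enter the gesture/manipulation phase
                return "selected " + object_id
            return None                     # gaze is disabled during manipulation

        def on_gesture(self, gesture):
            if self.selected is None:
                return None                 # nothing selected yet, gestures do nothing
            if gesture == "release":        # hypothetical gesture ending the dialog
                done, self.selected = self.selected, None
                return "released " + done
            return gesture + " applied to " + self.selected

    dialog = ConductorStyleDialog({"photo_1": object(), "clip_2": object()})
    print(dialog.on_gaze("photo_1"))        # phase 1: selection by gaze
    print(dialog.on_gaze("clip_2"))         # ignored: gaze is disabled in phase 2
    print(dialog.on_gesture("move_left"))   # phase 2: manipulation by gesture
    print(dialog.on_gesture("release"))     # back to the gaze/selection phase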

Thursday, September 18, 2008

The Inspection of Very Large Images by Eye-gaze Control

Nicholas Adams, Mark Witkowski and Robert Spence from the Department of Electrical and Electronic Engineering at Imperial College London received the HCI 08 Award for International Excellence for work related to gaze interaction.

"The researchers presented novel methods for navigating and inspecting extremely large images solely or primarily using eye gaze control. The need to inspect large images occurs in, for example, mapping, medicine, astronomy and surveillance, and this project considered the inspection of very large aerial images, held in Google Earth. Comparative search and navigation tasks suggest that, while gaze methods are effective for image navigation, they lag behind more conventional methods, so interaction designers might consider combining these techniques for greatest effect." (BCS Interaction)

Abstract

The increasing availability and accuracy of eye gaze detection equipment has encouraged its use for both investigation and control. In this paper we present novel methods for navigating and inspecting extremely large images solely or primarily using eye gaze control. We investigate the relative advantages and comparative properties of four related methods: Stare-to-Zoom (STZ), in which control of the image position and resolution level is determined solely by the user's gaze position on the screen; Head-to-Zoom (HTZ) and Dual-to-Zoom (DTZ), in which gaze control is augmented by head or mouse actions; and Mouse-to-Zoom (MTZ), using conventional mouse input as an experimental control.

The need to inspect large images occurs in many disciplines, such as mapping, medicine, astronomy and surveillance. Here we consider the inspection of very large aerial images, of which Google Earth is both an example and the one employed in our study. We perform comparative search and navigation tasks with each of the methods described, and record user opinions using the Swedish User-Viewer Presence Questionnaire. We conclude that, while gaze methods are effective for image navigation, they, as yet, lag behind more conventional methods and interaction designers may well consider combining these techniques for greatest effect.
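
The Stare-to-Zoom behaviour can be pictured as a loop that, while the gaze dwells on a spot, increases the zoom level and drifts the viewport centre towards that spot. The sketch below is illustrative only, with assumed parameters, and is not the system used in the study:

    class GazeZoomView:
        def __init__(self, image_w, image_h):
            self.cx, self.cy = image_w / 2, image_h / 2   # viewport centre (image coords)
            self.zoom = 1.0                               # 1.0 = whole image visible

        def step(self, gaze_img_x, gaze_img_y, dwelling, zoom_rate=1.05, follow=0.2):
            """Advance one frame given the gaze position mapped into image coordinates."""
            if dwelling:                                  # the gaze has stayed put long enough
                self.zoom *= zoom_rate                    # zoom in a little...
                # ...and drift the viewport centre towards the point being stared at.
                self.cx += (gaze_img_x - self.cx) * follow
                self.cy += (gaze_img_y - self.cy) * follow
            return self.cx, self.cy, self.zoom

    view = GazeZoomView(40000, 30000)                     # a very large aerial image
    for _ in range(30):                                   # half a second at 60 Hz
        state = view.step(12000, 9000, dwelling=True)
    print(state)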

This paper is the short version of Nicholas Adams' Master's thesis, which I stumbled upon before creating this blog. An early version appeared as a short paper at COGAIN 2006.

Monday, September 15, 2008

TeleGaze (Hemin, 2008)

"This research investigates the use of eye-gaze tracking in controlling the navigation of mobile robots remotely through a purpose built interface that is called TeleGaze. Controlling mobile robots from a remote location requires the user to continuously monitor the status of the robot through some sort of feedback system. Assuming that a vision-based feedback system is used such as video cameras mounted onboard the robot; this requires the eyes of the user to be engaged in the monitoring process throughout the whole duration of the operation. Meanwhile, the hands of the user need to be engaged, either partially or fully, in the driving task using any input devices. Therefore, the aim of this research is to build a vision based interface that enables the user to monitor as well as control the navigation of the robot using only his/her eyes as inputs to the system since the eyes are engaged in performing some tasks anyway. This will free the hands of the user for other tasks while controlling the navigation is done through the TeleGaze interface. "




The TeleGaze experimental platform consists of a mobile robot, eye gaze tracking equipment and a teleoperation station that the user interacts with. The TeleGaze interface runs on the teleoperation station PC and interprets eye input into control commands, while presenting the user with the images that come back from the vision system mounted on the robotic platform.


More information is available at Hemin Sh. Omer's website.

Associated publications:
  • Hemin Omer Latif, Nasser Sherkat and Ahmad Lotfi, "TeleGaze: Teleoperation through Eye Gaze", 7th IEEE International Conference on Cybernetic Intelligent Systems 2008, London, UK. Conference website: www.cybernetic.org.uk/cis2008
  • Hemin Omer Latif, Nasser Sherkat and Ahmad Lotfi, "Remote Control of Mobile Robots through Human Eye Gaze: The Design and Evaluation of an Interface", SPIE Europe Security and Defence 2008, Cardiff, UK. Conference website: http://spie.org/security-defence-europe.xml