Exploiting eye-hand coordination to detect grasping movements

Miguel Carrasco, Xavier Clady

Research output: Contribution to journal › Article › peer-review



Human beings are very skillful at reaching for and grasping objects under multiple conditions, even when faced with a wide variety of object positions, locations, structures and orientations. This natural ability, controlled by the human brain, is called eye-hand coordination. Understanding this behavior requires studying eye and hand movements simultaneously. This paper proposes a novel approach to detecting grasping movements by means of computer vision techniques. The solution fuses two viewpoints: one obtained from an eye-tracker capturing the user's perspective, and a second captured by a wearable camera attached to the user's wrist. Using information from these two viewpoints, it is possible to characterize multiple hand movements in conjunction with eye-gaze movements through a Hidden Markov Model framework. This paper shows that combining these two sources makes it possible to detect hand gestures using only the objects contained in the scene, even without markers on the surface of the objects. In addition, it is possible to identify the desired object before the user actually grasps it.
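As a rough illustration of the kind of HMM-based gesture classification the abstract describes, the sketch below scores a quantized observation sequence (e.g. discretized eye/hand features) against per-gesture discrete HMMs using the forward algorithm and picks the most likely gesture. The gesture names and all model parameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    via the forward algorithm. pi: initial state probabilities,
    A: state-transition matrix, B: emission probabilities,
    obs: sequence of observation-symbol indices."""
    alpha = pi * B[:, obs[0]]           # initialize with first symbol
    for symbol in obs[1:]:
        alpha = (alpha @ A) * B[:, symbol]  # propagate and weight by emission
    return np.log(alpha.sum())

def classify(obs, models):
    """Return the name of the gesture model with the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Toy two-state HMMs for two hypothetical gestures; parameters are
# illustrative only, not from the paper.
models = {
    "grasp": (np.array([0.9, 0.1]),
              np.array([[0.7, 0.3], [0.2, 0.8]]),
              np.array([[0.9, 0.1], [0.1, 0.9]])),
    "idle":  (np.array([0.5, 0.5]),
              np.array([[0.6, 0.4], [0.4, 0.6]]),
              np.array([[0.5, 0.5], [0.5, 0.5]])),
}

obs = [0, 0, 1, 1, 1]  # a short quantized feature sequence
print(classify(obs, models))
```

In practice each gesture would get its own HMM trained on labeled sequences of fused eye/wrist-camera features, and an unseen sequence would be assigned to the model with the highest likelihood, as above.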

Original language: English
Pages (from-to): 860-874
Number of pages: 15
Journal: Image and Vision Computing
Issue number: 11
State: Published - Nov 2012


Keywords:
  • Grasping movements
  • Hand gesture
  • Hand posture
  • Motion analysis
  • Object recognition
  • Visual system


