Eye-hand coordination is fundamental to the reach-to-grasp action a human performs when picking up an object. This paper proposes a visual sensing approach that analyzes hand and eye motions simultaneously in order to recognize the reach-to-grasp movement, i.e., to predict the grasping gesture. The solution fuses two viewpoints taken from the user's perspective: one from an eye-tracker attached to the user's head, and one from a wearable camera attached to the user's hand. The information from these two viewpoints is used to characterize multiple hand movements, in conjunction with eye-gaze movements, within a Hidden Markov Model (HMM) framework. In various experiments, we show that combining these two sources of information enables prediction of a reach-to-grasp movement as well as identification of the desired object.
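To make the HMM-based recognition concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): quantized gaze/hand features form a discrete observation sequence, which is scored under two HMMs, one assumed to model "reach-to-grasp" motion and one assumed to model other hand motion, via the forward algorithm; the higher-likelihood model gives the predicted label. All state counts, symbols, and probability values below are illustrative assumptions.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow.

    pi: initial state probabilities
    A:  state transition matrix A[s][t]
    B:  emission probabilities B[state][symbol]
    """
    n = len(pi)
    # Initialization with the first observation
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    c = sum(alpha)
    alpha = [a / c for a in alpha]
    log_lik = math.log(c)
    # Recursion over the remaining observations
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
        c = sum(alpha)
        alpha = [a / c for a in alpha]
        log_lik += math.log(c)
    return log_lik

# Toy models: 2 hidden states, 3 observation symbols (all values assumed).
pi = [0.6, 0.4]
A  = [[0.7, 0.3], [0.4, 0.6]]
B_grasp = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]   # "reach-to-grasp" model
B_other = [[0.2, 0.6, 0.2], [0.3, 0.4, 0.3]]   # "other motion" model

obs = [0, 0, 2, 2, 1]  # quantized fused gaze + hand feature symbols
score_grasp = forward_log_likelihood(obs, pi, A, B_grasp)
score_other = forward_log_likelihood(obs, pi, A, B_other)
label = "reach-to-grasp" if score_grasp > score_other else "other"
```

In practice one HMM would be trained per gesture (and per candidate object), and the model with the highest forward likelihood over a sliding window of fused gaze and hand features would give the prediction.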