2015-03-24, Hosei University Graduate School of Science and Engineering
We built a gesture-recognition interface system that determines, from among the objects projected on a display, which one a user has selected by pointing with a finger, and we evaluated its effectiveness and stability. When selecting a particular object by pointing at it, people generally place their fingertip on a virtual straight line connecting their eye and the target object. The position of the selected object on the display can therefore be estimated from the measured 3D coordinates of the user's fingertip and eyes. To obtain this 3D information, we used two configurations: our own combination of a new Time-Of-Flight (TOF) camera, which directly acquires the target's depth information, with an ordinary Web camera, and a commercially available Kinect v2 image sensor, which captures both ordinary color and depth images and integrates them in calibrated form.
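The geometric core of this estimation can be sketched as a ray-plane intersection: extend the line from the eye through the fingertip until it meets the display plane. The following is a minimal illustrative sketch, not the authors' implementation; the coordinate frame, plane parameters, and function name are assumptions for the example.

```python
import numpy as np

def pointing_target(eye, fingertip, plane_point, plane_normal):
    """Intersect the eye->fingertip pointing ray with the display plane.

    All arguments are 3D points/vectors in a common world frame
    (an assumption; the paper's actual calibration is not shown here).
    Returns the 3D intersection point, or None if the ray is parallel
    to the plane or points away from it.
    """
    eye = np.asarray(eye, dtype=float)
    fingertip = np.asarray(fingertip, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    d = fingertip - eye                      # pointing direction
    denom = n @ d
    if abs(denom) < 1e-9:                    # ray parallel to display plane
        return None
    t = n @ (np.asarray(plane_point, dtype=float) - eye) / denom
    if t <= 0:                               # display plane behind the eye
        return None
    return eye + t * d

# Example: display plane z = 0, eye 2 m away, fingertip offset slightly.
p = pointing_target([0.0, 0.0, 2.0], [0.1, 0.0, 1.5], [0, 0, 0], [0, 0, 1])
# -> array([0.4, 0. , 0. ])
```

The returned point can then be compared against the known on-screen positions of the projected objects to decide which one was selected.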