Abstract
3D human hand pose estimation from visual data has received increasing
attention, and the availability of low-cost depth cameras has given
great impetus to the development of this field. Nearly all recent hand
pose estimation methods are oriented toward the unified evaluation
criteria defined by popular public benchmark datasets: the ultimate
goal is to minimize estimation error. However, a gap remains between
human hand pose estimation and its applications. It is unclear how to
recover the global and local degrees of
freedom from a set of structural hand joints, a prerequisite for
applying human hand pose estimation to teleoperation, i.e., mapping
estimated human hand poses on the master side to robotic hand poses on
the slave side. Conventional teleoperation systems rely on data gloves
or are essentially built on gesture recognition. These solutions fall
short of vision-based hand pose estimation in offering an easy-to-use
and natural human-robot interaction interface. In this paper, we
propose three methods for teleoperating robotic hands via 3D
vision-based human hand pose estimation. The feasibility of the three
methods is tested in a simulated environment.