
Applying 3D Human Hand Pose Estimation to Teleoperation
Siwei Ma, Rui Li, Haopeng Lu, Zhenyu Liu
Siwei Ma
National Engineering Research Center of Visual Technology, School of Computer Science, Peking University, Beijing 100871, China

Corresponding author: [email protected]

Rui Li
National Engineering Research Center of Visual Technology, School of Computer Science, Peking University, Beijing 100871, China
Haopeng Lu
Cooperative Medianet Innovation Center, Shanghai Jiaotong University, Shanghai 200030, China
Zhenyu Liu
State Key Lab of CAD&CG, Zhejiang University, Hangzhou 310027, China

Abstract

3D human hand pose estimation from visual data has received increasing attention, and the availability of low-cost depth cameras has given great impetus to the field. Nearly all recent hand pose estimation methods are oriented towards the unified evaluation criteria defined by popular public benchmark datasets: the ultimate goal is to reduce estimation error. However, a gap remains between human hand pose estimation and its applications. It is unclear how to recover global and local degrees of freedom from a set of structural hand joints, yet this recovery is a prerequisite for applying human hand pose estimation to teleoperation, i.e., mapping estimated human hand poses at the master side to robotic hand poses at the slave side. Conventional teleoperation systems are implemented with the aid of a data glove or are essentially built on gesture recognition; both solutions fall short of vision-based hand pose estimation in offering an easy-to-use and natural human-robot interaction interface. In this paper, we propose three methods for teleoperating robotic hands via 3D vision-based human hand pose estimation, and we test their feasibility in a simulated environment.
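
The recovery step the abstract highlights, extracting global and local degrees of freedom from estimated hand joints, can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not the paper's method: it assumes 3D keypoints are available as NumPy arrays, recovers a local flexion angle from the two bone vectors meeting at a joint, and builds a global palm frame from three assumed palm keypoints (wrist, index MCP, pinky MCP).

```python
import numpy as np

def joint_flexion_angle(p_prev, p_joint, p_next):
    """Local DoF sketch: flexion at p_joint from three 3D keypoints.

    Returns 0 for a straight finger and grows as the joint bends,
    a simple proxy for a revolute joint command on a robotic hand.
    """
    u = np.asarray(p_prev, float) - np.asarray(p_joint, float)
    v = np.asarray(p_next, float) - np.asarray(p_joint, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.pi - np.arccos(np.clip(cos_t, -1.0, 1.0))

def global_palm_frame(wrist, index_mcp, pinky_mcp):
    """Global DoF sketch: palm rotation matrix and wrist translation
    from three palm keypoints (the keypoint choice is an assumption)."""
    x = index_mcp - wrist
    x = x / np.linalg.norm(x)
    z = np.cross(x, pinky_mcp - wrist)  # palm normal
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                  # completes a right-handed frame
    return np.stack([x, y, z], axis=1), wrist  # columns are frame axes

# Example with made-up index-finger keypoints (MCP, PIP, DIP), in cm:
mcp = np.array([0.0, 0.0, 0.0])
pip = np.array([0.0, 4.0, 0.0])
dip = np.array([0.0, 7.0, 2.0])
print(np.degrees(joint_flexion_angle(mcp, pip, dip)))  # ~33.7 degrees
```

Angles recovered this way could then be retargeted to the corresponding revolute joints of a robotic hand at the slave side; how that mapping is actually performed is the subject of the three methods the paper proposes.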