\subsection{Related Studies on Kinect Comparison}

The gaming industry is pushing toward the use of markerless optical systems for easily tracking people, objects, and spaces with an acceptable computational effort and a convenient hardware investment. Major improvements have been carried out since the first such solution was successfully introduced into the home video game market \cite{larssen2004understanding}, i.e., the Microsoft Kinect, which acquires the player's kinematics to control a video game. In the last decade, several new range-sensing devices have been developed and made available for application development at affordable costs. In 2010, Microsoft, in cooperation with PrimeSense, released a structured-light (SL) based range-sensing camera, the so-called Kinect\texttrademark, which delivers reliable depth images at VGA resolution at 30\,Hz, coupled with an RGB color camera at the same image resolution. Even though the camera was mainly designed for gaming, it achieved great popularity in the scientific community, where researchers have developed a large number of innovative applications in fields such as online 3D reconstruction, medical applications and health care, and augmented reality.

\subsubsection{Kinect's Structured Light Sensing Principles}

Even though the principle of structured-light (SL) range sensing is comparatively old, the launch of the Microsoft Kinect\texttrademark\ (Kinect\textsubscript{SL}) in 2010 as an interaction device for the Xbox 360 clearly demonstrates the maturity of the underlying principle. The structured-light approach is an active stereo-vision technique. A sequence of known patterns is sequentially projected onto an object, where each pattern is deformed by the object's geometric shape. The object is then observed by a camera from a different direction. By analyzing the distortion of the observed pattern, i.e., the disparity with respect to the original projected pattern, depth information can be extracted; see Figure 1 \cite{sarbolandi2015kinect}.
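The depth-extraction step described above can be sketched with the standard triangulation relation for an active stereo pair (a generic formulation, not taken from the cited work; the symbols $z$, $f$, $b$, and $d$ are introduced here for illustration):

\begin{equation*}
  z = \frac{f\,b}{d},
\end{equation*}

where $z$ is the distance from the sensor to the object point, $f$ is the focal length of the observing camera, $b$ is the baseline between the pattern projector and the camera, and $d$ is the measured disparity of the observed pattern with respect to the projected one. Intuitively, nearby objects shift the pattern by a large disparity while distant objects shift it only slightly, which is why the recovered depth is inversely proportional to $d$.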