According to \citet{nissimov2015obstacle}, obstacle detection is performed independently for every video frame based on the depth and color maps. The first step is computing the slope of the depth-image pixels. The Kinect outputs both color and depth as $640 \times 480$ pixel images, each with an angular field of view of $57^{\circ}$ horizontally and $43^{\circ}$ vertically. The working range of the sensor is between 0.5 m and 5.0 m (Khoshelham and Elberink, 2012), and depth-image pixels outside this operating range are ignored. A high vertical slope is an indication of an obstacle: pixels not exceeding a slope threshold are labeled as surface areas, and all other pixels are marked as suspected obstacles (see the first sketch below).

Preventing and avoiding an undesired collision is indeed the safest approach for human-robot coexistence. The main information needed by any on-line collision avoidance algorithm is the relative distance between the robot and the obstacles in its workspace, which is acquired by exteroceptive sensors either fixed in the environment or mounted on the robot. The performance of the algorithm also depends on fast processing of the sensor data. \citet{Flacco_2012} proposed a new efficient method for estimating obstacle-to-robot distances that works directly in the depth space associated with a depth sensor (e.g., a Kinect monitoring the HRI scene); see the second sketch below.

A collision detection system based on a velocity-distance bound algorithm was implemented. The system consists of three components: a solid modeler, a kinematic simulator, and a collision detection control module. The solid modeler defines the solids, derives approximations to them, transforms their geometry into given positions, and, for any two solids, provides the previously discussed extended distance function. The kinematic simulator, using kinematic models of the robot links, determines where all of the solids are placed and provides an interactive user interface. Once invoked by the user, the collision detection module uses the simulator to determine where the solids are placed at a time $t$, uses the modeler to evaluate the extended distance function values for the solids at time $t$, and chooses a $dt$ at which to repeat the simulation/detection cycle at $t + dt$ \cite{culley1986collision}; the third sketch below illustrates this loop.
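
The slope test of \citet{nissimov2015obstacle} can be made concrete with a short sketch. The following Python fragment is a minimal reconstruction, assuming a pinhole camera model with the $640 \times 480$ resolution and $43^{\circ}$ vertical field of view quoted above; the slope definition (rise over run between vertically adjacent pixels) and the threshold value are hypothetical stand-ins, not the paper's exact formulation, and the color-map refinement step is omitted.

\begin{verbatim}
import numpy as np

# Geometry quoted in the text: 640 x 480 depth image, 43 deg vertical
# FOV, 0.5--5.0 m working range. The slope threshold is hypothetical.
H, W = 480, 640
FY = (H / 2.0) / np.tan(np.radians(43.0 / 2.0))  # focal length [px]

def detect_obstacles(depth, slope_threshold_deg=45.0,
                     z_min=0.5, z_max=5.0):
    """Flag depth pixels whose vertical slope exceeds the threshold."""
    in_range = (depth >= z_min) & (depth <= z_max)  # drop out-of-range
    v = (np.arange(H, dtype=float) - H / 2.0).reshape(-1, 1)
    y = depth * v / FY                  # pinhole height per pixel
    dy = np.abs(np.diff(y, axis=0))     # vertical rise, adjacent rows
    dz = np.abs(np.diff(depth, axis=0)) # forward run, adjacent rows
    slope_deg = np.degrees(np.arctan2(dy, dz + 1e-9))
    suspects = np.zeros((H, W), dtype=bool)
    suspects[1:, :] = slope_deg > slope_threshold_deg
    # Pixels below the threshold count as surface; the rest are
    # suspected obstacles.
    return suspects & in_range
\end{verbatim}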
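
The depth-space distance of \citet{Flacco_2012} can be sketched in a similar spirit. The fragment below back-projects a robot point of interest and an obstacle pixel through an assumed pinhole model and returns their Cartesian distance; the intrinsics are hypothetical placeholders, and the paper's conservative treatment of regions occluded by obstacles is not reproduced here.

\begin{verbatim}
import numpy as np

# Hypothetical Kinect-like intrinsics; a real system would use the
# sensor's calibration instead.
FX, FY = 580.0, 580.0
CX, CY = 319.5, 239.5

def backproject(u, v, d):
    """Pinhole back-projection of a depth-image pixel (u, v, depth d)."""
    return np.array([(u - CX) * d / FX, (v - CY) * d / FY, d])

def obstacle_to_point_distance(p_uv, p_d, o_uv, o_d):
    """Distance between a robot point of interest and an obstacle
    pixel, both given in depth space (pixel coordinates plus depth)."""
    return np.linalg.norm(backproject(*o_uv, o_d)
                          - backproject(*p_uv, p_d))

# The obstacle nearest to the point of interest is then the minimum of
# this distance over all suspected-obstacle pixels in the depth image.
\end{verbatim}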
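
Finally, the simulation/detection cycle of \cite{culley1986collision} reduces to a simple loop. In the sketch below, \texttt{simulator.place\_solids} and \texttt{modeler.extended\_distance} are hypothetical interfaces standing in for the kinematic simulator and the solid modeler; the velocity-distance bound supplies $dt$: if the smallest extended distance is $d$ and no pair of solids approaches faster than $v_{\max}$, no collision can occur within $d / v_{\max}$.

\begin{verbatim}
from itertools import combinations

MIN_STEP = 1e-3  # hypothetical floor on dt, guarantees progress

def collision_detection_cycle(simulator, modeler, t_start, t_end, v_max):
    """Velocity-distance bound loop: advance time by the smallest gap
    divided by the maximum relative speed, until collision or t_end."""
    t = t_start
    while t <= t_end:
        solids = simulator.place_solids(t)  # poses of all solids at t
        d_min = min(modeler.extended_distance(a, b)
                    for a, b in combinations(solids, 2))
        if d_min <= 0.0:
            return t  # solids touch or interpenetrate at time t
        # No pair closing at speed <= v_max can cover d_min in less
        # than d_min / v_max, so the cycle safely repeats at t + dt.
        t += max(d_min / v_max, MIN_STEP)
    return None  # no collision detected over [t_start, t_end]
\end{verbatim}

Choosing $dt$ this way lets the cycle take large steps while the solids are far apart and small steps only near contact.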