% Nicolas Saunier edited Methodology Video Data.tex almost 10 years ago (commit 9109213fd3e4aa37319ec6299c7418b6a0d95023)
...
\subsection{Video Data}
Road user trajectories are extracted from video data using
computer vision, specifically a feature-based tracking algorithm.
\subsubsection{Trajectories: Positions in Space and Time (x,y,t)}
Trajectories are, at the simplest level, a series of points in Cartesian space representing the position of (the center of) a moving object
(road user) at time $t$ on a planar surface. Height $z$ is usually not considered. Points are evenly spaced in time with a consistent $\Delta t$ equal to the inverse of the frame rate of the
video, i.e.\ one measurement is made for each frame. Typical frame rates for video are between 15 and 30 frames per second, providing 15 to 30 observations per moving object per second. The object
(road user) itself is represented by a mass of characteristic features
which are spread over the object, closely spaced and moving in unison.
Feature grouping is handled by clustering features that remain close together and move consistently over time.
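The two ideas above can be sketched in code: a trajectory as evenly spaced $(x, y, t)$ samples with $\Delta t$ equal to the inverse of the frame rate, and a deliberately simplified grouping rule that merges features staying close together across frames. This is an illustrative sketch, not the tracker's actual algorithm; the function names, the 15~fps value, and the distance threshold are assumptions for the example.

```python
import numpy as np

FPS = 15.0            # a typical frame rate from the text
DT = 1.0 / FPS        # consistent time step between observations

def speeds(xy):
    """Finite-difference speeds (m/s) along one trajectory.

    xy: array of shape (n_frames, 2), ground-plane positions,
    one row per frame, evenly spaced in time by DT.
    """
    return np.linalg.norm(np.diff(xy, axis=0), axis=1) / DT

def group_features(positions, max_dist=1.0):
    """Crude feature grouping: two features belong to the same object
    if they stay within max_dist of each other in every frame, a proxy
    for 'closely spaced and moving in unison'. Groups are the connected
    components of this relation.

    positions: array of shape (n_features, n_frames, 2).
    Returns one integer group label per feature.
    """
    n = positions.shape[0]
    connected = np.ones((n, n), dtype=bool)
    for f in range(positions.shape[1]):
        d = np.linalg.norm(positions[:, None, f] - positions[None, :, f], axis=-1)
        connected &= d <= max_dist
    # Flood fill over the connectivity matrix to label components.
    labels = -np.ones(n, dtype=int)
    next_label = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack, labels[i] = [i], next_label
        while stack:
            j = stack.pop()
            for k in np.nonzero(connected[j])[0]:
                if labels[k] < 0:
                    labels[k] = next_label
                    stack.append(k)
        next_label += 1
    return labels

# Two features riding on the same object, one drifting away.
pos = np.array([
    [[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]],   # feature A
    [[0.3, 0.1], [0.8, 0.1], [1.3, 0.1]],   # feature B, moves with A
    [[0.5, 0.0], [2.0, 0.0], [4.0, 0.0]],   # feature C, separates
])
print(group_features(pos))   # -> [0 0 1]
print(speeds(pos[0]))        # 0.5 m per frame at 15 fps -> [7.5 7.5] m/s
```

A real feature-based tracker groups features on motion cues accumulated over a sliding window rather than this all-frames proximity test, but the connected-components structure of the grouping step is the same.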
Three potential sources of error exist: parallax, pixel resolution, and tracking errors.
...
\item Finally, \textbf{tracking errors} may occur due to scene visibility issues or limitations of current computer vision techniques. These erroneous observations have to be rejected or reviewed manually. [CITE]
\end{itemize}
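The parallax item can be made concrete with similar-triangle geometry: a feature at height $h$ above the ground, seen by a camera at height $H$, projects onto the ground plane beyond its true horizontal distance $d$. The symbols and numeric values below are illustrative, not from the source.

```python
def parallax_displacement(H, h, d):
    """Ground-plane position error caused by parallax.

    A camera at height H views a feature at height h and true horizontal
    distance d. The viewing ray meets the ground plane at d * H / (H - h),
    so the projected position overshoots by d * h / (H - h)
    (similar triangles; all lengths in metres, requires h < H).
    """
    return d * h / (H - h)

# Illustrative numbers: camera mounted 10 m up, feature on a vehicle
# roof 1.5 m above ground, 30 m away from the camera.
error = parallax_displacement(H=10.0, h=1.5, d=30.0)
print(round(error, 2))   # -> 5.29
```

The displacement grows with distance $d$ and feature height $h$ and shrinks as the camera is mounted higher, which is one reason elevated vantage points reduce parallax error.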
Depending on the steps taken to
minimize tracking areas, feature-based tracking functions best over study areas 50--100~m in length with high-to-medium speed, low-to-medium density flows.
A sample of road user trajectories is presented as they are tracked in image space in Figure~\ref{fig:conflict-video}. For more information on computer vision, see Section~\ref{software}.