Paul St-Aubin edited Methodology Video Data.tex
over 9 years ago
Commit id: f9910016294d62888e5ac989aa8fc10f0c8d21e5
\subsubsection{Trajectories: Positions in Space and Time (x,y,t)}
Trajectories are a series of points in Cartesian space representing the position of (the center of) a moving object (road user) at time $t$ on a planar surface. Height $z$ is usually not considered. Points are evenly spaced in time with a consistent $\Delta t$ equal to the inverse of the framerate of the video, i.e. one measurement is made per frame. Typical video framerates are between 15 and 30 frames per second, providing 15 to 30 observations per moving object per second. The object (road user) itself is represented by a group of characteristic features spread over the object and moving in unison.
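This sampling scheme can be sketched in a few lines; all names below are illustrative and not taken from any particular tracking software:

```python
# Sketch: a trajectory as (x, y) positions sampled once per video frame,
# so observations are evenly spaced by dt = 1 / framerate.
framerate = 30.0          # frames per second (typical range: 15-30)
dt = 1.0 / framerate      # time step between observations, in seconds

# positions (x, y) in metres on the ground plane, one per frame (illustrative values)
trajectory = [(0.0, 0.0), (0.4, 0.1), (0.8, 0.2), (1.2, 0.3)]

# timestamp of the i-th observation
timestamps = [i * dt for i in range(len(trajectory))]
```

At 30 frames per second this yields 30 position observations per road user per second, each $\Delta t \approx 0.033$~s apart.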
A sample of tracked road user trajectories is presented in image space in Figure~\ref{fig:conflict-video}. The computer vision software is covered in greater detail in section \ref{software}.
Three potential sources of error exist: parallax, pixel resolution, and tracking errors:
\begin{itemize}
\item \textbf{Parallax error} is mitigated by maximising the subtending angle between the camera and the height of tracked objects. In practical terms, this requires a high angle of view or, ideally, a bird's eye view, and tracking objects with a small height-to-base ratio. Passenger cars are generally more forgiving in this respect than heavy vehicles or pedestrians.
\item \textbf{Pixel resolution} determines measurement precision: objects further away from the camera are tracked with lower precision than objects near the camera. Error due to pixel resolution is mitigated by placing the camera as close to the study area as possible and using high-resolution cameras, although increases in resolution offer diminishing returns in tracking accuracy.
\item Finally, \textbf{tracking errors} may occur due to scene visibility issues or limits of current computer vision techniques. These erroneous observations have to be rejected or reviewed manually. Some attempts have been made at validating and optimising tracking accuracy using search algorithms and MOT measures of performance \cite{ettehadieh15systematic}. This method is replicated for this study using a genetic algorithm; see section \ref{tracking_calibration} for results.
\end{itemize}
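The effect of parallax on ground-projected positions can be illustrated with a simple pinhole-geometry sketch (the geometry and all values below are assumptions for illustration, not taken from the tracking software): a feature at height $h$ on a road user, viewed by a camera at height $H$ and horizontal distance $d$, projects onto the ground plane with an offset of $dh/(H-h)$ by similar triangles.

```python
# Sketch (assumed pinhole geometry, illustrative only): ground-plane parallax
# error for a feature at height h, seen by a camera mounted at height H,
# with the tracked object at horizontal distance d from the camera.

def parallax_error(h, H, d):
    """Offset (m) between a feature's true ground position and where the
    camera ray through it hits the ground: d * h / (H - h) by similar
    triangles (camera at (0, H), feature at (d, h))."""
    assert H > h, "camera must be mounted above the tracked feature"
    return d * h / (H - h)

# Why tall objects are less forgiving: a truck roof (h = 3 m) vs a
# passenger-car roof (h = 1.5 m), camera 10 m high, object 30 m away.
print(parallax_error(3.0, 10.0, 30.0))   # ~12.9 m for the truck
print(parallax_error(1.5, 10.0, 30.0))   # ~5.3 m for the car
```

The error shrinks as the camera is mounted higher or more directly overhead (a larger subtending angle), which is the mitigation described above.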
Depending on the steps taken to minimize tracking errors, feature-based tracking functions best over study areas of 50--100~m in length with high-to-medium speed, low-to-medium density flows.
\subsubsection{Derived Data: Velocity \& Acceleration}
Velocity and acceleration measures are derived by differentiating position and velocity with respect to time, respectively. These are 2-dimensional vectors with a magnitude (speed and acceleration magnitude, respectively) and a heading. The heading of the velocity vector is typically used to determine the orientation of the vehicle.
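Since positions are uniformly sampled at $\Delta t = 1/\text{framerate}$, this differentiation reduces to finite differencing. A minimal sketch, assuming illustrative position values (not real tracking output):

```python
import numpy as np

# Sketch: derive velocity and acceleration vectors from a uniformly
# sampled (x, y) trajectory by finite differencing over dt.
framerate = 30.0
dt = 1.0 / framerate

# (x, y) positions in metres, one row per video frame (illustrative values)
positions = np.array([[0.0, 0.0], [0.5, 0.0], [1.1, 0.1], [1.8, 0.3]])

velocity = np.diff(positions, axis=0) / dt        # m/s; one fewer sample
acceleration = np.diff(velocity, axis=0) / dt     # m/s^2; two fewer samples

speed = np.linalg.norm(velocity, axis=1)              # magnitude of velocity
heading = np.arctan2(velocity[:, 1], velocity[:, 0])  # orientation, radians
```

The `heading` computed from the velocity vector is what stands in for the vehicle's orientation, as noted above; a smoothing filter is often applied before differencing in practice, since differentiation amplifies position noise.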