diff --git a/Methodology Video Data.tex b/Methodology Video Data.tex
index 70353a7..ec70a1e 100644
--- a/Methodology Video Data.tex
+++ b/Methodology Video Data.tex
...
\subsubsection{Trajectories: Positions in Space and Time (x,y,t)}
Trajectories are, at the simplest level, a series of points in
Cartesian space representing the position of (the center of) a moving object at time $t$ on a planar surface. Height $z$ is usually not considered. Points are evenly spaced in time with a consistent $\Delta t$ equal to the inverse of the framerate of the video. Typical framerates for video are between 15 and 30 frames per second, providing 15 to 30 observations per moving object per second. The object itself is represented by a mass of characteristic features closely spaced and moving in unison. Feature grouping is handled by the tracking algorithm.
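In code, such a trajectory reduces to one row of $(x, y, t)$ per frame. A minimal sketch in Python, where the framerate, the motion model, and all values are purely illustrative:

```python
import numpy as np

# A hypothetical trajectory: the position of one object's center,
# sampled once per video frame at a consistent dt = 1 / framerate.
fps = 25                    # typical video framerate (15-30 fps)
dt = 1.0 / fps              # consistent spacing between observations
n_frames = 100

t = np.arange(n_frames) * dt     # evenly spaced timestamps
x = 2.0 + 10.0 * t               # e.g. object moving at 10 m/s along x
y = 1.5 + 0.0 * t                # constant y (straight path on the plane)

# One row per frame: (x, y, t); height z is not considered.
trajectory = np.column_stack([x, y, t])
```

At 25 fps this yields 25 observations per moving object per second, consistent with the 15 to 30 range above.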
Three potential sources of error exist:
parallax, pixel resolution, and tracking errors.
\begin{itemize}
\item
\textbf{Parallax error} is mitigated by maximising the subtending angle between the camera and the height of tracked objects. In practical terms, this requires a high viewpoint or ideally a bird's-eye view, and tracking objects with a small height-to-base ratio. Passenger cars are generally more forgiving in this respect than trucks or pedestrians.
\item \textbf{Pixel resolution} determines measurement precision. Objects further away from the camera are tracked with lower precision than objects near the camera. Error due to pixel resolution is mitigated by placing study areas nearer to the camera and by using high-resolution cameras, although increases in resolution offer diminishing returns in tracking distance.
\item Finally, \textbf{tracking errors} may occur when scene visibility is poor or due to limitations of current computer vision techniques. These erroneous observations must be rejected or reviewed manually. [CITE]
\end{itemize}
...
Velocity and acceleration measures are derived by differentiating position and velocity over time, respectively. These are two-dimensional vectors with a magnitude (speed or acceleration) and a heading. The heading of the velocity vector is typically used to determine the orientation of the vehicle.
It should be noted, however, that each successive differentiation amplifies pixel-precision error for that measure: a velocity measurement compounds the error of two position measurements, and an acceleration measurement compounds the error of three. This type of error can be compensated for with moving-average smoothing over a short window (e.g. 5 frames). At this time, acceleration measurements are still too noisy to be useful as instantaneous observations. Higher camera resolutions should solve this problem in future applications.
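The differentiation and smoothing steps above can be sketched as follows, assuming trajectories are stored as NumPy arrays of positions; the function names and the 5-frame window are illustrative:

```python
import numpy as np

def smooth(values, window=5):
    """Centered moving average over a short window (e.g. 5 frames)
    to damp pixel-precision noise. Edge values are attenuated."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")

def derive(positions, dt):
    """Velocity and acceleration by successive finite differences.
    positions: (n, 2) array of (x, y); dt = 1 / framerate."""
    vel = np.gradient(positions, dt, axis=0)       # (n, 2) velocity vectors
    speed = smooth(np.linalg.norm(vel, axis=1))    # smoothed magnitude
    heading = np.arctan2(vel[:, 1], vel[:, 0])     # used as vehicle orientation
    acc = np.gradient(vel, dt, axis=0)             # second derivative: noisier
    return speed, heading, acc
```

For real tracking data the acceleration returned here inherits the compounded pixel error discussed above, which is why smoothing is applied to the speed and why instantaneous accelerations remain unreliable.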
\subsubsection{Size of Data}
...
\begin{equation} \label{eqn:data-size} n = fQd \end{equation}
where $f$ is the number of frames per second of the video, $Q$ is the average hourly
flow-rate, and $d$ is the average dwell time of each vehicle in the scene (excluding full stops). Dwell time is affected by the size of the analysis area in the scene and by the average speed. As such, the size of the analysis area needs to be carefully selected. Furthermore,
over-representation of objects
travelling below the average speed needs to be accounted for in all calculations. One option is to sample data per object with, for example, a simple mean, or alternatively to
re-sample observations by position instead of time. For a series of equally spaced points in a grid, hex map, or along a spline, the
re-sampled value $m'_j$ at the point $p_j$ is the average
\begin{equation} \label{eqn:resampling} m'_j = \frac{\sum_{i=1}^{n}{m_i} }{n} \end{equation}
...
\begin{equation} \label{eqn:resampling-constraint-spline} [(mS_i-pS_j) < (pS_{j+1}-pS_j)] \end{equation}
for a spline (see section \ref{alignments} for the coordinate $S$). This choice of
re-sampling will vary from one context to the next.
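For the spline case, the position-based re-sampling can be sketched as a simple binning average, assuming \texttt{s\_obs} holds the observations' curvilinear coordinates ($mS_i$), \texttt{m\_obs} the corresponding measurements, and \texttt{s\_grid} the equally spaced re-sampling points ($pS_j$); all names here are illustrative:

```python
import numpy as np

def resample_by_position(s_obs, m_obs, s_grid):
    """Average measurements m_obs, observed at curvilinear coordinates
    s_obs, onto equally spaced grid points s_grid: each grid value is
    the mean of the observations falling in its interval, mirroring
    the constraint (mS_i - pS_j) < (pS_{j+1} - pS_j)."""
    # Assign each observation to the interval [s_grid[j], s_grid[j+1])
    idx = np.digitize(s_obs, s_grid) - 1
    out = np.full(len(s_grid), np.nan)   # NaN where no observation falls
    for j in range(len(s_grid)):
        in_bin = m_obs[idx == j]
        if in_bin.size:
            out[j] = in_bin.mean()
    return out
```

Because slower objects contribute more observations per interval, averaging within each position bin rather than over time is one way to correct the over-representation discussed above.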