% section_Introduction_The_use_of__.tex
% Edited by Nicolas Saunier, almost 9 years ago (commit f272af1a1655f5992695142f64dbfd6c33c8bf29)
\section{Introduction}
The use of video data for automatic traffic data collection and analysis has been on an upward trend as more powerful computational tools and detection and tracking technology become available. Not only can video sensors emulate inductive loops to collect basic traffic variables such as counts and speed, as in the commercial system Autoscope \cite{michalopoulos91autoscope}, but they can also provide increasingly accurate higher-level information regarding road user behavior and interactions. Examples include pedestrian gait parameters \cite{saunier11stride-length-trr}, crowd dynamics~\cite{johansson08crowd} and surrogate safety analysis applied to motorized and non-motorized road users in various road facilities~\cite{St_Aubin_2013,Sakshaug_2010,Autey_2012}. Video sensors are relatively inexpensive and easy to install, or are already installed, for example by transportation agencies for traffic monitoring: large datasets can therefore be collected for large-scale or long-term traffic analysis. This so-called ``big data'' phenomenon brings opportunities to better understand transportation systems, along with its own challenges for data analysis~\cite{st-aubin15big-data}.
Despite the undeniable progress of video sensors and computer vision algorithms in their varied transportation applications, there is still a distinct lack of large-scale comparisons of the performance of video sensors under varied conditions such as the complexity of the traffic scene, the characteristics of the cameras~\cite{Wan_2014} and their installation (height, angle), the environmental conditions (e.g.\ the weather)~\cite{Fu_2015}, etc. Such comparisons are hampered by the poor characterization of the datasets used for performance evaluation and by the limited availability of benchmarks and public video datasets for transportation applications~\cite{saunier14dataset}. Tracking performance is often reported using ad hoc and incomplete metrics such as ``detection rates'' instead of standard and more suitable metrics such as CLEAR MOT~\cite{Bernardin_2008}. Finally, computer vision algorithms are typically adjusted manually, by trial and error, using a small dataset covering few of the conditions affecting performance; performance evaluated on that same dataset is thus over-estimated. As in other fields such as machine learning, algorithms should be systematically optimized on a calibration dataset, while performance should be reported on a separate validation dataset~\cite{ettehadieh15systematic}.
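For reference, the CLEAR MOT metrics~\cite{Bernardin_2008} summarize tracking performance in two scores, the Multiple Object Tracking Accuracy (MOTA), which aggregates the misses $m_t$, false positives $fp_t$ and identity mismatches $mme_t$ over all frames $t$, relative to the number of ground truth objects $g_t$, and the Multiple Object Tracking Precision (MOTP), the average distance $d^i_t$ over the $c_t$ matched object--hypothesis pairs in each frame:
\begin{equation}
\textrm{MOTA} = 1 - \frac{\sum_t \left(m_t + fp_t + mme_t\right)}{\sum_t g_t}, \qquad
\textrm{MOTP} = \frac{\sum_{i,t} d^i_t}{\sum_t c_t}
\end{equation}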
While the performance of sensors for simpler traffic variables has been more extensively studied, not all factors have been systematically analyzed, and issues with parameter optimization and with the lack of separate calibration and validation datasets abound. Besides, the relationship of tracking performance with performance for traffic parameters has never been investigated.
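The need for separate calibration and validation datasets can be sketched as follows; the scoring function below is a toy stand-in for running a tracker and comparing its output to ground truth, and all names and values are illustrative assumptions, not the actual data of this study:

```python
import random

def tracking_score(param, ideal):
    """Toy stand-in for the tracking accuracy obtained on one video sample
    with parameter value `param`: it peaks when `param` matches the
    sample's (unknown) ideal value, and is negative otherwise."""
    return -(param - ideal) ** 2

def mean_score(param, dataset):
    return sum(tracking_score(param, s) for s in dataset) / len(dataset)

rng = random.Random(0)
# Hypothetical video samples whose ideal parameter value varies around 2.0.
samples = [2.0 + rng.gauss(0, 0.3) for _ in range(20)]
calibration, validation = samples[:10], samples[10:]

# Grid of candidate parameter values from 0.0 to 5.0.
candidates = [i * 0.1 for i in range(51)]
p_star = max(candidates, key=lambda p: mean_score(p, calibration))

score_cal = mean_score(p_star, calibration)  # optimistic: seen during tuning
score_val = mean_score(p_star, validation)   # honest estimate to report
```

Since `p_star` is chosen to maximize the score on the calibration set, `score_cal` tends to overstate performance; only `score_val`, computed on held-out samples, should be reported.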
The objective of this paper is first to improve the performance of existing detection and tracking methods for video data, in terms of the accuracy of tracking, but also of different kinds of traffic data such as counts, speeds, gaps and road user interactions. This is done through the optimization of the tracking parameters using a genetic algorithm comparing the tracker output with manually annotated trajectories. The method is applied to a set of traffic videos extracted from a large surrogate safety study of several roundabout merging zones~\cite{st-aubin15big-data}, covering factors such as two types of cameras, the camera resolution and two weather conditions. The second objective is to explore the transferability of parameters to separate datasets with the same properties (consecutive video samples) and across different properties, by reporting how optimizing tracking for one condition impacts performance, in terms of tracking and traffic parameters, for the other conditions. This paper is a follow-up on \cite{ettehadieh15systematic} that investigates more factors and how tracking performance is related to the accuracy of traffic parameters.
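As a rough illustration of the calibration procedure, the following sketch optimizes two hypothetical tracking parameters with a simple genetic algorithm; the fitness function is a toy surrogate for running the tracker and scoring its output against annotated trajectories, and the parameter names, bounds and GA settings are all illustrative assumptions:

```python
import random

# Hypothetical tracker parameters and bounds; the names are illustrative
# stand-ins, not the actual parameters calibrated in this study.
PARAM_BOUNDS = {
    "feature_quality": (0.01, 0.5),      # minimum corner quality for feature detection
    "connection_distance": (1.0, 10.0),  # maximum distance to group features into objects
}

def fitness(params):
    """Toy stand-in for the real fitness: running the tracker with `params`
    on the calibration videos and scoring the output against manually
    annotated trajectories (e.g. with MOTA). Here, a smooth surrogate
    peaking at an arbitrary 'ideal' setting."""
    return -((params["feature_quality"] - 0.1) ** 2
             + 0.01 * (params["connection_distance"] - 4.0) ** 2)

def random_individual(rng):
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_BOUNDS.items()}

def mutate(ind, rng, rate=0.3):
    child = dict(ind)
    for k, (lo, hi) in PARAM_BOUNDS.items():
        if rng.random() < rate:
            # Gaussian perturbation, clipped to the parameter bounds
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
    return child

def crossover(a, b, rng):
    return {k: (a[k] if rng.random() < 0.5 else b[k]) for k in PARAM_BOUNDS}

def calibrate(generations=30, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]  # keep the best quarter unchanged
        pop = elite + [mutate(crossover(rng.choice(elite), rng.choice(elite), rng), rng)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = calibrate()
```

Transferability, as studied here, would then amount to running `calibrate` on videos from one condition and evaluating the resulting parameters on videos from another.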
This paper is organized as follows: the next section provides a brief overview of the current state of computer vision and calibration in traffic applications; the detailed methodology is then presented, including the ground truth inventory, the measures of performance and the calibration procedure, followed by a presentation and discussion of the results, to conclude with a summary and recommendations for future research.