Comparison of Various Time-to-Collision Prediction and Aggregation Methods for Surrogate Safety Analysis
Traditional methods of road safety analysis rely on direct observation of road accidents, data which are rare and expensive to collect and which carry the social cost of placing citizens at risk of unknown danger. Surrogate safety analysis is a growing discipline in the field of road safety analysis that promises a more proactive approach to road safety diagnosis. This methodology uses non-crash traffic events, and measures thereof, as predictors of collision probability and severity (Tarko 2009), as they are significantly more frequent, cheaper to collect, and carry no social cost.
Time-to-collision (TTC) is an example of an indicator that primarily measures collision probability: the smaller the TTC, the less time drivers have to perceive and react before a collision, and thus the higher the probability of a collision outcome. Relative positions and velocities between road users, or between a user and obstacles, can be characterised by a collision course and the corresponding TTC. Meanwhile, driving speed (absolute speed) is an example of an indicator that primarily measures collision severity: the higher the travelling speed, the more stored kinetic energy is dissipated during a collision impact (Elvik 2005, Aarts 2006). Similarly, large speed differentials between road users, or with stationary obstacles, may also contribute to collision severity, though the TTC depends on relative distance as well. Driving speed is used extensively in stopping-sight distance models (Olson 1984), with some even suggesting that drivers modulate their emergency braking in response to travel speed (Fambro 2000). Others contend that there is little empirical evidence of a relationship between speed and collision probability (Hauer 2009).
Many surrogate safety methods have been used in the literature, especially recently with the renewal of automated data collection methods, but consistency in the definitions of traffic events and indicators, in their interpretation, and in the transferability of results is still lacking. While a wide diversity of models demonstrates that research in the field is thriving, there remains a need for comparison of the methods, and even for a methodology of comparison, in order to make surrogate safety practical for practitioners. For example, time-to-collision measures collision course events, but the definition of a collision course lacks rigour in the literature. Systematic validation of the different techniques is also lacking. Some early attempts were made with the Swedish Traffic Conflict Technique (Hydén 1987) using trained observers, but more recent attempts across different methodologies, preferably with automated and objectively defined measures, are still needed. Ideally, this would be done with respect to crash data and crash-based safety diagnosis. The second-best method is to compare the characteristics of all the methods and their results on the same data set, but public benchmark data is also very limited despite recent efforts (Ardo 2014).
The objectives of this paper are to review the definition and interpretation of one of the most ubiquitous and least context-sensitive surrogate safety indicators, namely time-to-collision, for surrogate safety analysis using i) consistent, recent, and, most importantly, objective definitions of surrogate safety indicators, ii) a very large data set across numerous sites, and iii) the latest developments in automated analysis. This work examines the use of various motion prediction methods, constant velocity, normal adaptation and observed motion patterns, for the TTC safety indicator (for its properties of transferability), and space and time aggregation methods for continuous surrogate safety indicators. This represents an application of surrogate safety analysis to one of the largest data sets to date.
The earliest attempt to implement surrogate safety analysis was the traffic conflict technique (TCT). The TCT was conceived at General Motors in the 1960s (Perkins 1968) and was adapted soon after in many countries, particularly England (Spicer 1973, Grayson 1984), Sweden (Hydén 1984), Israel (Hakkert 1984), and Canada in the 1970s and 1980s. TCTs provide conceptual and operational definitions of traffic events and safety indicators, along with methods to interpret field observations for safety diagnosis. TCTs categorize traffic events by risk of collision according to a set of guidelines developed to train observers for manual field data collection. Unfortunately, these efforts have not fully matured, as several problems have persisted with reproducibility, non-transferability, subjectivity of observations, and data collection cost (Hauer 1978, Williams 1981, Kruysse 1991, Chin 1997).
There has been a resurgence in the field lately (Gettman 2003), with efforts to modernize the technique by automating data collection and analysis, particularly using video data and computer vision (Laureshyn 2009). A variety of indicators and analysis methods have been proposed; however, the field faces the same problems of non-transferability of results without some level of reliability testing (Tarko 2009, Saunier 2014).
A wide variety of safety indicators is presented in the literature (Tarko 2009). Too many, in fact: many of these indicators are study-specific or site-specific and as such suffer from the same problems of non-transferability and non-reproducibility as the TCTs. Instead, the following indicators are proposed for their ubiquity in the literature and for their generalizable properties, applicable to all types of traffic behaviour in any traffic safety study of any type of road infrastructure:
Speed requires no introduction as a behaviour measure, as its effects on collision severity are well established and well researched in the literature (Elvik 2005, Aarts 2006). However, its usefulness as a predictor of collision probability is still questionable, with some in favour (Elvik 2005, Aarts 2006) and others against (Hauer 2009), and it does not offer perfect transferability, as geometric factors and exposure come into play. Accident rates do not always scale linearly with speed (Aarts 2006), e.g. when comparing highways and intersections.
Time-to-collision (TTC), first proposed by (Hayward 1971), is an indicator describing the time remaining for two road users (or a road user and an obstacle) on a collision course to collide. It relies on a motion prediction method. TTC is measured continuously and can evolve over time if road users take evasive action and change collision course. The dimension of TTC is time, and it decreases over time at a one-to-one ratio if the initial conditions of the collision course remain unchanged for lack of driver action or reaction. As such, it is generally accepted in the literature as a potential substitute for collisions resulting from driver errors and is typically proposed as a trigger for collision-avoidance systems (van der Horst 1993). Its interpretation is that lower TTC values are associated with a higher probability of collision; in fact, a TTC of exactly 0 is a collision by definition. TTCs can manifest themselves in virtually every type of driving scenario and are therefore ideal candidates for transferability.
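In the simplest case, a one-dimensional rear-end collision course with unchanging speeds, TTC reduces to the spacing divided by the closing speed. The following minimal sketch (a hypothetical helper, not part of any cited implementation) illustrates the definition and the fact that no TTC exists when there is no collision course:

```python
def ttc_rear_end(gap_m, follower_speed_ms, leader_speed_ms):
    """TTC for a follower closing on a leader (1D rear-end case).

    Returns None when the follower is not closing the gap,
    i.e. there is no collision course and TTC is undefined.
    """
    closing_speed = follower_speed_ms - leader_speed_ms
    if closing_speed <= 0.0:
        return None  # gap constant or growing: no collision course
    return gap_m / closing_speed

# A follower at 20 m/s, 30 m behind a leader at 15 m/s:
# TTC = 30 / (20 - 15) = 6 s
print(ttc_rear_end(30.0, 20.0, 15.0))  # -> 6.0
```

Evaluated at every instant along observed trajectories, this quantity yields the continuous TTC profile described above.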
Post-encroachment time (PET) is the time between successive arrivals at the same point in space by two road users (Allen 1978, Laureshyn 2010). Interactions with a measurable PET are very common at intersections, but not necessarily in other environments (notably highways (St-Aubin 2013)), which could make comparisons difficult between different classes of road infrastructure. While PET is computed once for a pair of road users from observed trajectory data, predicted PET is computed continuously based on motion prediction. As they share the same dimension of time, PET has the same interpretation of safety as TTC, although possibly not the same magnitude of impact.
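Observed PET can be computed directly from trajectory data as the time between the first user leaving a common conflict zone and the second user arriving at it. The sketch below is an illustrative, hypothetical implementation (the conflict zone is approximated by a disc of assumed radius around a conflict point; it is not the implementation of (Allen 1978) or (Laureshyn 2010)):

```python
import math

def post_encroachment_time(traj1, traj2, conflict_point, radius=1.0):
    """Observed PET between two road users at a conflict point.

    traj1, traj2: lists of (t, x, y) samples; conflict_point: (x, y).
    Returns None if a user never passes within `radius` of the point.
    """
    def times_in_zone(traj):
        cx, cy = conflict_point
        return [t for t, x, y in traj
                if math.hypot(x - cx, y - cy) <= radius]

    t1, t2 = times_in_zone(traj1), times_in_zone(traj2)
    if not t1 or not t2:
        return None  # no measurable PET for this pair
    if max(t1) <= min(t2):
        return min(t2) - max(t1)  # user 1 clears the zone first
    if max(t2) <= min(t1):
        return min(t1) - max(t2)  # user 2 clears the zone first
    return 0.0  # simultaneous occupancy of the zone
```

Predicted PET replaces the observed arrival times with times extrapolated by a motion prediction method, as noted above.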
TTC depends on robust motion prediction methods, i.e. the ability to predict possible future positions of moving objects according to a set of consistent, context-aware, and rigorous definitions of natural motion. These methods identify situations, large and small, in which road users find themselves on potential collision courses with other users or obstacles, and measure the expected time of arrival at the potential collision point. In the strictest sense, the collision of two moving bodies predicted 10 or even 60 seconds into the future constitutes a collision course; however, such times are so large that i) the prediction model is probably inaccurate, and ii) road users are more than capable of correcting their course in that time. A number of methods are employed in robotics, computer vision, and transportation applications to predict natural motion, evaluated against criteria such as accuracy, performance, and effective time horizon (Sivaraman 2013), but a few stand out for their suitability for surrogate safety modelling and recent applications:
Constant velocity is the simplest motion prediction model: vehicles are projected along straight paths at constant speed and heading, using the velocity vector at that moment in time. This models simple Newtonian motion in which no driver action is applied to the motion of the bodies in reaction to an event or a navigational decision.
This model is the simplest and most commonly used, often implicitly and without justification, but it also makes the most assumptions: only one movement is predicted at every instant (dependent on the velocity vector), it does not depend on context (road geometry or traffic), and driver actions are assumed to be the only forces acting on a moving object (it does not account for friction or for wheels already engaged in a rotation). These assumptions may be adequate for specific applications of the methodology, e.g. highways (St-Aubin 2013), but not all. The current implementation is based on (Laureshyn 2010).
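Under constant-velocity prediction, the TTC of two users in the plane has a closed form: extrapolating both positions linearly, the squared predicted distance is a quadratic in time, and TTC is its smallest non-negative root below a collision distance threshold. The sketch below is an illustrative point-kinematics version (the threshold `collision_distance` is an assumed proxy for vehicle extents, not a value from the cited implementation):

```python
import numpy as np

def ttc_constant_velocity(p1, v1, p2, v2, collision_distance=2.0):
    """TTC under constant-velocity motion prediction (2D points).

    p1, p2: positions (m); v1, v2: velocity vectors (m/s).
    Returns None when the predicted straight paths never bring the
    users within collision_distance, i.e. no collision course.
    """
    dp = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    dv = np.asarray(v2, dtype=float) - np.asarray(v1, dtype=float)
    # |dp + t*dv|^2 = collision_distance^2 is quadratic in t
    c = dp.dot(dp) - collision_distance ** 2
    if c <= 0.0:
        return 0.0  # already closer than the collision distance
    a = dv.dot(dv)
    b = 2.0 * dp.dot(dv)
    if a == 0.0:
        return None  # identical velocities: the gap never changes
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # paths never come close enough
    t = (-b - disc ** 0.5) / (2.0 * a)  # earliest crossing of threshold
    return t if t >= 0.0 else None  # negative root: users diverging
```

For example, two vehicles 100 m apart closing head-on at 10 m/s each reach the 2 m threshold after (100 - 2) / 20 = 4.9 s.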
Normal adaptation uses the initial velocity vector at the prediction moment to project trajectories, but modifies the velocity vector to account for normal driver variation iteratively from that initial velocity. This model is probabilistic and benefits from a wider range of possible outcome velocity vectors, but otherwise suffers from dependency on many of the same assumptions as the constant velocity prediction method. The implementation of normal adaptation studied is based on (Mohamed 2013).
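The effect of normal adaptation can be illustrated with a Monte Carlo sketch: each sampled trajectory starts from the observed relative speed and perturbs it at every time step with a zero-mean normal term, yielding a distribution of TTC values rather than a single one. All parameters below (`accel_sd`, `horizon_s`, `dt`) are illustrative assumptions, not the calibrated values of (Mohamed 2013):

```python
import random

def ttc_normal_adaptation(gap_m, rel_speed_ms, n_samples=1000,
                          accel_sd=0.5, horizon_s=10.0, dt=0.5, seed=1):
    """Monte Carlo sketch of normal-adaptation prediction (1D).

    The closing speed is perturbed at each step by a zero-mean normal
    'adaptation' acceleration; each sampled trajectory contributes at
    most one TTC. Returns the list of sampled TTC values.
    """
    rng = random.Random(seed)
    ttcs = []
    for _ in range(n_samples):
        gap, v = gap_m, rel_speed_ms
        t = 0.0
        while t < horizon_s:
            v += rng.gauss(0.0, accel_sd) * dt  # normal driver variation
            gap -= v * dt
            t += dt
            if gap <= 0.0:
                ttcs.append(t)  # this sample reaches a collision
                break
    return ttcs
```

The resulting sample can be summarized by its mean or by the fraction of trajectories that collide within the horizon, reflecting the probabilistic character of the model.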
Motion patterns are a family of models which use machine learning techniques to calculate future position likelihoods from past behaviour (Saunier 2007, Morris 2008, Sivaraman 2013). This type of model is the most promising, as its motion prediction is probabilistic in nature and inherently models naturalistic behaviour. However, motion patterns are also more complex to implement and more expensive to process, requiring training data encompassing the space where all collision courses may occur. The type of motion pattern being studied for implementation is a simple, supervised, discretized probability motion pattern matrix (St-Aubin 2014).
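The idea of a discretized probability motion pattern matrix can be sketched as follows: the scene is divided into grid cells, observed cell-to-cell transitions in training trajectories are counted, and the counts are normalized into empirical next-cell probabilities. This is a toy stand-in for, not a reproduction of, the matrix of (St-Aubin 2014):

```python
import numpy as np

def learn_transition_matrix(trajectories, grid_size, cell_m=1.0):
    """Learn a discretized motion-pattern transition matrix.

    trajectories: lists of (x, y) positions from training data.
    Returns an (n x n) row-stochastic matrix over grid cells, where
    row i gives the empirical next-cell distribution from cell i.
    """
    n = grid_size * grid_size
    counts = np.zeros((n, n))

    def cell_index(x, y):
        return int(y // cell_m) * grid_size + int(x // cell_m)

    # Count observed transitions between consecutive positions
    for traj in trajectories:
        for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
            counts[cell_index(x0, y0), cell_index(x1, y1)] += 1

    row_sums = counts.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        probs = np.where(row_sums > 0, counts / row_sums, 0.0)
    return probs
```

Prediction then follows the chain: the distribution over positions k steps ahead of cell i is row i of the k-th matrix power, so predicted collision points and their probabilities fall out of the learned matrix rather than a single extrapolated path.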
The source code for the calculation of all of these indicators is (or will be) available in the open-source project “Traffic Intelligence” (Jackson 2013).
It should also be noted that motion prediction methods that take into account several paths that may lead road users to collide also mode