Safety
At least 90% of vehicle accidents are estimated to result from human error. Adopting AVs could therefore reduce or eliminate the largest single cause of car accidents, while AVs may also outperform human drivers in perception, decision-making and execution. However, AVs introduce new safety issues. Collingwood and Litman highlight that vehicle occupants may reduce seatbelt use and pedestrians may become less cautious because they feel safer. Moreover, eliminating human error does not eliminate machine error: as the technology grows in complexity, so does the probability of technical faults compromising vehicle safety. The fatal 2016 crash involving Tesla’s Autopilot illustrates the uncertainty of machine perception and the technology’s inability to avoid accidents in certain scenarios.

Concerns also arise over how AVs should be programmed, via “crash algorithms”, to respond in unavoidable accidents. Because of the “lack of blame” in such accidents, the harm an AV causes cannot be assessed subjectively, which necessitates rules governing AVs’ responses to moral dilemmas. It is unclear, however, how to arrive at these rules. Algorithms may be programmed to prioritise the safety of the AV’s occupants “over anything else”, which supports the economic viability of developing AVs; but using the individual self-interest of AV occupants to justify harm inflicted on others undermines the functions of law itself. Alternatively, algorithms may be programmed to reach the most socially beneficial decision based on a range of factors, though how to select and weigh those factors remains unclear (a sketch contrasting the two approaches appears at the end of this section).

Regulators have also yet to agree on an acceptable level of safety or to define legitimate methods of determining the safety of AVs. AVs’ performance could improve over time with real-world driving experience, but this is only possible if the public accepts the technology.
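To make the contrast between the two programming strategies concrete, the following is a minimal Python sketch; it is not drawn from any real AV system, and Maneuver, occupant_first, social_welfare and the harm estimates are all hypothetical. It reduces each strategy to a choice rule over estimated harms; any real crash algorithm would face the same unresolved question of how to estimate and weigh harm to others.

from dataclasses import dataclass

@dataclass
class Maneuver:
    """One feasible evasive action and its estimated consequences.

    All fields are hypothetical illustrations; a real system would have
    to estimate such quantities from noisy perception and prediction.
    """
    name: str
    occupant_harm: float   # expected harm to the AV's occupants (0..1)
    external_harm: float   # expected harm to other road users (0..1)

def occupant_first(options: list[Maneuver]) -> Maneuver:
    # Strategy 1: protect the occupants "over anything else".
    # Harm to others is considered only to break ties.
    return min(options, key=lambda m: (m.occupant_harm, m.external_harm))

def social_welfare(options: list[Maneuver],
                   external_weight: float = 1.0) -> Maneuver:
    # Strategy 2: minimise a weighted sum of all expected harm.
    # Choosing external_weight is precisely the open moral and
    # regulatory question described above: there is no agreed way
    # to set it, or to decide which other factors belong in the sum.
    return min(options,
               key=lambda m: m.occupant_harm
                             + external_weight * m.external_harm)

if __name__ == "__main__":
    options = [
        Maneuver("swerve into barrier", occupant_harm=0.6, external_harm=0.0),
        Maneuver("brake in lane",       occupant_harm=0.1, external_harm=0.5),
    ]
    print(occupant_first(options).name)       # brake in lane
    print(social_welfare(options, 1.5).name)  # swerve into barrier

The two rules diverge on the same inputs, which illustrates why the choice between them, and the weights within the second, cannot be settled by engineering alone and instead calls for the regulatory rules the text describes.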