Potential disadvantages
A direct impact of widespread adoption of automated vehicles is the loss
of driving-related jobs in the road transport industry. There could be
resistance from professional drivers and unions who are threatened by
job losses. In addition, there could be job losses in public transit
services and crash repair shops, and the automobile insurance industry
might suffer as the technology makes certain aspects of these
occupations obsolete. A frequently cited paper by Michael Osborne and
Carl Benedikt Frey found that automated cars would make many jobs
redundant.
Privacy could be an issue if the vehicle’s location and position are
integrated into an interface that other people can access. There is
also a risk of automotive hacking via the information shared through
V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) protocols,
as well as a risk of terrorist attacks: self-driving cars could be
loaded with explosives and used as bombs.
The lack of stressful driving, more productive time during the trip,
and potential savings in travel time and cost could become incentives
to live far from cities, where land is cheaper, while working in the
city’s core. This would increase travel distances and induce more urban
sprawl, more fuel consumption, and a larger carbon footprint for urban
travel. There is also the risk that traffic congestion might increase
rather than decrease. Appropriate public policies and regulations, such
as zoning, pricing, and urban design, are required to avoid the
negative impacts of increased suburbanization and longer-distance
travel.
Some believe that once automation in vehicles reaches higher levels and
becomes reliable, drivers will pay less attention to the road. Research
shows that drivers in automated cars react later when they have to
intervene in a critical situation than when driving manually. Depending
on the capabilities of automated vehicles and the frequency with which
human intervention is needed, this delayed reaction may counteract any
increase in safety over all-human driving that other factors might
deliver.
Ethical and moral reasoning come into consideration when programming the
software that decides what action the car takes in an unavoidable crash;
whether the automated car will crash into a bus, potentially killing
people inside; or swerve elsewhere, potentially killing its own
passengers or nearby pedestrians. A question that programmers of AI
systems find difficult to answer (as do ordinary people, and ethicists)
is “what decision should the car make that causes the ‘smallest’ damage
to people’s lives?”
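One common way this question is made concrete is to frame the choice as minimizing expected harm across the available maneuvers. The sketch below is purely illustrative: the maneuvers, probabilities, and exposure counts are invented for the example, and no manufacturer’s actual decision logic is this simple or this inspectable.

```python
# Illustrative only: a toy expected-harm minimizer for an unavoidable crash.
# All maneuvers, probabilities, and people counts are hypothetical.

def expected_harm(outcomes):
    """Sum of (probability of harming someone) x (number of people at risk)."""
    return sum(p * people for p, people in outcomes)

# Each candidate maneuver maps to a list of (probability of harm, people exposed).
maneuvers = {
    "brake_straight": [(0.3, 4)],            # risk to a bus carrying 4 people
    "swerve_left":    [(0.6, 2)],            # risk to 2 nearby pedestrians
    "swerve_right":   [(0.5, 1), (0.2, 1)],  # risk to own passenger and a cyclist
}

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best)  # → swerve_right (lowest expected harm: 0.7)
```

Even in this toy form, the output hinges entirely on the assumed probabilities and on whose exposure is counted and how it is weighted, which is precisely where the ethical disagreement lies.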
The ethics of automated vehicles are still being articulated, and may
lead to controversy. They may also require closer consideration of the
variability, context-dependency, complexity and non-deterministic nature
of human ethics. Different human drivers make various ethical decisions
when driving, such as avoiding harm to themselves, or putting themselves
at risk to protect others. These decisions range from rare extremes such
as self-sacrifice or criminal negligence, to routine decisions good
enough to keep the traffic flowing but bad enough to cause accidents,
road rage and stress.
Human thought and reaction time may sometimes be too slow to detect the
risk of an upcoming fatal crash, think through the ethical implications
of the available options, or act on an ethical choice. Whether a
particular automated vehicle’s capacity to correctly detect an upcoming
risk, analyze the options, or choose a ‘good’ option from among bad
choices would match or exceed a particular human’s may be difficult to
predict or assess. Part of the difficulty is that an automated system’s
understanding of the ethical issues at play in a given road scenario is
sensed for an instant out of a continuous stream of synthetic physical
predictions of the near future, and depends on layers of pattern
recognition and situational intelligence. Because it originates in
probabilistic machine learning rather than a simple, plain-English
‘human values’ logic of parsable rules, it may be opaque to human
inspection. The depth of understanding, predictive power, and ethical
sophistication needed will be hard to implement, and even harder to
test or assess.
The scale of this challenge may have other effects. Few entities may be
able to marshal the resources and AI capacity necessary to meet it, the
capital necessary to take an automated vehicle system to market and
sustain it operationally for the life of a vehicle, and the legal and
‘government affairs’ capacity to handle potential liability for a
significant proportion of traffic accidents. This could narrow the
number of different system operators and erode the presently quite
diverse global vehicle market down to a small number of system
suppliers.