Who’s Liable for Accidents Caused by Autonomous Vehicles?

At The American Institute for Economic Research, Caleb Fuller argues that we should just let the market decide on fully autonomous vehicles:

Discussions of how regulation could “get in front of” self-driving cars are therefore incomplete, and ultimately, may cost lives. According to the National Highway Traffic Safety Administration, over 42,000 people perished on U.S. roads in 2021. What that implies is that self-driving cars would be an improvement if, with autonomous vehicles widely prevalent, “only” 41,000 people were to perish in car accidents.

To put this even more starkly, were those numbers accurate, it would imply that every year regulators delay because driverless cars are not yet perfectly safe, they would be killing a thousand people on net.

My point is not that I know what these numbers are, nor am I an expert on the regulatory hurdles these vehicular innovations must overcome. Rather, I wish to make the more general, conceptual point that net deaths may occur due to regulators’ insisting on making self-driving cars safer.

Ex ante regulation of the type being discussed for driverless vehicles stipulates ahead of time the specifications a product must comply with. It necessarily invokes an arbitrary set of safety standards. It also short-circuits the local, tacit knowledge that producers have about how to make their products or production processes safer. Ironically, safety regulation can make us less safe, for precisely this reason.

I don’t know how to navigate the trade-offs inherent in creating a risky product (i.e. any product). Neither do you. But markets do.

I’m going to divide my remarks into three sections: vehicle autonomy, liability, and proposals.

SAE International (the Society of Automotive Engineers) has devised the following classification scheme for vehicle autonomy, with levels running from 0 to 5 (a code sketch follows the list):

Level 0: No automation
Level 1: Driver assistance
Level 2: Partial automation
Level 3: Conditional automation
Level 4: High automation
Level 5: Full automation
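
For concreteness, here is a minimal sketch modeling the levels as a Python enum. The level names follow SAE's J3016 standard; the helper function and its cutoff are my own simplification, not part of the standard:

    from enum import IntEnum

    class SAELevel(IntEnum):
        """Driving-automation levels per SAE J3016."""
        NO_AUTOMATION = 0
        DRIVER_ASSISTANCE = 1
        PARTIAL_AUTOMATION = 2
        CONDITIONAL_AUTOMATION = 3
        HIGH_AUTOMATION = 4
        FULL_AUTOMATION = 5

    def human_fallback_required(level: SAELevel) -> bool:
        # Through Level 2 a human must supervise continuously; at Level 3
        # a human must be ready to take over on request. Only at Levels 4
        # and 5 does the system serve as its own fallback.
        return level <= SAELevel.CONDITIONAL_AUTOMATION

    for level in SAELevel:
        print(f"Level {level.value} ({level.name}): "
              f"human fallback required: {human_fallback_required(level)}")

The line between Level 3 and Level 4 is where the liability question bites: up through Level 3 a human remains in the control loop; from Level 4 on, the software is the driver.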

Presently, most “autonomous” vehicles are at Level 2 or Level 3; Waymo claims to have built a vehicle at Level 4. In 2010 I made a cash wager that there would be fewer than 10 street-legal Level 5 vehicles on the road in the United States by 2020. I collected. Level 5 vehicles are not expected for 10 to 20 years, but I wouldn’t be a bit surprised if fully autonomous vehicles joined the list of things which, like practical nuclear fusion, always seem to be 10-20 years away. I note that the most recent nuclear fusion result, which appeared tantalizingly successful, does not appear to be reproducible.

In terms of liability, nearly all automobile accidents are the result of driver error. The balance are the result of either manufacturer defect or an act of God. To the best of my knowledge, those are the only alternatives.

Volvo has, correctly in my view, taken the position that all automobile accidents involving autonomous vehicles are the result of a failure of workmanship.

Now to my proposal. In my opinion Mr. Fuller is wrong in one particular: there is a fundamental difference between motor vehicle accidents involving ordinary vehicles and those involving autonomous vehicles. As long as the vehicles are operated and maintained according to manufacturer recommendations, all accidents involving autonomous vehicles are the result of a failure of workmanship on the part of the manufacturer, which, as I note above, is the view taken by Volvo.

Therefore my proposal can be summed up in two words: strict liability. Manufacturers should be held strictly liable for all accidents involving autonomous vehicles. That means that no intent or recklessness need be proven, only that there was an accident.

So, in a sense, I’m coming down on the same side as Mr. Fuller: the market can handle it, but only if it must. Insurance companies are not charitable organizations. If there is no practical way for the owner of a flawed vehicle to prove that the vehicle was at fault, and nothing owners can do (other than not owning an autonomous vehicle) to avoid liability, any accident will be blamed on the owner and not covered by the insurer, or insuring those vehicles will be prohibitively expensive. Either outcome would strongly discourage the ownership of autonomous vehicles. Consequently, strict liability on the part of manufacturers is a good way to encourage the development, sale, and purchase of autonomous vehicles, capturing the benefits Mr. Fuller notes.

5 comments
  • CuriousOnlooker

    I don’t believe strict liability is the barrier to adoption.

    Given technological changes in cars (ubiquitous telematics, i.e., collection of data on the car’s operation, including video), changes in automaker business models (a focus on making money from subscriptions and other post-sale services, e.g., OnStar, Toyota’s Service Connect, dealer maintenance), and flexible software updates, automakers could sell autonomous technology as a subscription, with liability coverage included in the subscription.

    Imagine a world where GM sells an autonomous-equipped Cadillac, but to enable autonomous mode, the customer has to subscribe to “OnStar Platinum” (priced roughly the same as insurance). OnStar Platinum covers any liability arising from the operation of autonomous mode, as long as you follow its conditions, of which the two most important are never turning off or tampering with the telematics and never refusing software updates.

    My rough rule of thumb: if car makers can make autonomous mode safer than the average human driver, they can offer autonomous subscriptions cheaper than traditional insurance, which would give them a lucrative new revenue stream.
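
    A back-of-the-envelope sketch of that rule of thumb (every figure below is an illustrative assumption, not actuarial data):

        # Can a bundled autonomous subscription undercut traditional insurance?
        # All numbers are made-up placeholders for illustration.

        human_premium = 1800.0      # assumed average annual premium, human driver ($)
        human_claim_cost = 1200.0   # assumed expected annual claim cost, human driver ($)
        av_risk_reduction = 0.5     # assume autonomous mode halves accident risk
        loading = 1.25              # markup on expected claims for overhead and profit

        av_claim_cost = human_claim_cost * (1 - av_risk_reduction)
        subscription_floor = av_claim_cost * loading

        print(f"Expected annual claim cost in autonomous mode: ${av_claim_cost:,.0f}")
        print(f"Lowest sustainable subscription price: ${subscription_floor:,.0f}")
        print(f"Headroom under the human-driver premium: ${human_premium - subscription_floor:,.0f}")

    The larger the safety improvement, the more headroom there is between the subscription floor and what drivers already pay, and that headroom is the new revenue stream.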

    As a foreshadowing of this, I’ve seen that one automaker’s app has a link indicating that it is getting into the auto insurance business, using its telematics data on driving behavior to generate attractive rates from partner insurance firms.

  • Andy

    Yeah, I agree with Curious. I’m sensitive right now to insurance costs because I have two teenagers in the house. This basically doubles our car insurance.

    And with autonomous vehicles, the rates of fatal accidents (and of accidents generally) will decrease, reducing insurance rates (especially for teenagers).

    The big question is who will pay for the insurance. The model that Curious proposes is an interesting one, but there are others.

  • TastyBits

    The place to start would be vehicles where the maximum number of parameters can be controlled – trains and subways. Vehicles using set routes would be next – buses and mail delivery. Markers and sensors could be placed under, over, or alongside existing roadways.

    Unless self-driving vehicles are networked, they are not exempt from higher-level decision-making, and unfortunately, a fully formed decision tree would be far too complex. So the vehicles will need to be as smart as the average driver, and with average smarts, you will get average drivers.

    Compared to the human brain, the best supercomputer is an idiot. It took millions of years to evolve from the single-celled organism to the reckless driver, and I doubt reckless drivers are going to program a better driver anytime soon.

    Autonomous action by an AI will require the ability to make dangerous and potentially catastrophic decisions. An AI that could not harm a human could not be a cop or doctor. Actually, it would be almost useless. So, it would need to not harm a human “very much”.

  • Grey Shambler

    I think that liability will be complicated by human error and aggression: not only autonomous vehicles, but AVs that can anticipate human peccadilloes. What if a human driver (HD) habitually swerved right before making a left turn? What if an angry HD cut in and braked suddenly in front of the AV, or did so for financial gain?

  • abe

    Hmmmm, so the subject is self-driving, computer-controlled cars? Well, let’s say two cars are headed in opposite directions, something goes wrong, and the computers, being aware of who is in each car, reach the conclusion that they can save only one car. Having access to The Big Data Base, they know one car has important, highly skilled individuals: doctors, business executives, politicians, etc. The other car has an equal number of individuals, but while they are all good fellows, they are just general laborers. Will the computers contain algorithms to save the individuals of greater “value” to society, or will they perhaps just flip a coin? Note: you could run this question using one car full of Democrats, one with Republicans, and one with Independents! What say you all?
