What can a self-driving car crash teach us about the politics of machine learning?

—Jack Stilgoe—

In May 2016, a Tesla Model S was involved in what could be considered the world’s first self-driving car fatality. In the middle of a sunny afternoon, on a divided highway near Williston, Florida, Joshua Brown, an early adopter and Tesla enthusiast, died at the wheel of his car. The car failed to see a white truck that was crossing his path. While in ‘Autopilot’ mode, Brown’s car hit the trailer at 74 mph. The crash only came to light publicly in late June 2016, when Tesla published a blog post, headlined ‘A tragic loss’, that described Autopilot as being ‘in a public beta phase’.

Self-driving cars, quintessentially ‘smart’ technologies, are not born smart. Their brains are still not fully formed. The algorithms that their creators hope will soon allow them to handle any eventuality are continually being updated with new data. The cars are learning to drive.

Self-driving cars represent a high-stakes test of the powers of machine learning, as well as a test case for social learning in technology governance. Society is learning about the technology while the technology learns about society. Understanding and governing the politics of this technology means asking ‘Who is learning, what are they learning and how are they learning?’

Proponents of self-driving cars see machine learning as a way of compensating for human imperfection. Not only are humans unreliable drivers, they seem to be getting worse rather than better. After decades of falling road deaths in the US, due mostly to improved car design and safety laws, fatality rates have been increasing since 2010, probably because of phone-induced distraction. The rationalists’ lament is a familiar one. TS Eliot has the doomed archbishop Thomas Becket say in ‘Murder in the Cathedral’: ‘The same things happen again and again; Men learn little from others’ experience.’

Self-driving cars, however, learn from one another. Tesla’s cars are bought as individual objects, but commit their owners to sharing data with the company in a process called ‘fleet learning’. According to Elon Musk, the company’s CEO, ‘the whole Tesla fleet operates as a network. When one car learns something, they all learn it’, with each Autopilot user as an ‘expert trainer for how the autopilot should work’.
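
To make the idea concrete, fleet learning can be pictured as a centralized loop: individual cars log driving data, a company server pools those logs to update a single shared model, and every car then receives the new version. The sketch below is purely illustrative and hedged accordingly; the class names (Car, FleetLearner), the one-parameter ‘model’ and the update rule are my own assumptions for exposition, not Tesla’s actual architecture.

```python
# Minimal, hypothetical sketch of centralized 'fleet learning'.
# All names and the toy model are illustrative assumptions, not Tesla's system.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Car:
    """A single vehicle that logs (sensor_reading, driver_action) pairs."""
    vin: str
    log: List[Tuple[float, float]] = field(default_factory=list)
    model_version: int = 0

    def record(self, sensor_reading: float, driver_action: float) -> None:
        self.log.append((sensor_reading, driver_action))


class FleetLearner:
    """Central server: pools data from all cars, pushes back one shared model."""

    def __init__(self) -> None:
        self.version = 0
        self.weight = 0.0  # toy one-parameter model: action ≈ weight * reading

    def aggregate_and_update(self, fleet: List[Car]) -> None:
        # Pool every car's observations: raw experience flows to the company,
        # not directly to other drivers.
        data = [pair for car in fleet for pair in car.log]
        if not data:
            return
        # Least-squares fit of the single weight to the pooled data.
        num = sum(x * y for x, y in data)
        den = sum(x * x for x, _ in data)
        self.weight = num / den if den else self.weight
        self.version += 1
        # When one car 'learns' something, every car receives the update.
        for car in fleet:
            car.model_version = self.version
            car.log.clear()


if __name__ == "__main__":
    fleet = [Car(vin=f"CAR{i}") for i in range(3)]
    fleet[0].record(1.0, 0.9)  # only one car encounters a new situation...
    server = FleetLearner()
    server.aggregate_and_update(fleet)
    # ...but all cars end up on the same updated model version.
    print(server.weight, [car.model_version for car in fleet])
```

Even in this toy form, the design choice is visible: learning happens on the company’s server, with the fleet’s pooled experience, which is what makes the process both powerful and, as discussed below, privatized.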

The promise is that, with enough data, this process will soon match and then surpass humans’ abilities. The approach makes the dream of automotive autonomy seem seductively ‘solvable’. It also represents a privatization of learning.

As work by scholars such as Charles Perrow and Brian Wynne has revealed, technological malfunctions are an opportunity for the reframing of governance and the democratization of learning. The official investigations of and responses to the May 2016 Tesla crash represent a slow process of social learning.

The first official report of the May 2016 crash, from the Florida police, put the blame squarely on the truck driver for failing to yield the right of way. However, the circumstances of the crash were seen as sufficiently novel to warrant investigations by the National Transportation Safety Board (NTSB) and the National Highway Traffic Safety Administration (NHTSA). The NTSB is tasked with identifying the probable cause of every air accident in the US, as well as some highway crashes.

The NTSB’s preliminary report was matter-of-fact. It relates that, at 4:40pm on a clear, dry day, a large truck carrying blueberries crossed US Highway 27A in front of the Tesla, which failed to stop. The Tesla passed under the truck, shearing off the car’s roof. The collision cut power to the wheels and the car coasted off the road for 297 feet before hitting and breaking a pole, turning sideways and coming to a stop. Brown was pronounced dead at the scene. The truck was barely damaged.

The NHTSA saw the incident as an opportunity for a crash course in self-driving car innovation. Its Office of Defects Investigation wrote to Tesla demanding data on all of the company’s cars, instances of Autopilot use and abuse, customer complaints, legal claims, a log of all technology testing and modification in the development of Autopilot, and a full engineering specification of how and why Autopilot does what it does.

In January 2017, the NHTSA issued its report on the crash. The agency’s initial aim was to ‘examine the design and performance of any automated driving systems in use at the time of the crash’. The technical part of their report emphasized that the Tesla Autopilot was a long way from full autonomy. A second strand of analysis focused on what the NHTSA called ‘human factors’. The agency chose to direct its major recommendation at users: ‘Drivers should read all instructions and warnings provided in owner’s manuals for ADAS [advanced driver-assistance systems] technologies and be aware of system limitations’. The report followed a pattern, familiar in STS, of blaming sociotechnical imperfection on user error: humans, as anthropologist Madeleine Clare Elish has described, become the ‘moral crumple zone’.

The NTSB went further, recognizing the opportunity to learn. The Board sought to clarify that the Tesla Model S didn’t meet the technical definition of a ‘self-driving car’, but blamed the confusion on the company as well as the victim. Its final word on the probable cause of the Tesla crash added a concern with Autopilot’s ‘operational design, which permitted [the driver’s] prolonged disengagement from the driving task and his use of the automation in ways inconsistent with guidance and warnings from the manufacturer’. Tesla, in the words of the NTSB chair, ‘did little to constrain the use of autopilot to roadways for which it was designed’.

Dominant approaches to machine learning still represent a substantial barrier to governance. When the NTSB conducted its investigation, it found a technology that was dripping with data and replete with sensors, but offering no insight into what the car thought it saw or how it reached its decisions. The car’s brain remained largely off-limits to investigators. At a board meeting in September 2017, one NTSB staff member explained: ‘The data we obtained was sufficient to let us know the [detection of the truck] did not occur, but it was not sufficient to let us know why.’ The need to improve social learning goes beyond accident investigation. If policymakers want to maximize the public value of self-driving car technology, they should be intensely concerned about the inscrutability and proprietary nature of machine learning systems.

This post is an excerpt from the paper Machine learning, social learning and the governance of self-driving cars, to appear in Social Studies of Science. Jack Stilgoe is a senior lecturer in Science and Technology Studies at University College London.