The US National Transportation Safety Board (NTSB) has released the preliminary findings of its investigation into the fatal accident in March involving an autonomous Uber test vehicle and a pedestrian.
Uber announced on 23 May that it was formally ending its driverless test programme in Arizona, where the accident took place.
While the investigators, whose work is ongoing, have left ‘probable cause’ blank, their initial report makes clear that the accident stemmed from poor object recognition, flawed system design, and the fact that Uber’s software had disengaged Volvo’s own emergency braking and driver-assistance technologies. These problems were compounded by the safety driver not seeing the woman until it was too late.
Of most concern to Uber, in the context of the pedestrian’s death, will be the finding that “all aspects” of the self-driving system were operating normally. The safety driver – who was criticised in March reports for not looking at the road until the moment before impact – was monitoring Uber’s self-driving interface, says the report, and not ignoring her responsibilities or checking her phone, as some commentators had suggested.
The report adds, “According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the centre stack of the vehicle dash and tagging events of interest for subsequent review.”
According to the safety driver, an unnamed 44-year-old woman, that is precisely what she was doing.
Many technologies = one fatality
The accident occurred just before 10pm on 18 March when a modified Volvo XC90 running under computer control struck and killed 49-year-old Elaine Herzberg as she crossed Mill Avenue in Tempe, Arizona, with her bicycle.
According to the report, Uber had equipped its test vehicle with a “developmental self-driving system, consisting of forward- and side-facing cameras, radar, LIDAR, navigation sensors, and a computing and data storage unit integrated into the vehicle”, along with an aftermarket camera system that provided multiple views of the road and driver.
“The self-driving system relies on an underlying map that establishes speed limits and permissible lanes of travel,” continues the report. “The system has two distinct control modes: computer control and manual control. The operator can engage computer control by first enabling, then engaging the system in a sequence similar to activating cruise control. The operator can transition from computer control to manual control by providing input to the steering wheel, brake pedal, accelerator pedal, a disengage button, or a disable button.”
The vehicle was also factory equipped by Volvo with several driver-assistance systems. These included a collision avoidance function with automatic emergency braking, known as City Safety, as well as functions for detecting driver alertness and road sign information.
However, all of these Volvo functions are “disabled when the test vehicle is operating in computer control, but are operational when the vehicle is operated in manual control”, explains the report.
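The mode logic the NTSB report describes can be sketched as a simple state machine. This is a hypothetical illustration, not Uber’s actual code: the class name and methods are invented, and only the behaviour quoted in the report is modelled – enable-then-engage to enter computer control, any manual input to leave it, and Volvo’s factory driver-assistance functions active only under manual control.

```python
# Minimal sketch (illustrative only, not Uber's implementation) of the
# control modes described in the NTSB preliminary report.

class TestVehicleModes:
    def __init__(self):
        self.enabled = False           # system enabled but not yet driving
        self.computer_control = False  # True = self-driving, False = manual

    def enable(self):
        # Step 1: the operator enables the self-driving system.
        self.enabled = True

    def engage(self):
        # Step 2: the operator engages computer control, in a sequence
        # the report compares to activating cruise control.
        if self.enabled:
            self.computer_control = True

    def manual_input(self):
        # Input to the steering wheel, brake pedal, accelerator pedal,
        # disengage button, or disable button returns the vehicle
        # to manual control.
        self.computer_control = False

    @property
    def volvo_assist_active(self):
        # Per the report, the factory systems (e.g. City Safety automatic
        # emergency braking) are disabled under computer control.
        return not self.computer_control


car = TestVehicleModes()
car.enable()
car.engage()
print(car.computer_control, car.volvo_assist_active)  # True False
car.manual_input()  # e.g. the safety driver grabs the wheel
print(car.computer_control, car.volvo_assist_active)  # False True
```

The key consequence for this accident is visible in the last property: while the vehicle is under computer control, the factory emergency braking is off by design.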
How the crash happened
According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about six seconds before impact, when the vehicle was travelling at 43 mph. By Internet of Business’ calculations, this means Herzberg was approximately 115 metres away when the system detected her.
The report says that as the vehicle and pedestrian paths converged, the self-driving software first classified the pedestrian as “an unknown object”, then as “a vehicle”, and then “as a bicycle with varying expectations of future travel path”.
At 1.3 seconds before impact (when the car would have been roughly 25 metres from Herzberg), the self-driving system determined that an emergency braking manoeuvre was needed to avoid collision.
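These distances follow from the timings in the report. Assuming a constant speed of 43 mph (the speed given at first detection; the actual speed 1.3 seconds before impact may have differed slightly), the arithmetic works out as:

```python
# Back-of-envelope check of the distances implied by the NTSB timeline,
# assuming constant speed (an approximation).
MPH_TO_MS = 0.44704            # metres per second in one mile per hour
speed_ms = 43 * MPH_TO_MS      # ≈ 19.2 m/s

detect_distance = speed_ms * 6.0   # distance at first radar/LIDAR detection
brake_distance = speed_ms * 1.3    # distance at the emergency-braking decision

print(f"{detect_distance:.0f} m")  # 115 m
print(f"{brake_distance:.0f} m")   # 25 m
```

At roughly 19 metres per second, even the 1.3-second window represented some 25 metres of travel – time in which an automated emergency brake, had one been active, could at least have shed speed.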
Unfortunately for Herzberg, emergency braking is not enabled in Uber’s test vehicles when they are operating under computer control; instead, safety drivers are expected to intervene and take action themselves. However, according to the NTSB, the system is not designed to alert the driver, who on this occasion was monitoring the diagnostic screen in the centre dash.
Uber’s system data showed that the safety driver attempted to take action less than one second before impact by grabbing the steering wheel.
Internet of Business says
Elaine Herzberg’s death was an avoidable tragedy, and was neither an inevitable loss on the long journey to full vehicle autonomy, nor a minor statistic compared to the tens of thousands who die on the road in the US every year.
Unless the NTSB uncovers new information before its final report, it’s clear that an interface that obliges safety drivers to look away from the road, poor object recognition, no emergency braking alert, and the disengagement of safety systems that might have avoided the accident, combined to take her life.
While the report implies that Herzberg took a personal risk by crossing the road in a poorly lit area, while wearing dark clothing and wheeling a bike with no side reflectors (but with front and rear lights activated), she should not be seen as somehow responsible for her own death, any more than a victim of violent crime is responsible for their attacker’s actions.
To succeed, autonomous systems – especially those informed by LIDAR and other sensors – must be able to detect people and objects regardless of light conditions, and to improve on human sight and judgement. And they need to do so in any weather, on any type of road, and with unexpected events taking place around them.
Machines need to fit into the messy, complex human world. It’s not our job to fit into the realm of the machines.