The US National Transportation Safety Board (NTSB) has issued a preliminary report on the fatal March 18, 2018 ‘self-driving’ vehicle accident. The preliminary findings support our own assessment of the incident[1]. The report also includes additional details on the vehicle and its autonomous driving technology:
“Uber had equipped the test vehicle with a developmental self-driving system. The system consisted of forward- and side-facing cameras, radars, LIDAR, navigation sensors, and a computing and data storage unit integrated into the vehicle. Uber had also equipped the vehicle with an aftermarket camera system that was mounted in the windshield and rear window and that provided additional front and rear videos, along with an inward-facing view of the vehicle operator. In total, 10 camera views were recorded over the course of the entire trip.
The self-driving system relies on an underlying map that establishes speed limits and permissible lanes of travel. The system has two distinct control modes: computer control and manual control. The operator can engage computer control by first enabling, then engaging the system in a sequence similar to activating cruise control. The operator can transition from computer control to manual control by providing input to the steering wheel, brake pedal, accelerator pedal, a disengage button, or a disable button.” [2]
Two statements about the system and the role of the driver concerned us. The first reported that the vehicle’s “advanced driver assistance functions” are disabled while it is in autonomous driving (computer control) mode. The second described the driver’s monitoring duties and responsibilities during emergencies as follows:
“According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review.”[3]
The system’s autonomous driving mode is incompatible with known human performance limitations. Drivers who must actively monitor an interface while the vehicle is in motion become distracted and quickly lose situational awareness. These conditions are reflected in the driver’s behavior captured by cameras just before impact. According to the NTSB:
“The inward-facing video shows the vehicle operator glancing down toward the center of the vehicle several times before the crash. In a postcrash interview with NTSB investigators, the vehicle operator stated that she had been monitoring the self-driving system interface. The operator further stated that although her personal and business phones were in the vehicle, neither was in use until after the crash, when she called 911.”[4]
Implications
The NTSB's preliminary report supports our conclusion that the Uber driver could not have prevented the fatal accident. We believe that, under the prevailing conditions, the accident was essentially baked into the design of the autonomous driving system.
The accident raises broader questions about the increasing use of automation in safety-significant operations. While smart technologies and automation are growing more capable, their implications for users and operators are not well understood. This gap in human factors understanding and application is increasing safety risks. In this context, it’s unlikely that the tragic Phoenix-area accident will remain a rare, isolated event.
References
[1] Ozzie Paez, Dean Macris, Who’s responsible for Uber’s self-driving vehicle accident, June 15, 2018, Ozzie Paez Research, https://www.ozziepaezresearch.com/single-post/2018/06/15/UberSelfDrivingVehicleAccident
[2] National Transportation Safety Board (NTSB), Preliminary Report Highway HWY18MH010, adopted May 24, 2018, https://www.ntsb.gov/investigations/AccidentReports/Pages/HWY18MH010-prelim.aspx
[3] NTSB, Preliminary Report Highway, HWY18MH010.
[4] NTSB, Preliminary Report Highway, HWY18MH010.