Abstract
Current "safe enough" Autonomous Vehicle (AV) metrics focus on aggregate safety outcomes, such as net losses across a deployed fleet using driving automation compared to net losses expected from human-driven vehicles. While such metrics can provide an important report card for measuring the long-term success of the social choice to deploy driving automation systems, they offer weak support for near-term deployment decisions based on safety considerations. Potential risk redistribution onto vulnerable populations remains problematic even if net societal harm is reduced to create a positive risk balance. We propose a baseline comparison, applied case by case to each actual loss event proximately caused by an automated vehicle, against the outcome expected in that crash scenario from an attentive and unimpaired "reasonable human driver." If the automated vehicle matches the risk mitigation behavior of the hypothetical reasonable human driver, no liability attaches for AV performance. Liability attaches if AV performance falls short of the risk mitigation the law expects of a human driver. This approach recognizes the importance of tort law in incentivizing developers to continually minimize driving negligence by computer drivers, providing a way to close gaps left by purely statistical approaches.