Jim Dempsey, Senior Policy Adviser at the Stanford Cyber Policy Center, discusses his proposal for a software liability regime that shifts liability onto those who should be securing their software. Topics include legal theories of liability, a process-based safe harbor, the certification approach, defining software liability standards, design flaws and liability, and the need for quick action in policy-making.
The proposed software liability regime takes a rules-based approach to establish per se liability for specific flaws, incentivizing developers to eliminate them and improve software security.
To address the complexity of software, a liability regime should also cover design flaws, adopting a defects analysis approach to determine liability for flaws that may not be explicitly listed but are considered unreasonably dangerous.
Deep dives
Defining the Floor: Minimum Standard of Care for Software
The proposed liability regime for software development starts with a rules-based approach that defines a floor: the minimum legal standard of care for software. This floor targets specific product features or behaviors that should be avoided, such as default passwords, path traversal vulnerabilities, and buffer overflows. Because these are known weaknesses commonly exploited by attackers, liability attaches whenever a product ships with one of them. The goal is to create per se liability for these specific flaws, incentivizing developers to eliminate them and improve software security.
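To make the idea of a per se flaw concrete, here is a minimal, hypothetical Python sketch of a path traversal weakness of the kind such a floor might cover, alongside a hardened version. The file-serving function, the web root path, and the function names are illustrative assumptions, not anything specified in the proposal or the episode.

```python
import os

BASE_DIR = "/srv/app/public"  # hypothetical web root used for illustration

def read_file_vulnerable(user_path: str) -> bytes:
    # Path traversal: a request like "../../etc/passwd" escapes BASE_DIR
    # because the user-supplied path is joined and opened without validation.
    with open(os.path.join(BASE_DIR, user_path), "rb") as f:
        return f.read()

def read_file_hardened(user_path: str) -> bytes:
    # Resolve the requested path and refuse anything outside the web root.
    full_path = os.path.realpath(os.path.join(BASE_DIR, user_path))
    if not full_path.startswith(BASE_DIR + os.sep):
        raise PermissionError("path escapes the permitted directory")
    with open(full_path, "rb") as f:
        return f.read()
```

Under a rules-based floor, shipping something like the first function would itself be enough to attach liability, without any broader inquiry into the developer's overall practices.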
Analyzing Design Flaws for Liability
To address the complexity and dynamic nature of software, focusing solely on a list of coding weaknesses or flaws is not sufficient. A liability regime should also cover design flaws that are not easily captured in a specific list. Drawing on the principles of products liability law, a defects analysis approach can be adopted: it asks whether a design flaw is actually exploited and causes actionable damage. Case-by-case adjudication is then necessary to determine liability for design flaws that are not explicitly listed but are nonetheless unreasonably dangerous.
Safe Harbor and Process-Focused Liability
To ensure that liability is not unlimited or unpredictable, a safe harbor is proposed for developers who follow robust coding practices. This safe harbor shields developers from liability for hard-to-detect flaws that go beyond the floor of specific weaknesses. The emphasis here is on following secure software development processes, such as conducting fuzz testing, performing static analysis of code, and maintaining provenance information. By adhering to these practices, developers can benefit from the safe harbor protection, encouraging the adoption of more secure coding practices.
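As a rough illustration of one process the safe harbor rewards, the sketch below shows a tiny random-input fuzz harness in Python. The parse_record target, its "key=value" input format, and the iteration count are hypothetical stand-ins for whatever component a developer would actually test.

```python
import random

def parse_record(data: bytes) -> dict:
    # Hypothetical target: parses "key=value" pairs separated by semicolons.
    record = {}
    for field in data.decode("utf-8", errors="replace").split(";"):
        key, _, value = field.partition("=")
        record[key.strip()] = value.strip()
    return record

def fuzz(iterations: int = 10_000, seed: int = 0) -> None:
    # Feed the parser random byte strings and report any input that crashes it.
    rng = random.Random(seed)
    for i in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_record(blob)
        except Exception as exc:  # a crash is a finding worth triaging
            print(f"iteration {i}: input {blob!r} raised {exc!r}")

if __name__ == "__main__":
    fuzz()
```

Real-world practice would rely on coverage-guided fuzzers rather than this toy loop, but the structure is the same: generate inputs, run the target, and record failures so they can be fixed before release.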
Incremental Progress Over Perfection
The proposed liability regime aims for incremental progress in software security, acknowledging that perfection is not attainable. The focus is on making meaningful improvements and addressing clear flaws that are already known. By starting with a floor of specific weaknesses, incentivizing their elimination, and providing a safe harbor for robust coding practices, the goal is to create a liability framework that drives developers to produce more secure software. The emphasis is on taking action to improve software security rather than waiting for a perfect solution.
Software liability has been dubbed the “third rail of cybersecurity policy.” But the Biden administration’s National Cybersecurity Strategy directly takes it on, seeking to shift liability onto those who should be taking reasonable precautions to secure their software.
Lawfare Senior Editor Stephanie Pell sat down with Jim Dempsey to discuss his proposal. They talked about the problem his paper is seeking to solve, what existing legal theories of liability can offer a software liability regime and where they fall short, and his three-part definition of software liability, which involves a rules-based floor and a process-based safe harbor.