The rise of self-driving cars1 has fundamentally disrupted traditional notions of fault in traffic incidents. Where once the actions of human drivers determined liability, today the “driver” is often software, sensors, and algorithms. As deployments expand in cities like Los Angeles, Miami, Phoenix, and San Francisco, courts, regulators, and insurers grapple with assigning responsibility when a self-driving car crashes or commits a traffic violation. Absent a comprehensive federal liability statute, the default remains patchwork state law and conventional tort theories. This GT Advisory examines the core legal issues, recent developments, and the potential path forward in self-driving liability, federal and state regulation, insurance coverage, and the evolving legal standards governing self-driving cars and corporate accountability.
From Human Negligence to Product Liability
Under conventional tort law, automobile accidents involving driver error not attributable to product defects trigger negligence claims – where an injured plaintiff must prove duty, breach, causation, and damages. The at-fault driver (or their insurer) typically bears responsibility. Self-driving cars upend this paradigm. When no human is in control (or even supervising), liability may shift toward manufacturers, software developers, fleet operators, and component suppliers under strict or negligence-based product-liability theories.
Plaintiffs in cases involving self-driving cars may argue design defects (e.g., algorithms failing to predict pedestrian behavior), manufacturing defects (e.g., faulty sensors), or failure to warn (inadequate disclosure of operational design domain limitations). Courts increasingly treat the ADS as the “product,” subjecting it to the same standards applied to defective brakes or airbags. Depending on the SAE level of automated driving, multiple parties may share fault. In Level 4 deployments (where the vehicle handles all driving tasks within specific operating zones without human intervention), the vehicle OEM (original equipment manufacturer), the ADS provider, or the remote operator could bear liability. Level 3 systems (where the vehicle drives itself but a human must remain ready to intervene) may create murky handoff scenarios: if the system issues a takeover request too late or the human fails to respond, both the manufacturer and the “driver” may face exposure. Some automakers have retreated from Level 3 precisely because of this liability risk.
The Federal Landscape: Safety First, Liability Later
Congress has yet to enact a dedicated federal liability statute for self-driving cars. The National Highway Traffic Safety Administration (NHTSA) mandates crash reporting and investigates incidents, but leaves fault allocation to courts and states.
The SELF DRIVE Act of 2026 (H.R. 7390), introduced Feb. 5, 2026, by Rep. Robert E. Latta (R-OH) and advanced through subcommittee on a party-line vote of 12 to 11, represents the most significant federal effort to date. It would strengthen NHTSA’s authority, require manufacturers to submit detailed “safety cases,” update Federal Motor Vehicle Safety Standards (FMVSS) for driverless designs, and preempt conflicting state manufacturing bans. Crucially, however, the bill would not create new liability rules or shield compliant manufacturers from common-law suits. Its focus remains on safety certification and national uniformity, not tort reform. As of this writing, the measure remains in the House Energy & Commerce Committee and will likely require amendments to advance through the full committee. The Senate’s Stay in Your Lane Act, introduced by Sens. Edward Markey (D-Mass.) and Richard Blumenthal (D-Conn.) in December 2025, stands in contrast to the SELF DRIVE Act, emphasizing safety and operational limits over rapid deployment and industry growth.
Without federal preemption on liability, plaintiffs continue to file product liability claims based on state laws. NHTSA’s March 2026 public meeting underscored ongoing data-collection priorities but offered no new fault-allocation guidance.
State-Level Accountability: California’s Bold Experiment
States have filled the vacuum with innovative measures. California, home to the largest self-driving car testing fleets, leads the way. Effective July 1, 2026, CA Assembly Bill 1777 (Vehicle Code § 38752) authorizes peace officers to issue “Notices of Autonomous Vehicle Noncompliance” directly to manufacturers or operators for any observed Vehicle Code or local traffic ordinance violation committed while the ADS is engaged. Infractions such as illegal U-turns, blocking intersections, or passing stopped school buses may now trigger corporate accountability rather than citations to nonexistent drivers. The law also mandates two-way voice communication devices for remote operators, a dedicated emergency-response hotline, and the ability to geofence and relocate vehicles within two minutes of law-enforcement requests. These notices feed into DMV permitting decisions, incentivizing safety improvements.
Several states have imposed heightened insurance minimums on companies deploying self-driving cars. California, Nevada, and Arizona, for example, require more than $5 million in coverage for driverless testing, while Texas, Florida, and Illinois require heightened “commercial-level coverage” without specifying an amount.
Insurance, Practical Challenges, and the Road Ahead
Liability’s migration from drivers to corporations has the potential to reshape insurance markets. Personal auto policies may shrink while commercial product liability and cyber policies may expand. Fleet operators now carry multimillion-dollar coverage; some insurers partner with AV firms for usage-based risk modeling. Repair costs can soar due to specialized sensors, further complicating claims.
Key unresolved issues persist: ethical dilemmas of the “trolley problem” variety rarely reach court but influence safety-case design; data privacy questions surround black-box recorders; and cybersecurity vulnerabilities could create novel negligence claims. Scholars debate introducing strict liability or a “reasonable computer driver” standard akin to the reasonable human driver test. Some courts have begun to weigh in, with certain rulings imposing strict liability, but legislatures have not yet adopted either standard.
The outlook remains uncertain. Passage of a robust SELF DRIVE Act or successor legislation could bring uniformity on safety standards while leaving liability to evolving case law. Until then, manufacturers face high-stakes litigation risk, states experiment with direct enforcement tools, and victims navigate complex multi-defendant lawsuits. For Californians, AB 1777’s July 2026 rollout will mark a tangible shift: violations once ignored may now trigger corporate citations and heightened scrutiny.
In sum, self-driving technology promises safer roads by eliminating human error, yet the law lags. Liability determinations may continue to be litigated case-by-case under familiar product-liability principles until Congress or the states craft clearer rules. Stakeholders such as engineers, insurers, policymakers, and trial lawyers must collaborate to balance innovation with accountability. The next major crash or legislative milestone could accelerate that evolution; until then, driverless does not mean liability-less.
1 Formally autonomous vehicles (AVs) equipped with automated driving systems (ADS) at Society of Automotive Engineers (SAE) Levels 3 and above. SAE levels range from Level 0 (no driving automation) to Level 5 (full driving automation), defining how much control the vehicle has versus the driver.