As self-driving cars and AI systems become more prevalent, questions of responsibility, transparency, and regulation are intensifying, prompting calls for clearer standards inspired by the evolution of vehicle safety laws.

A novel’s opening image of a self-driving car ploughing into traffic captures a legal and moral knot that has only tightened as automation spreads across transport and other parts of life. High-profile lawsuits and investigations have repeatedly forced the question of blame into the courtroom: juries and regulators are now weighing whether manufacturers, software designers or the human occupants bear responsibility when partially autonomous systems fail. According to reporting on recent cases, manufacturers have been found partly liable for crashes involving assisted driving systems, prompting large damage awards and renewed calls for clearer regulation. (Sources: AP; Time)

Designers and vendors are wrestling with the limits of their products even as they rush them to market. Regulators and safety investigators have criticised inadequate safeguards in many early deployments of driver-assist and self-driving technology, and lawsuits allege that marketing sometimes overstates capability while failing to make operational constraints clear to users. Industry incidents have also revealed failures of transparency and reporting that regulators say must be addressed before broader roll-outs resume. (Sources: AP; Axios; AP)

That legal friction is instructive for policymakers debating how to govern more general-purpose artificial intelligence. Past tech regulation has favoured a distributed model of accountability: governments set rules and standards, manufacturers must certify compliance, and users face conditional privileges and duties. But AI complicates that bargain because models are updated continuously, embedded across services and often opaque even to their creators. Recent enforcement actions against autonomous vehicle firms illustrate the difficulty of applying traditional compliance frameworks to software that learns and shifts over time. (Sources: AP; AP)

The role of the human operator remains central in many failure narratives. Investigations of crashes involving assisted-driving modes repeatedly show inattentive or distracted people in control at the moment of impact, underscoring that delegating responsibility to software without adequate human–machine interfaces or clear operational limits creates risk. Those patterns underline why any regulatory approach to AI must combine obligations on creators with measures that make deployment conditions and user responsibilities explicit and enforceable. (Sources: AP; Axios)

History offers a useful analogy. Automobiles were once treated with laissez-faire optimism until mounting deaths forced the creation of licensing regimes, safety standards and a culture of regulated behaviour. That shared framework of standards for vehicles, obligations on makers and rules for drivers did not eliminate harm, but it distributed duties in ways that reduced it. Policymakers should study that evolution while recognising AI’s added complexity: unlike cars, models can be copied, fine-tuned and redeployed globally in hours. (Sources: Time; Axios)

Practical steps flow from these lessons. Greater transparency about capabilities and limits, meaningful independent testing and enforced incident reporting would make harms easier to spot and remedy. Voluntary trust marks that signal human-authored content or audited systems can help consumers, but experience with automated transport shows voluntary measures alone often fall short; regulatory teeth and litigation incentives have proven decisive in driving corporate change. (Sources: AP; AP)

Responsibility for the harms and benefits of AI will ultimately be shared across designers, deployers, regulators and users. As one of the characters in Bruce Holsinger’s Culpability observes: "AIs are not aliens from another world. They are things of our all-too-human creation. [They] will only be as moral as we design them to be. Our morality in turn will be shaped by what we learn from them and how we adapt accordingly." Until societies set clearer limits and accountabilities, widespread deployment is a licence to err rather than a guarantee of progress. (Sources: AP; AP)

Source Reference Map

Inspired by headline at: [1]

Sources by paragraph:

  • Paragraph 1: [2], [7]
  • Paragraph 2: [6], [4], [5]
  • Paragraph 3: [5], [2]
  • Paragraph 4: [3], [6]
  • Paragraph 5: [6], [7]
  • Paragraph 6: [2], [5]
  • Paragraph 7: [2], [4]

Source: Noah Wire Services

Verification / Sources

  • https://www.theguardian.com/commentisfree/2026/mar/11/ai-artificial-intelligence-regulation-society - Please view link - unable to access data
  • https://apnews.com/article/c342f2716b1ec4e9ede09b8e958751b7 - In August 2025, a Miami jury found Tesla partly responsible for a fatal 2019 crash involving its Autopilot system, awarding over $240 million in damages. The incident highlighted concerns about the safety and marketing of semi-autonomous driving technologies, prompting discussions on the legal responsibilities of manufacturers and the need for clearer regulations in the autonomous vehicle industry.
  • https://apnews.com/article/bc96e19cad9b5e2de5d13846b0f744c7 - In January 2024, a Tesla operating in 'self-drive' mode collided with a parked police vehicle in California. The driver admitted to using the system and being distracted by his phone. This incident underscores the importance of driver attentiveness and the limitations of current autonomous driving technologies, raising questions about the adequacy of existing safety measures and regulations.
  • https://apnews.com/article/688d6a7bf3d4ed9d5292084b5c7ac186 - In 2023, a lawsuit was filed against Tesla following a fatal crash in Colorado involving a Model 3 operating on Autopilot. The suit alleges that Tesla's marketing of its partially automated driving system is misleading and that Autopilot contributed to the driver's death by failing to maintain control of the vehicle, highlighting concerns about the safety and transparency of autonomous vehicle technologies.
  • https://apnews.com/article/836c944e5bb7a877a302e135bd90007d - In 2024, General Motors' autonomous vehicle division, Cruise, agreed to pay a $1.5 million fine for failing to fully report a crash involving a pedestrian. The incident led to the suspension of Cruise’s driverless operations in San Francisco, emphasizing the need for stringent reporting and regulatory compliance in the development and deployment of autonomous vehicles.
  • https://www.axios.com/2017/12/15/teslas-safeguards-lacking-in-self-driving-car-crash-1513305444 - A National Transportation Safety Board investigation into a fatal 2016 crash involving a partially automated Tesla found the company's safeguards insufficient. The incident underscored the limitations of self-driving technology and the necessity of human attentiveness, highlighting the need for more rigorous safety measures and oversight by manufacturers.
  • https://time.com/5205767/uber-autonomous-car-crash-arizona/ - In March 2018, an autonomous Uber vehicle fatally struck a woman in Tempe, Arizona, marking the first recorded pedestrian death involving a self-driving car. The incident led to Uber suspending its self-driving vehicle operations in multiple cities, bringing renewed scrutiny to the safety of autonomous vehicle technology and the regulatory frameworks governing their deployment.

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 10

Notes: The article was published on 11 March 2026, making it highly current. No evidence of recycled or outdated content was found. The narrative presents original analysis without significant overlap with other recent publications.

Quotes check

Score: 9

Notes: The article includes direct quotes from Bruce Holsinger's book 'Culpability' and references to other sources. While the quotes are attributed, the absence of direct links to the original sources makes independent verification challenging.

Source reliability

Score: 10

Notes: The article is published by The Guardian, a reputable major news organisation known for its journalistic standards. The sources cited, including Bruce Holsinger's book and other publications, are credible. However, the reliance on secondary references without direct access to the original materials slightly diminishes the overall reliability.

Plausibility check

Score: 9

Notes: The claims made in the article align with current discussions on AI regulation and its societal impacts. The references to Bruce Holsinger's book and other sources are plausible and relevant. However, the lack of direct access to the original sources for some claims introduces a degree of uncertainty.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary: The article is current, published by a reputable source, and its claims are plausible and supported by credible references. However, the lack of direct access to some original sources introduces a degree of uncertainty, which the medium confidence reflects.