Why courts need ‘explainable AI’ when self-driving cars crash

The first serious accident involving a self-driving car in Australia occurred in March this year, when a pedestrian suffered life-threatening injuries after being hit by a Tesla Model 3 which the driver claims was in “autopilot” mode.

In the United States, the highway safety regulator is investigating a series of accidents in which Teslas on autopilot crashed into first-responder vehicles with flashing lights during traffic stops.

A Tesla Model 3 collided with a stationary emergency response vehicle in the United States. NBC/YouTube

The decision-making processes of “self-driving” cars are often opaque and unpredictable (even to their manufacturers), so it can be hard to determine who should be held responsible for such accidents. However, the growing field of “explainable artificial intelligence” may help provide some answers.

Who is responsible when self-driving cars crash?

Even though self-driving cars are new, they are still machines that manufacturers make and sell. When they cause harm, we must ask whether the manufacturer (or software developer) has fulfilled its safety responsibilities.

The modern law of negligence comes from the case of Donoghue v Stevenson, in which a woman discovered a decomposing snail in her bottle of ginger beer. The manufacturer was found negligent, not because he was expected to predict or directly control the behaviour of snails, but because his bottling process was unsafe.

By this logic, manufacturers and developers of AI-based systems such as self-driving cars may not be able to predict and control everything the “autonomous” system does, but they can take measures to reduce risks. If their risk management, testing, auditing and monitoring practices are not good enough, they should be held accountable.

How much risk management is enough?

The difficult question will be “how much care and how much risk management is enough?” In complex software, it is impossible to test for every bug in advance. How will developers and manufacturers know when to stop?

Fortunately, courts, regulators, and technical standards bodies have experience in setting standards of care and liability for risky but useful activities.

Standards could be quite demanding, such as the European Union’s draft AI regulation, which requires risks to be reduced “as far as possible” without regard to cost. Or they could be more like Australian negligence law, which permits less stringent management of less likely or less severe risks, or where risk management would reduce the overall benefit of the risky activity.

Legal issues will be complicated by the opacity of artificial intelligence

Once we have a clear standard of risk, we need a way to enforce it. One approach could be to give a regulator powers to impose penalties (as the ACCC does in competition cases, for example).

Individuals harmed by AI systems must also be able to sue. In cases involving self-driving cars, lawsuits against manufacturers will be particularly important.

However, for these lawsuits to be effective, courts will need to understand the technical processes and standards of AI systems in detail.

Manufacturers often prefer not to disclose these details for commercial reasons. But the courts already have procedures in place to balance business interests with an appropriate amount of disclosure to facilitate litigation.

An even greater challenge may arise when AI systems themselves are opaque “black boxes”. For example, Tesla’s autopilot function relies on “deep neural networks”, a popular type of AI system in which even the developers can never be entirely sure how or why it arrives at a given outcome.

‘Explainable AI’ to the rescue?

Opening the black box of modern AI systems is the focus of a new wave of computer science and humanities scholars: the so-called “explainable AI” movement.

The goal is to help developers and end users understand how AI systems make decisions, either by changing how the systems are built or by generating explanations after the fact.

In a classic example, an AI system mistakenly classifies an image of a husky as a wolf. An “explainable AI” method reveals that the system was focusing on the snow in the background of the image, rather than the animal in the foreground.

(Right) A photo of a husky dog against a snowy background. (Left) The output of an “explainable AI” method, showing which parts of the image the system focused on.
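To make the idea concrete, here is a minimal sketch of one common after-the-fact explanation technique, occlusion sensitivity, written in Python. It assumes a hypothetical classify(image) function returning the model’s confidence that the image shows a wolf; it illustrates the general approach only, not the specific method used in the husky study or in any vehicle’s software.

```python
# Sketch of occlusion sensitivity: slide a blank patch over the image and
# record how much the "wolf" confidence drops. Regions that cause large drops
# are the regions the model relies on. `classify` is a hypothetical function
# supplied by the reader; it is not part of any vendor's real API.
import numpy as np

def occlusion_map(image, classify, patch=16, baseline=0.0):
    """Return a coarse heatmap of how much each patch of `image`
    contributes to the classifier's confidence."""
    h, w = image.shape[:2]
    original_score = classify(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # hide this region
            heatmap[i // patch, j // patch] = original_score - classify(occluded)
    return heatmap  # high values mark the regions driving the prediction
```

In the husky example, a map like this would light up over the snowy background rather than the dog, exposing that the system’s reasoning does not match what its developers or users assume.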

How this might be used in a lawsuit will depend on various factors, including the specific AI technology and the harm caused. A key concern will be how much access the injured party is given to the AI system.

The Trivago case

Our new research, analysing a recent major Australian court case, offers an encouraging glimpse of what this could look like.

In April 2022, the Federal Court fined global hotel booking company Trivago $44.7 million for misleading customers about hotel room rates on its website and in TV advertising, following a case brought by competition watchdog the ACCC. A critical question was how Trivago’s complex ranking algorithm chose the top-ranked offer for hotel rooms.

The Federal Court set up rules for discovery with safeguards to protect Trivago’s intellectual property, and both the ACCC and Trivago called expert witnesses to provide evidence explaining how Trivago’s AI system worked.

Even without full access to Trivago’s system, the ACCC’s expert witness was able to provide compelling evidence that the system’s behaviour was inconsistent with Trivago’s claim of giving customers the “best price”.

This shows how technical experts and lawyers together can overcome the opacity of AI in court cases. However, the process requires close collaboration and deep technical expertise, and is likely to be expensive.

Regulators can take steps now to simplify things in the future, such as requiring AI companies to appropriately document their systems.

The road ahead

Vehicles with various degrees of automation are becoming more common, and fully autonomous taxis and buses are being tested both in Australia and overseas.

Keeping our roads as safe as possible will require close collaboration between AI and legal experts, and regulators, manufacturers, insurers and users will all have roles to play.

This article by Aaron J. Snoswell, Postdoctoral Research Fellow, Computational Law and AI Accountability, Queensland University of Technology; Henry Fraser, Research Fellow in Law, Accountability and Data Science, Queensland University of Technology; and Rhyle Simcock, PhD Candidate, Queensland University of Technology, is republished from The Conversation under a Creative Commons licence. Read the original article.