Killer robots hit the road and the law has yet to catch up

[Image: Tesla has already gone beyond demonstrating its self-driving car to having such a vehicle travel across the United States. Reuters/Beck Diefenbach]

Much of the conversation about killer machines has understandably focused on unmanned military vehicles. Yet civilian robotic vehicles also present ethical and legal questions. These have been highlighted by Tesla’s recent crossing of the continental United States in a car that largely drove itself, and by the first Australian on-road trial of a driverless car, involving a Volvo SUV, on Saturday.

It’s worth noting that the artificial intelligence (AI) software that enabled the autopiloted journey across the US is freely available as an over-the-air software update to any Tesla car manufactured after September 2014. This means cars with this capability are already available to Australian consumers.

Until now, such robotic vehicles have largely been used for non-consumer or testing purposes. That is not least because of the risks they create when driven on public roads.

A human controller who supervised the Tesla journey, Alex Roy, reported:

There were probably three or four moments where we were on autonomous mode at [145 kilometres] an hour … If I hadn’t had my hands there, ready to take over, the car would have gone off the road and killed us.

The fault, Roy said, was his “for setting a speed faster than the system’s capable of compensating”.

Human error is perhaps the most problematic issue facing autonomous cars. That applies not only to drivers but also to other road users. Roads are hardly isolated places, nor are they restricted to car use.

AI engineers point out that this makes designing a fail-safe system nearly impossible. Humans are unpredictable, even (especially) behind the wheel. It is in this respect that things become much more challenging philosophically and legally. If the Tesla car had “gone off the road” at 145 km/h, it might have killed not just those on board but also others in its path (not to mention the harm to animals and property).

Tesla, meet the trolley problem

Philosopher Philippa Foot’s thought experiment, the “trolley problem”, is relevant here. The problem posed is whether it is acceptable to divert a trolley-car that is careering towards five unsuspecting people, who will inevitably be killed, on the understanding that diverting the trolley will result in the death of only one person.

Judith Thomson later posited a supplementary scenario:

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

People commonly respond that they would divert the trolley, but not push the fat man. It’s largely a question of the form of direct and indirect action we take and the proximity between the act and the result – the “causal chain”, as lawyers call it.

Yet both decisions have the same result and probably carry the same legal consequences.

In 2008, my co-author and I raised the trolley problem in the context of (unmanned) cars. We asked readers to consider:

… a child on a bicycle darting out onto a busy suburban road. The human driver automatically swerves to miss the child, but in doing so hits a school bus, causing more fatalities than if they had continued on their ordinary path and hit the child on the bike.

Clearly, that decision involves a reaction, not a direct action. Unlike the trolley problem, the driver could not weigh up both options properly or exercise real, prospective choice. This would mean the legal consequences would be different (not murder or manslaughter but more likely negligence).

If the matter had gone to court, the legal issue would then have been what a “reasonable ordinary driver” would have done in those circumstances. That would have taken into account ordinary, instinctive human reactions in a sudden, high-stress situation. Perhaps both choices would have been legally acceptable, because in retrospect we are more forgiving of split-second decisions.

Decisions made in advance alter the legal calculus

A robotic vehicle in the same situation is much more akin to the trolley problem, because humans have to make the decision well in advance. Engineers program an autonomous vehicle to drive with all the variables that entails, and to act in specific ways depending on those variables.

That programming must necessarily take into account how to act when something (including a child) darts out onto the road. A sufficiently powerful computer system would be able to evaluate the various options in milliseconds and, if unable to avoid casualties, perhaps choose the path of least destruction.
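To make that concrete, here is a minimal sketch in Python of what a “path of least destruction” chooser might look like. Everything in it, the Option structure, the manoeuvre names and the casualty figures, is a hypothetical illustration rather than any manufacturer’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One candidate manoeuvre the vehicle could take (hypothetical)."""
    name: str
    expected_casualties: float  # estimated deaths or injuries on this path
    property_damage: float      # estimated damage, arbitrary units

def least_destructive(options: list[Option]) -> Option:
    # Rank by expected casualties first, property damage second. This one
    # line encodes a value judgement: any casualty outweighs any amount of
    # property damage. Who gets to decide that ordering is the question.
    return min(options, key=lambda o: (o.expected_casualties, o.property_damage))

# The article's swerve-or-continue scenario, with invented numbers:
options = [
    Option("continue and hit the cyclist", expected_casualties=1.0, property_damage=0.1),
    Option("swerve into the school bus", expected_casualties=4.0, property_damage=1.0),
    Option("emergency brake in lane", expected_casualties=0.4, property_damage=0.2),
]
print(least_destructive(options).name)  # -> emergency brake in lane
```

The point is not the arithmetic but that the ranking rule, including how casualties trade off against property damage, must be written down in advance.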

This means that philosophical conundrums like the trolley problem become legal ones, because now we have programmable computers and not unpredictable humans at the helm.

The Tesla situation above can be expressed as follows: [Human Error] + [Computer decision/fault] = [Risk to humans]. Engineers will need to address the [Computer decision/fault] term so as to cancel out the [Risk to humans]. They will also have to consider multiple [Risk to humans] permutations and compensate for those too.
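Treated as an engineering task, that formula amounts to enumerating the permutations and checking that each one is compensated for. The sketch below, again in Python, does exactly that; the error categories, responses, risk numbers and acceptability threshold are all invented for illustration:

```python
from itertools import product

# Hypothetical categories of human error and of the controller's responses.
human_errors = ["speed set above system limits", "late manual takeover", "no supervision"]
responses = ["stay the course", "controlled slowdown", "hand back control"]

def risk_to_humans(error: str, response: str) -> float:
    """Toy risk model: base risk of the error, scaled down by the response."""
    base = {"speed set above system limits": 0.8,
            "late manual takeover": 0.5,
            "no supervision": 0.9}[error]
    mitigation = {"stay the course": 0.0,
                  "controlled slowdown": 0.6,
                  "hand back control": 0.3}[response]
    return base * (1 - mitigation)

ACCEPTABLE = 0.35  # invented threshold for tolerable residual risk

# Check every [Human Error] x [Computer decision] permutation.
for error, response in product(human_errors, responses):
    risk = risk_to_humans(error, response)
    verdict = "ok" if risk <= ACCEPTABLE else "needs redesign"
    print(f"{error} + {response}: risk {risk:.2f} ({verdict})")
```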

[Image: If a self-driving car is programmed to respond in a way that leads to death or injury, questions of legal responsibility can be complicated. Reuters/Beck Diefenbach]

Given it’s foreseeable that a speeding AI car may injure someone in its path, how should we program it to behave? If it is likely to cause injury on more than one possible course, which one should it take?

If we refuse to address these questions, are we still responsible by omission because we foresaw the problem and did nothing? And just who is responsible? The driver? The engineer? The programmer? The company that produces the car? Just when was the decision to, metaphorically, pull the lever made?

All these questions require legal direction and guidance on how robotic cars should react to a range of possible situations. This applies not in a retrospective, reactive way, but through the prospective, active decision-making matrix of the trolley problem. Hence, we argued:

Should legislators not choose to set out rules for such eventualities, someone will have to, or at least provide the AI with sufficient guidance to make such decisions by itself. One would expect that the right body to make such value judgements would be a sovereign legislative body, not a software engineer.

Still waiting for legislators to respond

[Image: In the absence of legislative responses, it has been left to Tesla’s Elon Musk and his engineers to weigh up the ethical and legal dilemmas. Reuters/Rashid Umar Abbasi]

That was 2008. To date, little has been done to address those problems. They have been left to software engineers and company directors like Elon Musk.

Some jurisdictions, particularly in the US, have begun to examine the safety of unmanned vehicles on the roads, but certainly not at this level. Australia lags significantly behind, despite the availability of Tesla hardware and software.

Most of our national regulatory focus has been on military applications of unmanned vehicles and, to a lesser extent, on the regulation of aerial drones. Road laws, which are the general province of the states and territories, are largely untouched. The general legal proposition that a human must be “in control” of a vehicle continues to apply, effectively limiting the use of autopiloted cars.

This position is unlikely to be sustained. Apple, Google, Audi and Nissan, among others, are rushing to bring autonomous cars to market. Technology-hungry Australians will want them.

Legislatures need to act, and the public needs to deliberate on appropriate regulatory action. The conversation about the ethical and legal use of unmanned civilian vehicles needs to start now.

Brendan Gogarty does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.

Author: Brendan Gogarty

Read more http://theconversation.com/killer-robots-hit-the-road-and-the-law-has-yet-to-catch-up-49735
