Daily Bulletin

  • Written by The Conversation
[Image: "I didn't do it!" Jiuguang Wang/flickr, CC BY-SA]

Robots' involvement in human deaths is nothing new. The recent death of a man who was grabbed by a robot and crushed against a metal plate at a Volkswagen factory in Baunatal, Germany, attracted extensive media attention. But it is strikingly similar to one of the first recorded cases of a death involving an industrial robot, 34 years ago.

These incidents have happened before and will happen again. Even if safety standards continue to rise and the chance of an accident in any given human-robot interaction goes down, such events will become more frequent simply because of the ever-increasing number of robots.
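To see why, consider a rough back-of-the-envelope calculation. The short Python sketch below uses purely hypothetical figures (the probabilities and interaction counts are illustrative assumptions, not real safety statistics): even if the per-interaction accident probability halves, a tenfold growth in human-robot interactions still multiplies the expected number of accidents fivefold.

    # Illustrative sketch with hypothetical numbers, not real safety statistics.
    # Expected accidents = per-interaction accident probability * interactions.

    p_before = 1e-8          # assumed accident probability per interaction
    n_before = 1e9           # assumed human-robot interactions per year

    p_after = p_before / 2   # safety improves: probability halves
    n_after = n_before * 10  # but deployment grows tenfold

    print(p_before * n_before)  # 10.0 expected accidents per year
    print(p_after * n_after)    # 50.0: rarer per interaction, yet more overall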

This means it is important to understand this kind of incident properly, and a key part of doing so is using accurate and appropriate language to describe them. Although there is a sense in which it is legitimate to refer to the Baunatal incident as a case of “robot kills worker”, as many reports have done, it is misleading, verging on the irresponsible, to do so. It would be much better to express it as a case of “worker killed in robot accident”.

Admittedly, putting it that way isn’t as eye-grabbing, but that’s precisely the point. Despite what sci-fi might encourage us to believe, and whatever may happen in the far future, robots currently lack what we consider real intentions, emotions and purposes. And, contrary to recent alarmist claims, they are not going to acquire those capacities in the near future either.

They can only “kill” in the sense that a hurricane (or a car, or a gun) can kill. They can’t kill in the sense that some animals can, let alone in the human sense of murder. Yet murder is likely to be what springs to most people’s minds when they read “robot kills worker”.

High stakes

Insisting on getting this language right isn’t an academic exercise in pedantry. The stakes are high. For one thing, an unwarranted fear of robots could lead to another unnecessary “artificial intelligence winter”, a period where the technology ceases to receive research funding. This would delay or deny the considerable benefits robots can bring not just to industry but society in general.

But even if you’re not optimistic about the benefits of robots, you should still want to get this issue right. Since robots don’t have responsibility, humans are the ones responsible for what robots do. However, as robots become more prevalent, it will increasingly appear as if they actually have their own autonomy and intentions, for which it will seem they can and should be held responsible.

[Image: Meet your new colleague. Shutterstock]

Although there may eventually come a day when that appearance is matched by reality, there will be a long period of time (which has already begun) when this appearance will be false. Even now we are already tempted to categorise our interactions with robots into what we are responsible for and what they are responsible for. This raises the danger of scapegoating the robot, and failing to hold the human designers, deployers and users involved fully responsible.

Moral robots or morally made robots?

It’s not just those reporting on robots who need to get the language right. Policymakers, salespeople, and those in research and development who are designing the robots of today and tomorrow need to keep a clear head. Instead of asking “what’s the best way to make moral robots?”, we should ask “what’s the best way to morally make robots?”.

This subtle change in the language, if adopted, would result in big changes in design. For example, trying to give robots moral laws to follow would require us to also provide them with the human-like common sense needed to apply those laws, a far harder problem. Instead of pursuing such a design dead end, we could aim for machines that are the result of their designers' own morals, just as we try to ethically design non-robotic technology.

In the Volkswagen accident, a company spokesperson reportedly said “initial conclusions indicate that human error was to blame, rather than a problem with the robot”. Other reports spoke of it being human error rather than the robot “being at fault” or “accountable”. This implies that, in other circumstances, the robot could have been considered to blame for the accident.

If there was a “problem with the robot”, be it faulty materials, a misperforming circuit board, bad programming, poor design of installation or operational protocols, that problem – or not anticipating it – would still have been due to human error. Yes, there are industrial accidents where no human or group of humans is to blame. But we mustn’t be tempted by the appearance of agency in robots to absolve their human creators of responsibility. Not yet anyway.

Ron Chrisley received funding between 2009-2014 from the European Commission to help coordinate EUCognition, a network of European Researchers in Cognitive Systems. He is currently an acting director of the European Society for Cognitive Systems.


Read more http://theconversation.com/robots-cant-kill-you-claiming-they-can-is-dangerous-44208
