Daily Bulletin

The Conversation

  • Written by The Conversation
[Image: The military robots in Marvel's Iron Man 2 might not be so far from reality. Marvel Studios/Paramount Pictures]

In an open letter I helped publish on July 28 – which has now been signed by more than 2,700 artificial intelligence (AI) and robotics researchers from around the world – we stated that “starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control”.

A few days later, philosopher Jai Galliott challenged the notion of a ban, recommending that we welcome offensive autonomous weapons – often called “killer robots” – rather than ban them.

I was pleased to read Jai’s recommendation, even if he calls the open letter I helped instigate “misguided” and “reckless”, and even if I disagree with him profoundly.

This is a complex and multi-faceted problem, and it is worth considering his arguments in detail as they bring several important issues into focus.

Four points

Jai puts forward four arguments why a ban is not needed:

  1. No robot can really kill without human intervention

  2. We already have weapons of the kind for which a ban is sought

  3. The real worry is the development of sentient robots, and

  4. UN bans are virtually useless.

Let’s consider the claims in turn.

The first argument is that robots cannot kill without human intervention. This is false. The Samsung SGR-A1 sentry robot being used today in the Korean DMZ has an automatic mode. When in this mode, it will identify and kill targets up to four kilometres away without human intervention. If you are in the DMZ, it will track you and – unless you unambiguously raise your hands in surrender – it will kill you.

The Samsung SGR-A1 can run in autonomous mode and has already been deployed in South Korea.

The second argument is that we already have weapons of the kind for which a ban is sought. To illustrate this, he mentions the Phalanx close-in weapon system used by the Australian Navy. This completely misses the point, as the Phalanx is a defensive weapon system. Our open letter specifically called only for a ban on offensive weapon systems. We have nothing against defensive weapons.

However, whether the weapons we seek to ban exist or not is irrelevant to our core argument that they ought to be banned. Anti-personnel mines existed before a ban was put in place with the Ottawa Treaty. And 46 million such mines have since been destroyed.

Blinding lasers had been developed by both China and the US before the UN ban came into force in 1998. And blinding lasers are not in use in Syria or on any other battlefield around the world today.

So whether or not you believe offensive autonomous weapons already exist, it doesn’t undermine our call for a ban.

The third argument is that the real worry is the development of sentient robots. This is also false. We do not discuss sentient weapons at all. Our call for a ban is independent of whether robots ever gain sentience.

Sentient robots like Hollywood’s Terminator would be a very bad thing. Even non-sentient killer robots running stupid AI would be a very bad thing. We need a ban today to protect mankind from swarms of armed quadcopters, technology that is practically on the shelves of hardware stores today.

The final argument claims UN bans are virtually useless. This also is false. The UN has very successfully banned biological weapons, space-based nuclear weapons, and blinding laser weapons. And even for arms such as chemical weapons, land mines, and cluster munitions, where UN bans have been breached or not universally ratified, severe stigmatisation has limited their use. UN bans are thus definitely worth having.

What’s the endpoint?

What I view as the central weakness of the arguments advanced in Jai’s article is that they never address the main argument of the open letter: that the endpoint of an AI arms race will be disastrous for humanity.

The open letter proposes a solution: attempting to stop the arms race with an arms control agreement.

The position Jai takes, on the other hand, suggests we should welcome the development of offensive autonomous weapons. Yet it fails to describe what endpoint this will lead to.

It also never attempts to explain why a ban is supported by thousands of AI and robotics experts, by the ambassadors of Germany and Japan, by the International Committee of the Red Cross, by the editorial pages of the Financial Times, and indeed (for the time being) by the US Department of Defense, other than with a dismissive remark about “scaremongering”.

Anybody criticising an arms-control proposal endorsed by such a diverse and serious-minded collection of people and organisations needs to explain clearly what endpoint they are proposing instead, and should not advance arguments against a ban that are either false or irrelevant to the issue.

Toby Walsh receives funding from the Australian Research Council, the Department of Communications, the Asian Office of Aerospace Research and Development and the Humboldt Foundation.


Read more http://theconversation.com/we-should-not-dismiss-the-dangers-of-killer-robots-so-quickly-45935
