The autonomous killing systems of the future are already here, they're just not necessarily weapons – yet
- Written by The Conversation
Given that any discussion of “autonomous weapons systems” inevitably prompts comparisons to Terminator-esque killer robots, it’s perhaps little surprise that a number of significant academics, technologists and entrepreneurs – including Stephen Hawking, Noam Chomsky, Elon Musk, Demis Hassabis of Google DeepMind and Apple’s Steve Wozniak – signed a letter calling for a ban on such systems.
The signatories wrote of the dangers of autonomous weapons becoming a widespread tool in larger conflicts, or even in “assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group”. The letter concludes:
The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.
It’s hard to quibble with such concerns. But it’s important not to reduce the issue to science-fiction Terminator imagery, narcissistically assuming that AI is out to get us. The debate has more important human and political dimensions that deserve critical scrutiny.
The problem is that this is not the endpoint, as the signatories write; it is the starting point. The global artificial intelligence arms race has already begun, and its most worrying dimension is that it doesn’t always look like one. The line between offensive and defensive systems is blurred, just as it was during the Cold War, when the doctrine of the pre-emptive strike – the idea that attack is the best defence – essentially merged the two. Autonomous systems can be reprogrammed from one to the other with relative ease.
Autonomous systems in the real world
The Planetary Skin Institute and Hewlett-Packard’s Central Nervous System for the Earth (CeNSE) project are two approaches to creating networks of intelligent remote sensing systems that would provide early warning of events such as earthquakes or tsunamis – and automatically act on that information.
Launched by NASA and Cisco Systems, the Planetary Skin Institute aims to build a platform for planetary eco-surveillance, capable of providing data for scientists, but also of monitoring extreme weather and carbon stocks, spotting actions that might breach treaties, and identifying all sorts of potential environmental risks. It’s a good idea – yet the hardware and software, the design and the principles behind these autonomous sensor systems and behind autonomous weapons are essentially the same. Technology is indifferent to its use: the internet, GPS satellites and many other systems in wide use today were military in origin.
As an independent non-profit, the Planetary Skin Institute says its goal is to improve lives through its technology, claiming to provide a “platform to serve as a global public good” and to work with others on further innovations towards that end. What it doesn’t mention is the potential for the information it gathers to be immediately monetised: real-time sensor data could automatically feed worldwide financial markets, triggering automated buying and selling of shares.
The Planetary Skin Institute’s system offers remote, automated sensing that provides real-time tracking data worldwide – its slogan is “sense, predict, act” – the same principle, in fact, on which an autonomous AI weapons system would work. The letter describes AI as a “third revolution in warfare, after gunpowder and nuclear arms”, but the capacity to build such AI weapons has existed since at least 2002, when drones made the transition from remote-controlled aircraft to smart weapons able to select and fire upon their own targets.
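To see why the same architecture serves both benign monitoring and weaponry, consider a minimal, purely illustrative sketch of a “sense, predict, act” control loop. Nothing here reflects the Planetary Skin Institute’s actual software: the sensor reading, the risk score and the threshold are all hypothetical, invented for illustration.

```python
import random
import time

# Hypothetical sketch of a "sense, predict, act" loop.
# The sensor, the prediction step and the action are invented for
# illustration; they do not reflect any real Planetary Skin or CeNSE code.

ALERT_THRESHOLD = 0.8  # hypothetical risk score above which the system acts


def sense():
    """Read one (simulated) measurement from a remote sensor."""
    return {"seismic_amplitude": random.random()}


def predict(reading):
    """Turn a raw reading into a risk score in [0, 1].

    A real system would apply a trained model here; a pass-through
    stands in for it.
    """
    return reading["seismic_amplitude"]


def act(risk):
    """Trigger an automated response when predicted risk is high.

    Whether that response is a tsunami warning, a market trade or a
    weapons command is a matter of configuration, not architecture.
    """
    if risk > ALERT_THRESHOLD:
        print(f"ALERT: predicted risk {risk:.2f} - automated response triggered")
    else:
        print(f"risk {risk:.2f} - no action")


if __name__ == "__main__":
    for _ in range(5):  # a deployed loop would run continuously
        act(predict(sense()))
        time.sleep(0.1)
```

The point of the sketch is architectural: the loop itself is neutral, and only what sense() reads and what act() triggers determines whether the system monitors the environment, trades shares or fires on targets.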
The future is now
Instead of speculating about the future, we should deal with the legacy of autonomous systems we already have: one inherited from the World War II and Cold War-era complexes linking university, corporate and military research and development. DARPA, the US Defense Advanced Research Projects Agency, is itself a Cold War legacy, founded in 1958 and still pursuing a very active high-risk, high-gain model of speculative research.
Research and development innovation spreads to the private sector through funding schemes and competitions – in essence, Cold War structures continued through commercial development. The “security industry” is already tightly bound, structurally, to government policy, military planning and economic development. To consider banning AI weaponry is therefore to raise wider questions about political and economic systems that favour military technologies because they are economically lucrative.
Placing the nuclear bomb in its historical context, the author E.L. Doctorow said: “First, the bomb was our weapon. Then it became our foreign policy. Then it became our economy.” We must critically evaluate the same trio – weapon, foreign policy, economy – as it applies to autonomous weapons development, so that the discussion dwells not on the technology itself but on the politics that allows and encourages it.
The authors do not work for, consult for, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.