Autonomous Decision-Making

Sorting fiction from reality: How much autonomy should machines be given on the battlefield to make decisions without human intervention?

Anita Hawser
24 November 2017
 
The PD-100 Black Hornet mini-UAV used by the British Army to see round corners and inside compounds is remotely piloted. But what level of autonomy should unmanned systems be given in future to make decisions on their own?
(Photo: MoD/Crown Copyright)

 

A new study published by the Stockholm International Peace Research Institute (SIPRI) casts doubt on the feared rise of fully autonomous weapon systems making decisions on their own without human intervention. The report, Mapping the Development of Autonomy in Weapon Systems, aims to shed light on current developments in autonomy in weapon systems and thereby provide important insights for informed international discussions.

It was published against the backdrop of the first meeting of the Group of Governmental Experts on Lethal Autonomous Weapon Systems (LAWS) at the United Nations in Geneva. An open letter to the UN, signed by more than 100 artificial intelligence (AI) experts and led by Tesla CEO Elon Musk, warned of the dangers of “unregulated” developments in AI, saying such developments posed the gravest danger to humankind if left unchecked.

“As companies building the technologies in AI and robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm,” the letter states. “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

The question of whether lethal autonomous weapons, or LAWS, should be regulated is the focus of an intergovernmental expert discussion within the framework of the 1980 UN Convention on Certain Conventional Weapons (CCW).

SIPRI’s study focused on the technology that enables weapon systems to acquire targets autonomously, which it points out has existed since the 1970s. However, it says current targeting technology still “has limited perceptual and decision-making intelligence.”

“Autonomous systems need to be more adaptive to operate safely and reliably in complex conflict environments,” says Dr Vincent Boulanin, SIPRI’s expert on emerging military technologies and the main author of the report. “Given the limitations of current technology, humans have to play the crucial role of receiver of tactical information and arbiter of targeting decisions on the battlefield,” adds Maaike Verbruggen, a PhD candidate at the Vrije Universiteit Brussel and co-author of the report.

Most of the armed unmanned aerial vehicles operated by a select few countries are remotely piloted and backed up by a team of sensor operators and data analysts.

 

 

SCIENCE FICTION VS. REALITY

Having a human in the decision-making loop is unlikely to change any time soon, says Boulanin. “Autonomy may transform the way humans interact with weapons, but it will never completely replace them,” he adds.

The UK Ministry of Defence, alongside other NATO allies, has outlined new military doctrine that opposes the development of fully autonomous weapon systems that “operate without trained controllers and traditional chains of command.”

But there are some areas—navigation, flight and obstacle avoidance—where full autonomy may make sense and enable missions to be performed more effectively. For example, a joint UK and US research programme is exploring how driverless trucks in convoy and Hoverbike drones could deliver supplies in the most dangerous “last mile” up to the battlefield.

 

Autonomous resupply on the frontline using unmanned aerial vehicles (Photo by Pvt. Gabriel Silva)

 

 

The UK-US coalition created a “semi-autonomous leader-follower convoy to bring to life concepts which will provide solutions to de-risk the last mile of logistics support.” In this context, autonomy is being used to try to save soldiers’ lives.
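To make the leader-follower concept concrete, the sketch below shows one way a follower vehicle’s gap-keeping logic could work: it retraces the leader’s recorded “breadcrumb” trail while holding a fixed standoff distance. The standoff distance, gains and simple 2D geometry are assumptions made for illustration and are not drawn from the UK-US programme itself.

import math

# Minimal sketch of the leader-follower idea behind a semi-autonomous convoy.
# The follower replays the leader's recorded path ("breadcrumbs") while keeping
# a fixed standoff distance. Names, gains and the flat 2D kinematics are
# illustrative assumptions, not the UK-US programme's actual implementation.

STANDOFF_M = 25.0   # desired gap to the leader, metres (assumed)
SPEED_GAIN = 0.4    # proportional gain on the gap error (assumed)
MAX_SPEED = 8.0     # speed cap for the follower, m/s (assumed)

def follower_command(follower_xy, leader_trail):
    """Return (speed, heading) steering the follower along the leader's trail."""
    if not leader_trail:
        return 0.0, 0.0
    # Aim at the oldest breadcrumb so the follower retraces the proven route
    # rather than cutting corners toward the leader's current position.
    target = leader_trail[0]
    dx, dy = target[0] - follower_xy[0], target[1] - follower_xy[1]
    heading = math.atan2(dy, dx)
    # Slow down as the gap to the leader closes on the standoff distance.
    gap = math.hypot(leader_trail[-1][0] - follower_xy[0],
                     leader_trail[-1][1] - follower_xy[1])
    speed = max(0.0, min(MAX_SPEED, SPEED_GAIN * (gap - STANDOFF_M)))
    return speed, heading

# Example: the leader has driven east, the follower starts at the origin.
trail = [(5.0, 0.0), (10.0, 0.0), (15.0, 0.0), (40.0, 0.0)]
print(follower_command((0.0, 0.0), trail))  # non-zero speed, heading of 0 radians

The point of the design is that the follower sticks to the route the lead vehicle has already proven, rather than navigating independently, which keeps the autonomy bounded and predictable.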

But when it comes to lethal autonomous weapon systems, the issue of autonomy, and particularly full autonomy, where machines make decisions without any human intervention whatsoever, conjures up a minefield of legal, ethical and moral challenges, regardless of whether it saves lives. Boulanin says the focus of the UN’s discussions should be on the impact of autonomy on human control. “The core questions of the ethical and legal debate should be: How is autonomy changing the way humans make decisions and act in warfare, and what should be done to ensure that they maintain adequate or meaningful control over the weapons they use?”

SIPRI’s report highlights the need to demystify technological developments such as AI and machine learning so that how these technologies are likely to be used can be more clearly separated from “the realms of science fiction”.

“Machine learning is a technological development that raises many concerns, and also causes confusion in the discussion on the future of autonomy in weapon systems,” says Boulanin.

While militaries may not be in favour of handing all the decision-making over to a machine or weapon operating autonomously, there is an argument for AI and machine learning to be used to help humans respond more quickly to threats.

A case in point is the advent of hypersonic anti-ship missiles, which are being developed by countries like Russia and China. “With speeds in the region of 2,000 metres per second, platforms will soon face an even greater threat: weapon engagements that are so fast, that traditional man-in-the-loop responses will be too slow to counter their deadly effect,” Paul Bradbeer, a technical sales manager in MASS’s Electronic Warfare Operational Services division, writes in the Summer 2017 edition of our magazine.
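The arithmetic behind that warning is stark: at 2,000 metres per second, even a generous detection range leaves only seconds to react. The short calculation below works through a few illustrative detection ranges; the ranges are assumptions for the sake of the example, not figures from Bradbeer’s article.

# Back-of-the-envelope reaction times against a 2,000 m/s missile. The
# detection ranges are illustrative assumptions, not published figures.
MISSILE_SPEED_MS = 2_000.0

for detection_range_km in (40, 20, 10):
    seconds = detection_range_km * 1_000 / MISSILE_SPEED_MS
    print(f"Detected at {detection_range_km} km -> {seconds:.0f} s to respond")

# Detected at 40 km -> 20 s to respond
# Detected at 20 km -> 10 s to respond
# Detected at 10 km -> 5 s to respond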

One answer, says Bradbeer, is to draw on advances in machine learning (a form of AI) to move away from labour-intensive and time-sapping man-in-the-loop processes for initiating electronic warfare responses. In other words, machine learning could help the military respond more effectively to the decision-making challenges posed by "congestion, complexity and confusion in the EW space, which are only exacerbated by the threat of future hypersonic missiles," says Bradbeer.
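As a purely illustrative sketch of the kind of triage Bradbeer has in mind, the toy classifier below labels emitter tracks as hostile or benign by matching them to the nearest class centroid learned from a handful of labelled examples. The features, values and labels are invented for the example and do not describe MASS’s systems or any real EW dataset.

# Toy illustration of machine-learning-assisted threat triage in the EW picture:
# classify emitter tracks as "hostile" or "benign" with a nearest-centroid rule
# learned from labelled examples. All features and values are invented.
from statistics import mean

# (closing speed in m/s, pulse repetition frequency in kHz) -> label
TRAINING = [
    ((1900.0, 30.0), "hostile"),
    ((1700.0, 25.0), "hostile"),
    ((250.0, 2.0),   "benign"),
    ((300.0, 1.5),   "benign"),
]

def fit_centroids(samples):
    """Average the feature vectors of each class."""
    centroids = {}
    for label in {lbl for _, lbl in samples}:
        points = [feats for feats, lbl in samples if lbl == label]
        centroids[label] = tuple(mean(dim) for dim in zip(*points))
    return centroids

def classify(features, centroids):
    """Assign the track to the nearest class centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(features, centroids[lbl])))

centroids = fit_centroids(TRAINING)
print(classify((1850.0, 28.0), centroids))  # -> hostile
print(classify((280.0, 1.8), centroids))    # -> benign

In practice the human operator would remain the arbiter; the machine’s role, as Bradbeer describes it, is to compress the time spent sorting and prioritising tracks before a response is initiated.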

The application of AI in the military doesn’t end there. The UK’s Royal Navy wants to use AI to develop what it calls a “ship’s mind” for its warships. “This will enhance efficiency in the Fleet and allow fast, complex decisions to be made automatically, which will make warships and submarines safer and more effective in fast-moving, war-fighting situations,” the Navy states.

UK think tank Chatham House says in a paper on AI and future warfare, published in January 2017, that military and commercial robots of the future will incorporate some form of AI to enable them to complete missions on their own. “Banning an autonomous technology for military use may not be practical given that derivative or superior technologies could well be available in the commercial sector,” it writes.

Autonomy on the battlefield is still largely confined to the air domain and areas such as ordnance detection and disposal, although it is starting to creep into other areas such as mine warfare and anti-submarine warfare. Despite these advances, Chatham House says there is a large disparity between commercial and military R&D spending on autonomous systems development. It says the defence industry is in danger of falling behind the commercial sector, which is pouring money into technologies like driverless cars.

 

Kalashnikov releases photos of supposedly fully automated combat robots with machine guns (Copyright: Kalashnikov)

 

“The rapid development of commercial autonomous systems could normalise the acceptance of autonomous systems for the military and the public,” Chatham House states in its report, “and this could encourage state militaries to fund the development of such systems at a level that better matches investment in manned systems.”

Either way, greater autonomy and the use of AI on the battlefield are unlikely to go away. The benefits to militaries are too significant for these technologies to be ignored purely on ethical and moral grounds. And other militaries, namely those of Russia and China, are already developing battlefield technologies that will use AI to make decisions autonomously.

Russia’s Kalashnikov recently released images of combat robots with machine guns mounted on top. According to the state-owned TASS media agency, Kalashnikov plans to develop “a fully automated combat module” based on the technology.