The UK must take a harder stance on lethal autonomous weapons systems and lead the way on stricter policies and restraints, whilst acknowledging and encouraging the rapid, safe development of artificial intelligence (AI).

Written by Ellie Wong. 

In June 2022, the Ministry of Defence released its Defence Artificial Intelligence Strategy, setting out the Government’s ambition to become “the world’s most effective, efficient, trusted and influential Defence organisation for our size”. The Government seeks to achieve this in part through cross-sector cooperation, global collaboration with international allies, and shaping AI developments that promote security, stability and democracy.

Established in 2021, the Defence Artificial Intelligence Centre (DAIC) marks a pivotal moment in the UK’s AI landscape, actively promoting “military use cases by working collaboratively with international partners across Government, academia, and industry.” One of the Government’s key assertions about the DAIC is its ability to "enhance the mass, persistence, reach, and effectiveness of our military forces."

In the strategy, the Government proposes mandating that equipment programmes be ‘AI ready’ and promises to invest in AI R&D “where emerging technologies have potential to provide a decisive war-fighting edge”. This includes investment in Intelligence, Surveillance and Reconnaissance (ISR) and in advanced weaponry and strategies such as hypersonic and directed-energy weapons.

The British military employed AI in an operation for the first time, using machine learning to process masses of complex data. The 20th Armoured Infantry Brigade, which trialled the system, found that planning time for the human team was significantly reduced and that the engine produced results of equal or even higher quality.

Despite the success of this particular operation, campaign groups argue that the Government could do more to address the concerns surrounding the use of AI in combat, in particular the development of lethal autonomous weapons systems, also known as “killer robots” or “slaughterbots”. These are weapons systems that use AI to “identify, select, and kill human targets without human intervention”.

The United Nations Association - UK (UNA-UK) criticises the Government’s strategy for failing to communicate its stance on whether it is ever acceptable for automated weapons to identify and automatically fire on human beings. The UNA-UK also raises concerns over the Government’s ambiguity where it does address human involvement in autonomous weapons.

While the Government emphasises its commitment to “context-appropriate human involvement” in autonomous weapons systems, the phrase remains vague and is not backed by concrete policy. The strategy fails to address the existing and potential measures that would establish more substantial human control over autonomous systems.

The Campaign to Stop Killer Robots, a network of human rights groups and concerned scientists, and the International Committee of the Red Cross (ICRC) are urging states to prohibit “slaughterbots”. The ICRC recommends three core pillars: ruling out human targets, restricting unpredictability, and requiring human control.

However, at a 2015 UN conference, the UK opposed an international ban on developing killer robots, claiming that “international humanitarian law already provides sufficient regulation for this area” and that the UK itself “is not developing lethal autonomous weapons systems”.

That opposition was echoed seven years later. Whilst the strategy acknowledges that there must be “safe and responsible military development and use”, it lacks concrete measures, relying instead on ambiguous language about “UK values” and “ethical use of these technologies”.

But even if the UK is not building killer robots, the Government should not be naive about their existence. According to a UN report, a Turkish Kargu-2 drone was allegedly deployed to “hunt down” members of the Libyan National Army. The report suggests the drones were programmed to “attack targets without requiring data connectivity between the operator and the munition”, providing what it calls a “true ‘fire, forget and find’ capability”.

The war in Ukraine has showcased the large-scale use of drones on both sides, and experts caution that the “proliferation of unmanned aerial vehicles is driving militaries [...] to hand over more and more control to artificial intelligence”.

Whilst a “machine-versus-machine battlefield” could conceivably reduce the risk to military troops, there are serious concerns surrounding killer robots, such as their lack of human judgement and the algorithmic biases that could in fact shift the burden of harm onto civilian populations.

Ahead of the UK-led summit on AI this autumn, the Government must sharpen its position on lethal autonomous weapons systems and seize the opportunity to pave the way for stricter regulation. Ultimately, the Government must recognise the benefits of AI whilst seriously weighing the significant risks of killer robots.

Ellie Wong is currently a researcher with the Center for Countering Digital Hate.