
New UK position on autonomous weapons - recognises lines need to be drawn but lacks detail or leadership


The following statement is an initial reaction from members of the UK Campaign to Stop Killer Robots

As part of the long-overdue Defence Artificial Intelligence Strategy, the UK Government has unveiled a new policy on autonomous weapons - so-called “killer robots”.

Published as an annex on “Lethal Autonomous Weapons Systems” to an accompanying policy paper, the new position recognises that systems which identify, select and attack targets without “context-appropriate human involvement” would be unacceptable.

This is significant: it continues to recognise that there is a line that should not be crossed in terms of machines making decisions in combat. It is also positive that the new UK position recognises that ethical principles, along with the law, are important for determining where this line should be drawn. We also welcome that the Government has dropped the problematic language linked to the notion of “full autonomy”, previously defined by the UK as involving a “system capable of understanding higher level command intent”. That sci-fi-esque framing centred concern on futuristic capabilities that are a long way off and may never be realisable, detracting from urgent concerns about weapons on the brink of production. The new position further asserts that mechanisms of human authority, responsibility and accountability will always be present when force is used.

However, the position gives no indication of how “context-appropriate human involvement” is to be assessed or understood. In this new formulation, “context-appropriate human involvement” could mean almost anything, and the lack of detail about where the UK draws the line amounts to the UK public being told by the military to “leave it to us” to determine what is appropriate. This is emblematic of our previous concerns that this policy position, like the wider Defence AI Strategy, was formulated without public consultation or any attempt by ministers to have a national conversation on this issue.

What’s missing?

In the international discussion on autonomous weapons there is widespread recognition that certain factors are necessary for meaningful human control, or sufficient predictability, over autonomous systems, including:

  • human control over the duration and geographic scope of an autonomous system’s operation – these are vital to making judgements about the possible effects of a system’s use.
  • human understanding of the target profiles a system uses – this is vital to understanding what will trigger the system to apply force, including the possibility of false positives (targeting things that are not the intended targets) – and this is also an area where AI and machine learning raise particular concerns.

Regrettably, these factors are not mentioned in the policy.

There is also no indication of whether the UK considers it acceptable to allow machines to identify and automatically fire on human beings. The International Committee of the Red Cross (ICRC) and civil society organisations have urged states to reject autonomous systems that would apply force directly to people – but the UK policy position does not suggest that targeting humans with autonomous systems raises any particular concerns.

The absence of any detail about how the lines should be drawn makes it difficult to see how the UK can provide leadership in this space. Indeed, the policy devotes a whole paragraph to arguing that global governance in this area will be difficult. Global governance is always challenging – but all the more so if you approach it without a clear vision of what needs to be achieved.

It is also concerning that the policy seems to rest on the mistaken premise that legally binding rules would necessarily constrain technological advances in AI. In fact, scientists and developers around the world are calling for more specific and tighter regulation of autonomous weapons and dual-use technology to protect their innovations from being misused.

In its final section, the policy sets out a very loose set of ambitions, notable for declining to suggest that new international law is needed on the issue.

The UK says that the UN Convention on Conventional Weapons (CCW) will be its primary avenue for work. But it neglects to note that a majority of states in that forum are calling for new international law, or that the CCW is, by its very nature, a forum intended to create new international law. Indeed, the UK is one of the states that have refused to support a mandate for work on a new legal Protocol. Worse still, the forum is deadlocked by its rules of procedure and incapable of agreeing any consensus outcome in the current political climate – a climate that demands new rules and standards for new weapon technologies, and one in which the UK stands to benefit from the establishment of such standards.

Overall, the UK position shows at least a continued recognition that there are lines that need to be drawn to establish norms and standards around autonomy in weapons systems. But it contains none of the clarity and vision needed to suggest that the UK will have a leading role in bringing those norms and standards about.

The positive side of the indistinct UK policy is that there is little in it that should bar the UK from joining processes led by others. In the near future, the UK is likely to introduce proposals, or tie itself to US-led initiatives, in the CCW (and possibly beyond) that seek to establish non-binding mechanisms – such as a compendium of good practice or a code of conduct – to ensure that autonomous weapons systems are used in compliance with international humanitarian law (IHL). Although these voluntary measures are important for documenting good practice and building confidence, they are not sufficient to ensure civilian protection, compliance with IHL and ethical acceptability.

We are grateful for the tirelessness of parliamentarians who have championed a more progressive UK policy on this issue, and are pleased that their work, along with pressure from civil society, appears to have played a role in prompting the Government to explain its approach in much fuller terms. We look forward to working together to urge the UK to strengthen and add detail to this policy, to ensure that decisions over life and death are never delegated to a machine.

Photo: Defence Artificial Intelligence Strategy cover.