UK Stop Killer Robots - 2022 Parliamentary Summary and Analysis

Throughout 2022, UNA-UK worked with parliamentarians and the UK Government to raise awareness of concerns over lethal autonomous weapons systems - an emerging type of weapon system which can identify and kill people without meaningful human control.

UNA-UK conducted this work with our partners in the UK Campaign to Stop Killer Robots - a coalition to which we provide the coordinator. The following resource is intended as a detailed listing of recent parliamentary activity on this issue, together with campaign analysis.

UK Stop Killer Robots: Summary of recent UK parliamentary engagement on autonomous weapons and analysis of the UK government's answers to written questions submitted by parliamentarians

As a result of the committed work of the informal cross-party group of UK parliamentarians concerned about growing autonomy in weapons systems, the past two years have seen a huge increase in parliamentary scrutiny of the government’s position on autonomous weapons systems (AWS). Substantive engagement with the challenges posed by increasing autonomy in weapons systems has been brought to parliamentary debate. These developments will be crucial to influencing UK policy and moving the government’s position on this issue towards effective international regulation of autonomy in weapons systems.

The following detailed round-up picks out key findings and opportunities based on a raft of relevant parliamentary activities, including over 20 parliamentary questions, 10 oral parliamentary interventions and 3 related events with the participation of parliamentarians from all major UK political parties.

The UK Stop Killer Robots campaign would like to thank all the parliamentarians for their work to drive growing interest in and attention to discussions around the implications of AWS in both houses of parliament.

Recent highlights in parliament

  • In April 2021, the Campaign engaged with the Integrated Review of Security, Defence, Development and Foreign Policy, which announced ambitious investments in AI and autonomy technology for military applications. Ahead of the Review we contributed to the process via the consultation; following its publication, we launched a campaign reaction and a series of briefings, and conducted outreach to relevant stakeholders to call for such investments to be accompanied by a clear ethical code and upgraded arms control regulations to safeguard human rights and ethical and moral standards.
  • On 1 November 2021, several of our parliamentary champions spoke in a House of Lords debate, triggered by Lord Clement-Jones, Liberal Democrat spokesperson for Digital and former Chair of the House of Lords Artificial Intelligence Select Committee. As stated in the debate analysis we published, the debate saw the Government minister under significant pressure from all sides of the House. Speaking from the Shadow Front Bench, Lord Coaker asked for an unequivocal commitment from the Government that there will “always be human oversight when it comes to AI”. Former Defence Secretary, Lord Browne, asked for the UK to publicly reaffirm its commitment to ethical AI, while Lord Clement-Jones voiced the campaign's objective in favour of a legally binding instrument on AWS. This builds on the corresponding call made in the House of Commons by the SNP's Alyn Smith MP (another of our champions) back in December 2020.
  • On 23 November 2021, an amendment tabled by Lord Browne to the Armed Forces Bill was debated which would require the UK to review the implications of AI in weapons systems on British armed forces as well as the role played by the UK in international efforts to regulate autonomous weapons systems. During the debate, Campaign analysis was used to rebut the UK’s updated and comparatively more ambiguous position which was delivered earlier that month. The amendment was ultimately withdrawn but during the debate several Champions spoke in detail outlining concerns to the minister and scrutinising the UK’s position - the Minister committed to engaging with parliamentarians and civil society on this issue.
  • In January 2022, civil society organisations teamed up with parliamentarians from all major parties to discuss the link between murky financial investments and killer robots in an event entitled: Opaque Investments, autonomous weapons and the decay of democracy. The event saw contributions from Kevin Hollinrake MP, Baroness Bennett, Alyn Smith MP, Lord Browne and Lord Clement-Jones.
  • On 31 January 2022, for the first time a UK Secretary of State (the Foreign Secretary) had to respond to an oral question on LAWS in the House of Commons.
  • On 18 May 2022, during the Queen's Speech debate, Lord Browne reinforced the call for an international legal instrument to regulate autonomous weapon systems: "Advocating international regulation would not be abandoning the military potential of new technology; it is needed on AWS to give our industry guidance to be a sci-tech superpower without undermining our security and values. Weapons that are not aligned with our values must not be used to defend our values." Citing the work of the campaign, during the same debate Lord Sentamu also emphasised that the call for regulation is meant to safeguard the use of scientific knowledge, rather than limit scientific advancement in the area of AI.
  • On 25 May 2022, during a debate on the Report from the Liaison Committee "AI in the UK: No Room for Complacency," several members of this group addressed the issue of AWS in their remarks. Lord Browne addressed the rapid trend towards increasing autonomy in weapons systems and called on the UK to "establish leadership domestically and internationally, when it is ethically and legally appropriate to delegate to a machine autonomous decision-making about when to take an individual’s life." Citing the need to develop an appropriate ethical framework for the development and application of AI, the Lord Bishop of Oxford unequivocally stated that the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
  • On 21 June 2022, Lord Browne pressed for clarification on the goals and ambitions of the Defence and AI Strategy, calling for either the Secretary of State for Defence or the Prime Minister to explain to Parliament the plans for the Strategy's implementation, a call that is supported by our campaign.
  • On 7 September, the All Party Parliamentary Group on AI held an evidence meeting on national security and defence looking at autonomous weapons, chaired by Stephen Metcalfe MP and Lord Clement-Jones, to which the Lord Bishop of Oxford, Dr. Mariarosaria Taddeo (Oxford Internet Institute), Dr. Sidharth Kaushal (RUSI) and campaign members Taniel Yusef (WILPF) and Verity Coyle (Amnesty International) also gave evidence.
  • In August-September 2022, nine parliamentary questions were submitted to obtain clarification on the Defence AI strategy (UIN HL1997, UIN HL1998, UIN HL2031, UIN HL2032, UIN HL2033, UIN HL2086, UIN HL2087, UIN HL2088, UIN HL2089). As these are the latest responses from the government, the campaign has produced a detailed analysis (see below) of what these answers tell us about the government’s current position and what are the areas where further debate and clarification is needed.
  • On 18 October, AI expert Dr Stuart Russell gave a high profile lecture to House of Lords members, hosted by the Lord Speaker and chaired by Lord Clement-Jones. This initiative also resulted in a blog on the Politics Home website by Dr Russell, reinforcing his call for action on lethal autonomous weapons to UK political audiences.

Upcoming activity

More activity is planned in the coming months, including:

  • We hope that the proposal for a special one-year select committee inquiry in the House of Lords on autonomous weapons will be accepted (this would represent a follow up to the AI select committee inquiry published in 2018, which identified autonomy in weapons systems as a particular area of concern).
  • The Campaign will submit evidence to the House of Commons Science and Technology Select Committee's inquiry on the Governance of Artificial Intelligence.

Our reaction to the most recent government answers

In response to the UK government’s release of its Defence AI Strategy and accompanying policy statement ‘Ambitious, Safe, Responsible’ (see UK Stop Killer Robots campaign’s reaction here), and following a meeting between some parliamentarians in this group, MoD officials and the Minister, nine Parliamentary Questions were tabled to seek official answers on some key concerns and unanswered issues around UK government policy on autonomous weapons. (The full questions and answers are available at: UIN HL1997, UIN HL1998, UIN HL2031, UIN HL2032, UIN HL2033, UIN HL2086, UIN HL2087, UIN HL2088, UIN HL2089).

In general, the key things we believe must be pursued with the UK government as it develops concrete guidance and mechanisms for oversight to implement its broad strategy and policy around AI and AWS are:

  • To generate discussion about the details of ‘context appropriate human involvement’ at the stage of weapons systems’ use, through raising specific policy questions (rather than open-ended requests to the government for more detail): for example, at the time of use of weapons systems, does the government agree that users need to understand the actual effects a system will create in the specific context in order for such a use to be acceptable?
  • To press the government to fully examine the ethical and human rights issues posed by autonomous weapons systems targeting people. For example, does the government consider systems that identify people as targets based on biometrics, or on the basis of perceived gender/racial/age characteristics, acceptable?
  • To encourage greater transparency in the development of policy and in developing meaningful mechanisms for regulation, oversight and accountability in its implementation, including through continued engagement with parliamentarians, civil society and other experts and stakeholders

The campaign’s reaction to and analysis of the details of the government’s responses to the latest parliamentary questions is as follows:

  • The government continues to maintain that new international law on autonomous weapons is not needed - despite wishing to set “clear international norms for the safe and responsible development and use of AI.” The government opposes the “creation and use” of systems “which operate without meaningful and context-appropriate human involvement throughout their lifecycle”, saying these are unacceptable (answer to question UIN HL2031 tabled by the Lord Bishop of Oxford, and the government’s policy statement). The government states that the UK’s priority is “setting clear international norms” to ensure legal compliance with International Humanitarian Law (IHL) through “meaningful and context-appropriate levels of human control” (answer to question UIN HL2032 tabled by the Lord Bishop of Oxford). Nevertheless, in answer to question UIN HL2033 tabled by the Lord Bishop of Oxford on whether the government would therefore support an international legal instrument containing prohibitions and regulations to achieve this goal (a policy approach around which convergence is developing internationally) the government re-stated its position that it did not, as IHL already provided a “framework for regulation.” Though IHL does indeed provide a starting point on many challenges relating to autonomy in weapons systems, it is clear from almost a decade of international debate that IHL is not sufficient for addressing all concerns - additional norms are needed. These 'clear international norms' desired by the government would be best achieved through a legal instrument, as the strongest tool available to the international community.
  • The government’s approach on the value of legal regulation, and promoting research, appears inconsistent. As justification for its opposition to an international legal instrument on autonomous weapons, the government also stated in answer UIN HL2033 that “a legal instrument would have to ban undefined systems” given the current lack of international consensus on definitions - and that this would impact “legitimate research” on AI and autonomous technologies. This position raises apparent inconsistencies:
    • With respect to autonomous weapons systems, the UK government frequently argues against the value of new international law to regulate the issue. In a context of pressure on international norms and the UK’s commitment to a ‘rules based international order’ this is both disingenuous and dangerous. Having to “ban undefined systems” is a mischaracterisation of the task of a potential future negotiating process - whose purpose would be to develop clear common understandings and rules. There are clear incentives for the UK government to be involved in shaping these multilaterally agreed rules. At the domestic level, the government will be developing definitions and rules itself - having decided that certain systems and uses are unacceptable, and that norms must be developed around human control - which would contribute positively to any international legal process.
    • The reasoning around the negative impacts of an international treaty on AI research also appears at odds with government policy in other areas. There is a specific government inconsistency on both the desirability of new law and whether new legislation would stymie innovation when it comes to autonomous maritime vehicles vs autonomous armed vehicles. The UK Government has long maintained that new domestic and/or international legislation on autonomous weapons could stymie UK innovation and damage the tech sector, as part of its stated aversion to new legal frameworks. However, on the related issue of autonomous maritime vehicles, the UK argues in the opposite direction. In response to a Lords International Relations and Defence Committee report into the United Nations Convention on the Law of the Sea (UNCLOS), the UK Government states that a new domestic legal instrument should be developed for remotely piloted and autonomous maritime vehicles “when parliamentary time allows”. In this case their stated motivation is to “prevent regulatory hurdles becoming a blocker for the pace of innovation”. This appears wholly at odds with their position on AWS.
      It is worth noting that in contrast to the UK Government, the tech sector widely supports the creation of new legal frameworks to regulate autonomy in weapons systems. It is the widespread view of tech leaders that new international and national legal frameworks are required to regulate autonomy in weapons systems not only because delegating life and death decisions to a machine would be legally and morally wrong, but also to protect the work developed for peaceful purposes from being misused through incorporation into AWS - see the Future of Life Pledge for more information.
  • The government limits the forums it considers important for working with partners and allies on AWS/AI issues: it must remain open to participating in a wider range of forums. In answer UIN HL2087 to a question by Lord Clement-Jones on how the government will respond to the challenges to global governance it identifies in the area of AWS, the government outlines a general approach to “mitigating the potential impacts of AI, including its proliferation, misuse and potential for misunderstanding and miscalculation” through working with partners in forums including the Convention on Conventional Weapons (CCW), NATO, the Global Partnership on Artificial Intelligence (GPAI), UNESCO and the Council of Europe. In answer to question UIN HL2089 it also outlines an aim to build consensus with allies and partners for “safe, responsible and ethical use…globally”. The government’s general commitment to international cooperation and the building of consensus is welcome - notwithstanding its resistance to legal regulation and the gaps in its approach in our view (as outlined below). However, to be consistent with its commitment to a rules-based global order, it must participate in all relevant forums and discussions looking to build global norms in this key area for international security - including any discussions on a legal instrument that are likely to emerge in the coming years.
  • The government is not engaging with specific concerns around autonomous weapons targeting people: In questions UIN HL2032 and UIN HL2031 tabled by the Lord Bishop of Oxford, the government is asked whether it can provide assurance that weapons systems will “remain under human supervision at the point when any decision to take a human life is made” and what assessments it has made of the “specific ethical problems raised by autonomous weapons that are used to target humans”. The answers neither provide this assurance nor state that such an assessment has been made (only that the government is aware in general terms of ethical concerns around the impact of “misuse” and legal violations). For the campaign, autonomous weapons systems targeting people are unacceptable and should be prohibited as part of a legal instrument. There are clear concerns around human dignity, civilian protection, legal challenges and discriminatory outcomes. The International Committee of the Red Cross (ICRC) and a large number of states at the CCW have also raised these concerns. More substantial engagement with these issues is needed by the government: and it is not enough to say that autonomous weapons systems targeting people are not yet prohibited by IHL.
  • The government’s approach on maintaining meaningful human control has useful elements - but more detail is needed: In answer to question UIN HL2032 regarding when human supervision may act as "an unnecessary and inappropriate constraint on operational performance", as stated in the policy paper Ambitious, Safe, Responsible, the government noted that human control could not only take the form of real-time human supervision but be also "exercised through the setting of a system's operational parameters”. It is the perspective of the campaign, the ICRC and others that exercising meaningful human control can involve both supervision and measures such as limiting the time and area of operation and types of target attacked by weapons systems that apply force following human intervention based on processing sensor data (such as the missile defence systems mentioned in this answer). The crucial issue is where these lines are drawn, beyond general principles. Acknowledging that this cannot be fitted into a parliamentary answer (and that, in international discussions, the UK has proposed this is something for states to discuss together in working on ‘best practices’), more detail is needed - it would be beneficial to approach the requirements for meaningful human control through discussions over specific policies that are needed to ensure clear and transparent parameters of where and how lines should be drawn. It is encouraging that the government emphasises that responsibility must be “underpinned by a clear and consistent articulation of the means by which human control is exercised across the system lifecycle, including the nature and limitations of that control” - clear and transparent articulations of what this means in practice in different scenarios will be necessary to evaluate the UK’s approach. 
In response to question UIN HL2086 tabled by Lord Clement-Jones on how the concept of “context appropriate human involvement” will be elaborated, the government notes that it is exploring processes for this. It notes that the results of article 36 weapons reviews would be “fed into usage instructions and authorities on particular systems” to ensure “lawful and responsible use” is understood. Though a welcome articulation of procedure, including the reference to “responsible use”, such reviews are not transparent, and other mechanisms for oversight over the lines drawn are urgently needed.
  • More detail on transparency, accountability and oversight mechanisms for the implementation of the Defence and AI strategy and accompanying policies is needed: In answer to question UIN HL1998 tabled by Lord Clement-Jones on the steps that will be taken to ensure researchers operate in an environment of adequate controls to prevent their work being used for problematic applications, the government provided a general reply, focusing on the implementation of its commitments in the Defence AI strategy. It is encouraging to see a general commitment to transparency and scrutiny in the government’s approach: though more information would be welcome on the modalities envisaged for steps such as publishing information about safeguards and specifying how applications and algorithms will be used - and what mechanisms for compliance and accountability there will be beyond the AI Ethics Advisory Panel (whose remit presumably does not extend beyond an advisory role). In answer to question UIN HL2088 tabled by Lord Clement-Jones on the mechanisms for compliance and oversight for the use of AI in defence, the government notes that it is currently developing frameworks for this. It mentions the appointment of “'Accountable Officers' ensuring oversight for AI activity”, technical oversight from the Defence Artificial Intelligence Centre, and “exploring how the Defence Safety Authority will consider AI”. In answer to question UIN HL2086 the government also draws attention to its obligation to conduct weapons reviews, which whilst necessary are not transparent. Again, it will be essential to understand how and which strong mechanisms for establishing standards and procedures and ensuring compliance will be put in place - and how independent scrutiny, including from parliament, of the specific lines drawn will be ensured. 
We appreciate the government’s commitment in the answers to continue engaging with NGOs, industry and other stakeholders on issues related to AWS and AI in defence.
  • Acknowledging that the Ministry of Defence is developing its work around the Defence AI strategy and clarifying broad principles into specifics and procedures, there is some way to go towards assurance that the government’s approach will meet the ethical, legal and global peace and security concerns around AWS. For the campaign, the boundaries the government has so far indicated it is willing to draw are insufficient - for example in not considering issues around targeting people. We also note that in answer to question UIN HL2089 tabled by Lord Clement-Jones regarding belligerent rhetoric made in the House of Commons around AWS and their presumed role in generating “overwhelming force”, the government did not give its opinion on the risks to peace and security of setting out such a position. A potential to overestimate the capabilities of systems is also indicated in this answer (with the slogan-like statement that “machines are good at doing things right; people are good at doing the right things” in the context of human-machine ‘teaming’). Further debate and scrutiny of the government’s strategic approach and the lines of acceptability and practice it intends to draw in relation to AWS will be needed.

The UK at the Convention on Conventional Weapons (CCW)

At the latest meeting of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems at the Convention on Conventional Weapons, held 25-29 July in Geneva, the UK delegation presented the Defence AI Strategy, and its position that systems “which operate without meaningful and context-appropriate human involvement throughout their lifecycle are unacceptable” and unlawful.

Along with a group of states including Australia, Canada, Japan, Korea and the US, the UK has proposed to the CCW this year that the GGE should focus on elaborating non-binding principles and best practices on autonomous weapons systems, noting that certain systems and uses would be already prohibited by law, and that positive obligations/regulation would be needed to mitigate risks and establish requirements around human control. In its national capacity, the UK delegation is recommending the development of a compendium of best practices, and has produced a list of guiding questions to commence the collective development of this.

Neither of these proposals has gained widespread support or consensus - in the context of a forum in which the adoption of any mandate beyond continued discussion is currently blocked (primarily by Russia), despite the support of over 80 states for the negotiation of a legally binding instrument.

Though the UK continues to oppose legal measures to regulate autonomous weapons systems, it nevertheless continues to contribute serious thinking and useful principles and approaches to discussion around what meaningful human control over weapons systems should involve.

Another step in the right direction was demonstrated by the UK government's endorsement, along with 69 other states, of a Joint Statement on Lethal Autonomous Weapons Systems delivered on 21 October 2022 at the UN General Assembly First Committee on Disarmament and International Security. The statement emphasises the necessity for human beings to exert appropriate control, judgement and involvement in relation to the use of weapons systems in order to ensure any use is in compliance with international law. It also recognises the increased convergence amongst states on key substantive issues, in particular, the approach based on the prohibition of autonomous weapon systems that cannot be used in compliance with IHL, and the regulation of other types of autonomous weapon systems.

The campaign welcomes the UK's recognition that the human element must remain central in the use of force. However, given the prevailing vagueness in relation to when and how such control must be exerted, we consider the UK government's current policy insufficient to maintain human dignity, control and civilian protection in the face of increasing autonomy in weapons systems.


Photo: The House of Lords. Taken on March 11, 2008. Credit: UK Parliament.