Alongside the founders of Google's DeepMind, SpaceX, Tesla and a host of civil society organisations, UNA-UK has pledged to seek a ban on lethal autonomous weapons (LAWS). So far, the pledge, coordinated by the Future of Life Institute, has gained the support of over 230 organisations and over 3,000 academics, executives and politicians.
Signatories call upon governments and leaders to "create a future with strong international norms, regulations and laws against lethal autonomous weapons" and warn of an arms race that the international community is ill prepared to manage. By building civil society, academic and private sector pressure, signatories seek to stigmatise these weapons and prompt national governments to take action to address the attendant security risks.
The initiative comes ahead of this month's meeting of governmental experts in Geneva on the issue of LAWS under the auspices of the Convention on Certain Conventional Weapons (CCW) - a treaty-making body which regulates inhumane weapons. At the previous meeting earlier this year, the vast majority of the 82 countries present were in favour of beginning work in 2019 on a legally-binding instrument to regulate LAWS. However, the CCW is a consensus-bound forum and those supporting a ban on LAWS face tough opposition from powerful states, including France, Israel, Russia, the United Kingdom and the United States.
As an active member of the Campaign to Stop Killer Robots, UNA-UK will continue to campaign for the UK to support a prohibition.
Read the Future of Life pledge below and find the full list of signatories here.
Read our recent update on the UK's policy position.
Lethal Autonomous Weapons Pledge

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.
In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.
Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.
We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.
Photograph: © Crown copyright 2013