Artificial Intelligence Experts Pledge Not to Help Build Terminators

On July 18, 2018, individuals, companies and other organizations banded together in a rare show of international unity to sign a joint pledge declaring they would “neither participate in nor support the development, manufacture, trade or use of lethal autonomous weapons”.

The pledge also said the signatories would call on governments to “create a future with strong international norms, regulations and laws” to protect against what old science fiction movies once called “the rise of the machines”.

The agreement was developed by the Future of Life Institute (FLI) and reads in full as follows:

Lethal Autonomous Weapons Pledge

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

The pledge has been signed by more than 160 companies and organizations from three dozen countries, along with over 2,400 individuals from 90 countries. The Future of Life Institute also noted, when publishing the pledge, that separately 26 countries in the United Nations have “explicitly endorsed the call for a ban on lethal autonomous weapon systems”. These include: Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China, Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, Ghana, Guatemala, Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, Zimbabwe.

When the pledge was unveiled on July 18 at the annual International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, Sweden, FLI President and MIT Professor Max Tegmark said, “I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world—if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons and should be dealt with in the same way.”

Anthony Aguirre, one of the pledge’s signatories and a professor at the University of California, Santa Cruz, told CNN, “We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody.”

Part of the power of the pledge lies in what it says about the signers, of course. Another part lies in how it can shape public opinion of those who have not signed but might still be affected by it. As signatory and AI expert Yoshua Bengio of the Montreal Institute for Learning Algorithms put it, this kind of pledge can act as a public shaming mechanism for those who have not yet signed on. In an interview with The Guardian, he said, “This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the U.S. did not sign the treaty banning land mines.”

Whether this agreement will have the same effect is unfortunately questionable. For one thing, land mines have little value other than to blow people up. They are therefore, by definition, “bad things”, for which pledges and public shaming tend to work well. For artificial intelligence, the problem is very different. Much of the technology underlying AI, whether code, electronics, sensors, or adaptive machinery, can be used as much for good as for ill. That means AI development will keep producing subsystems that could eventually become part of “killer robots”, even if the companies involved insist they are doing nothing to support building such things.

A second issue is that signing this pledge is a little like shutting the barn door after the horses have bolted. The U.S. is already well down the path of developing these kinds of weapons, with AI-like technology already embedded in everything from machine systems controls to targeting technologies.

Classified, “deep black” American military technology is often decades ahead of publicly known technology. DARPA, the Defense Advanced Research Projects Agency, has a vast budget for creating future weapons, and its classified counterparts are reputed to have even greater budgets to make the weapons of the future a reality today.

Other countries, such as China and Russia, are also well on their way to developing such weapons. Signing a pledge won’t make them undo that work.

Finally, unlike nuclear weapons, many of the military applications of AI will be promoted as “defense” rather than “offense”, and it is offense that the pledge seems to single out as the bad thing. Using AI for “defense” sounds as though it would be acceptable under the pledge, even if “Killer Robots for Peace” doesn’t exactly have a positive ring to it.

Still, the signing of this kind of agreement raises awareness of what AI must not become: yet another way to wage war on our fellow humans. Though it will be hard to keep AI out of warfare entirely, perhaps, with awareness and intent, the technology can be kept out of the ugliest weapons for at least some time to come.

The next step for those opposed to the use of AI in autonomous weapons systems would be to pledge to work together to develop the means to detect and defeat such technology.

With the fusion of electronic and genetic technology, we may not even be able to recognize what is AI, what is a genetically enhanced human, and what is partially human but mostly cyborg. Unverified reports from ordinary soldiers of hyper-lethal, bullet-proof “super soldiers” operating in Iraq have led some to suggest that such weapons already exist. If so, the Iraqi people were defenseless against them. What will we do when they are unleashed on us?