A major South Korean tech institution is partnering with one of the country’s largest weapons manufacturers to join “the global competition to develop autonomous arms”. On April 5, more than 50 artificial intelligence researchers and experts announced a boycott of the university unless the project is stopped.
The target of the boycott is the prestigious and highly influential Korea Advanced Institute of Science and Technology (KAIST). The institute regularly partners with industry in ways that rarely raise ethical concerns. But when it announced an alliance with Hanwha Systems, one of the country’s biggest arms companies, to develop weapons guided by artificial intelligence, the global reaction was swift, strong, and damning.
Since KAIST relies heavily on its interconnections with technology experts from around the globe, those same experts decided one of the most effective means of pushing back would be a boycott of all support for KAIST. They picked April 5 as the target date to launch that boycott.
In their announcement, more than 50 artificial intelligence leaders from nearly 30 countries published a letter denouncing a move that could fuel a race to develop “killer robots”. In that letter, they said,
“At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST [Korea Advanced Institute of Science and Technology] looks to accelerate the arms race to develop such weapons.
“If developed, autonomous weapons will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror. Despots and terrorists could use them against innocent populations, removing any ethical restraints. This Pandora's box will be hard to close if it is opened. As with other technologies banned in the past like blinding lasers, we can simply decide not to develop them. We urge KAIST to follow this path, and work instead on uses of A.I. to improve and not harm human lives.”
Hanwha Systems, KAIST’s partner in the project, already manufactures deadly cluster munitions. Such devices are not only capable of causing major loss of human life and property; they are also prohibited under a U.N. convention signed by 120 countries. While the AI research collaboration might not produce equally destructive weapons, Hanwha’s track record suggests a far more serious outcome.
The letter’s signers said they will “boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control.” The group also vows that no one among them will “visit KAIST, host visitors from KAIST, or contribute to any research project involving KAIST.”
The strong response to the plan has apparently already embarrassed KAIST into deleting its announcement of the Hanwha partnership. Even so, there is no sign that KAIST intends to abandon the arrangement.
KAIST’s decision comes as the U.N. convenes this week in Geneva to discuss the militarization of artificial intelligence. Activists hope the meeting will end with an agreement under which countries ban the development of such weapons within their own borders and refrain from buying or reselling the technology.