Professionalism/The 2015 Open Letter on Autonomous Weapons

A case study of the professional ethics behind the 2015 Open Letter on Autonomous Weapons.

Produced in association with STS 4600 at the University of Virginia.

Introduction

The letter specifically addresses “offensive autonomous weapons” that “select and engage targets without human intervention.” This casebook will not deeply explore other uses of A.I. in combat or defense; it focuses on the main argument of the open letter, the current status of autonomous weapons, and historical examples that may inform future action on this subject.

The terms autonomous weapons and Artificial Intelligence (A.I.) weapons are used interchangeably throughout, as they are comparable in this analysis of human-independent weaponry.

The Letter

“In summary, we believe that A.I. has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military A.I. arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”[1]

Announced in July 2015 at the opening of the International Joint Conferences on Artificial Intelligence (IJCAI), the letter campaigns for a ban on the development of offensive autonomous weapons. To date, the letter has been signed by 4,502 A.I. and robotics researchers and over 26,000 others, including professors at major universities, professionals at major computer science firms, and prominent figures such as Stephen Hawking, Elon Musk, and Steve Wozniak. The letter’s main argument is that autonomous weapons will be viable within years, and that while they offer major advantages in battle, they are so easy to use that abuse is inevitable. The letter compares the weapons to multiple analogous historical cases and warns that, if developed, A.I. weapons will be cheaper than automatic guns and more effective than nuclear weapons.

How the Technology Works

There are many different types of autonomous weapons: drones, firearms, tanks, etc. What unifies these weapons is their ability to operate without direct human interaction.

Image classification

The base technology required for weapons to evolve into autonomous weapons is the ability to take visual input and identify targets. This is accomplished by using cameras to capture real-time video and running that video through image recognition algorithms to find potential targets. For these algorithms, identifying any given human is easy, but distinguishing combatants from non-combatants is far more difficult, because that distinction rests on contextual and situational information that changes from case to case.
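
To make this pipeline concrete, the sketch below runs OpenCV’s bundled Haar-cascade face detector over a live camera feed and boxes every detected person. It is an illustrative sketch only, assuming a webcam at index 0 and the cascade file that ships with the opencv-python package; it stops at “a human is present” and makes no attempt at the far harder combatant/non-combatant judgment.

```python
# Minimal sketch of the "capture video -> run recognition -> flag targets" loop.
# Assumes OpenCV (pip install opencv-python) and a webcam at index 0.
import cv2

# Haar-cascade face detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is a bounding box (x, y, width, height).
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Detecting *a* person is the easy part; deciding whether that person
        # is a combatant requires context no bounding box provides.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```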

State of Autonomous Weapons

“Artificial Intelligence technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades” [1]

Since the theory behind rudimentary A.I. weapons is so simple, there are many examples of home-made versions. A student in a high school introductory programming and robotics course developed a pan-and-tilt turreted NERF gun that could aim and shoot using only facial recognition.[2] Other YouTube videos offer schematics and assembly instructions for do-it-yourself NERF and airsoft turrets.[3][4] These examples were all implemented using cheap, easily accessible circuit boards, motors, and programming platforms.
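
The aiming side of these hobby builds is similarly simple. The sketch below is hypothetical, not taken from any of the cited projects: it applies proportional control, nudging pan and tilt servo angles by an amount scaled to how far the detected bounding box sits from the center of the frame. The set_servo function is a stand-in for whatever motor driver a particular build would use.

```python
# Hypothetical pan/tilt tracking logic for a hobby turret.
# set_servo() stands in for a real motor driver (e.g. a PWM servo library).

FRAME_W, FRAME_H = 640, 480   # camera resolution (assumed)
GAIN = 0.05                   # degrees of correction per pixel of error

pan_angle, tilt_angle = 90.0, 90.0  # start centered (servo range 0-180)

def set_servo(channel: str, angle: float) -> None:
    """Placeholder for hardware-specific servo control."""
    print(f"{channel} -> {angle:.1f} deg")

def track(target_box: tuple[int, int, int, int]) -> None:
    """Steer the turret so the target's bounding box drifts toward center."""
    global pan_angle, tilt_angle
    x, y, w, h = target_box
    err_x = (x + w / 2) - FRAME_W / 2   # positive: target right of center
    err_y = (y + h / 2) - FRAME_H / 2   # positive: target below center
    pan_angle = min(180.0, max(0.0, pan_angle + GAIN * err_x))
    tilt_angle = min(180.0, max(0.0, tilt_angle - GAIN * err_y))
    set_servo("pan", pan_angle)
    set_servo("tilt", tilt_angle)

track((400, 150, 80, 80))  # example detection from a vision loop
```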

Higher-level A.I. weapons research has also continued since the release of the open letter. From 2016 to 2019 the Pentagon budgeted $18 billion for autonomous weapons technology, and it has ongoing contracts with Amazon and Microsoft to develop A.I. as the “centerpiece of its weapons strategy”.[5] Robert Work, former Deputy Secretary of Defense, is a strong advocate of A.I. in warfare. He encouraged the Department of Defense to invest in A.I. as a way to “have an advantage as we start the competition”, rhetoric uncannily reminiscent of the Cold War and a new era of the arms race. As early as 2016, military tests showed drone-mounted facial recognition outperforming humans at identifying non-combatants in specific scenarios.[5][6] While the drone was not given the authority to engage those targets, implementing that aspect of autonomous weapons would be remarkably simple.

Historic Comparisons: Technology

“Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms…. [they] will become the Kalashnikovs of tomorrow.” [1]

The 2015 letter provides multiple historical cases that give insight into the potential impact of autonomous weapons as the “third revolution in warfare”. Paul Scharre, author of Army of None, states that these weapon systems could create “flash wars” in which combat between the weapons happens at such a pace that humans cannot keep up or consider the implications of each attack.[7] This mirrors the second revolution in warfare, nuclear weapons. Nuclear arms made it possible to destroy entire cities in an instant: roughly 140,000 people died after the uranium bomb was dropped on Hiroshima, and 75,000 at Nagasaki.[8] Flash wars could quickly become similarly destructive. Another parallel is the potential for an arms race: after World War II, the invention of nuclear weapons set off a competition for supremacy, with massive increases in both their development and production.

[Figure: Nuclear weapons stockpiles during the arms race]

The signatories of the 2015 letter believe that the arms race potential for autonomous weapons is even greater, “virtually inevitable”. They suggest that while the nuclear arms race mainly involved just the US and the USSR, an autonomous weapons arms race could involve many more countries. The barriers to entering this race will be much lower since, “unlike nuclear weapons, [autonomous weapons] require no costly or hard-to-obtain raw materials.”[1]

The letter also references Kalashnikov rifles, the most popular gun design in the world. Of the estimated 500 million firearms worldwide, 100 million are Kalashnikovs, with about 75 million of those AK-47s.[9] The design’s advantages include simplicity and a service life of anywhere from 20 to 40 years; the gun is easy to manufacture, use, and repair, as well as reliable and cheap.[10] In his “Weaponomics” paper, Killicoat found AK-47s trading for only $40 in Eastern Europe and Asia and as little as $12 in Africa and the Middle East.[9] Originally developed for the Soviet army, the AK-47’s low cost and huge supply have led to its use by revolutionaries, terrorists, cartels, and criminals.[11] The letter asserts that this problem will be replicated with autonomous weapons: their small cost will create an endless supply, and their creators will be unable to keep them out of nefarious hands. Given the potential power of these weapons, the danger of their use in terrorism is enormous.[1]

Benefits and Drawbacks

“The key question for humanity today is whether to start a global A.I. arms race or to prevent it from starting.” [1]

The open letter provides one example of the double-edged effect of autonomous weapons. Replacing soldiers with machines will reduce casualties and costs for the owner.[12] The latent effect, however, is a lower threshold for entering battle: with fewer American lives at risk, a nation may not only be drawn into more conflicts but also enter more unsavory ones.

Autonomous weapons might also reduce or remove certain human biases. A machine free of racism or cultural bias could behave differently in combat, as could one without survival instincts: it would lack the “shoot-first, ask questions later” mentality and would not be affected by adrenaline or fear when making decisions.[12] These benefits could also be realized, to a lesser degree, in systems that are not weaponized. And if the global arms race has already begun, investment in the technology would also yield better defenses against A.I. weapons and prevent a future technological disadvantage.

The drawbacks of developing these weapons are more numerous. A.I. weapons’ low cost makes mass production and acquisition by unsavory parties easy: a single weaponized drone could be deployed by an individual with the right contacts, let alone by a terrorist organization or hostile government. Such weapons are also ideally suited for acts of terrorism, dictatorial control, ethnic cleansing, and assassination.[1] Software is also fallible and may carry built-in biases from its producers. Amazon’s facial recognition software has notably worse accuracy when identifying women and people of color, and while Amazon recommends double-checking results against a confidence threshold, the Washington County Sheriff’s Office in Oregon, an identified customer of Rekognition, said “we do not set nor do we utilize a confidence threshold”.[13] Governing weapons with code also increases the risk of coding errors and hacking. And when A.I. fails, the issue of accountability arises, as discussed below.
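
To illustrate what that quote means in practice, the sketch below shows the kind of confidence-threshold filter Amazon recommends applying to recognition results. The data structures here are illustrative stand-ins, not the actual Rekognition API, and the 99% cutoff reflects Amazon’s publicized recommendation for law enforcement use.

```python
# Illustrative sketch: filtering face-match results by a confidence threshold.
# The dictionaries mimic the shape of a recognition response; they are not
# the Rekognition API itself.

CONFIDENCE_THRESHOLD = 99.0  # Amazon's recommended cutoff for law enforcement

matches = [
    {"name": "Person A", "confidence": 99.4},
    {"name": "Person B", "confidence": 87.2},  # discarded by the threshold
    {"name": "Person C", "confidence": 63.5},  # discarded by the threshold
]

# Without a threshold, all three would be treated as hits;
# with one, only high-confidence matches survive.
actionable = [m for m in matches if m["confidence"] >= CONFIDENCE_THRESHOLD]
print(actionable)  # [{'name': 'Person A', 'confidence': 99.4}]
```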

Historic Comparisons: Regulation

“Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.” [1]

London Naval Treaty 1930

The U.S. helped negotiate the London Naval Treaty after World War I to ban unrestricted submarine warfare against civilian ships. The practice has parallels to autonomous weapons: at the time of the treaty, it was considered a futuristic weapon with devastating effects. The treaty was signed by all major powers, yet after the attack on Pearl Harbor it took only six hours for the United States to violate the 11-year-old treaty and attack Japan’s merchant fleet.[14] The practice was generally used by all combatants in World War II.[15] For autonomous weapons, would a treaty remain practical once the technology is no longer futuristic? Would countries continue to respect a ban if their adversaries, including non-government groups, used these weapons?

Asilomar Conference on Recombinant DNA 1975

The Asilomar conference is a more successful example of international regulation. In 1975, a group of 140 professionals including biologists, lawyers, and physicians came together to draft guidelines for the safe use of recombinant DNA technology. Prior to the conference, many biologists had halted experiments out of fear of the potential dangers. The guidelines created at the conference allowed scientists to continue their research safely, which increased both public interest in and knowledge about life processes. The conference set a precedent of the people creating a technology being involved in developing its regulation.[16]

Ethical Considerations

The key issue in the ethics of autonomous weapons is whom to blame when things go wrong, meaning here attacking or killing a target erroneously, or attacking a civilian.

Responsibility

There is a complicated chain of command from the beginning of development to the final firing of an autonomous weapon. For human-in-the-loop or human-on-the-loop weapons, a person oversees the weapon: if the weapon misfires and they fail to prevent it, is it their fault? Or is it the fault of the programmers who built the faulty recognition algorithm that caused the weapon to fire erroneously? What if the company developing the algorithm was under time and money constraints, pushing its managers to demand a shoddy algorithm?

Further questions of responsibility come from how easily autonomous weapons can be replicated once they are out in public. If the researchers’ predictions are to be trusted, autonomous weapons will be the next revolution in warfare, following gunpowder and nuclear weapons; such an easily copied yet dangerous weapon sharpens the question of whom to blame.

Because responsibility for autonomous weapons is so blurred, the A.I. researchers co-signing the letter have pledged to take no part in further research on autonomous weapons.

Professional Ethics

Engineers entering this space may wonder what action they can take to prevent escalation. Google’s engineers provide a good example. When it leaked that Google was helping the US government with “Project Maven,” which used Google’s image recognition software to analyze military drone footage, engineers within the company protested en masse, eventually pressuring Google to withdraw from the deal.

Conclusion

As novel technologies, autonomous weapons and A.I. research have the potential to greatly benefit humanity or to greatly harm it. The authors and signatories of the Open Letter on Autonomous Weapons believe that unrestricted A.I. weapons development would be disastrous on a global scale no matter the intentions behind it. They campaign for a ban on offensive autonomous weapons, but the ease of manufacturing these weapons and their potential for effective, lethal tactics may make such a ban impossible to enforce.

Future exploration could examine the potential for these weapons to be hacked or stolen, the future of autonomous weapon technology including defensive applications, and the current global status of the technology.

References

  1. Open Letter on Autonomous Weapons. (2015, July 28). Future of Life Institute. https://futureoflife.org/open-letter-autonomous-weapons/
  2. Odom, C. (2018). Facial Recognition Nerf Gun with Robotic-Controlled Autonomous Aiming Take 2 by Kairo 2018 [Video]. https://www.youtube.com/watch?v=5OiBJ2UivAs
  3. Linus Tech Tips. (2016). DIY Autonomous Nerf Turret [Video]. https://www.youtube.com/watch?v=Xz5ZvW98HRs
  4. Hacker Shack. (2016). How to Make a Raspberry Pi Motion Tracking Airsoft / Nerf Turret [Video]. https://www.youtube.com/watch?v=HoRPWUl_sF8
  5. Rosenberg, M., & Markoff, J. (2016, October 25). The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own. The New York Times. https://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html?module=inline
  6. Bumiller, E. (2010, April 5). Video Shows 2007 Air Attack in Baghdad That Killed Photographer. The New York Times. https://www.nytimes.com/2010/04/06/world/middleeast/06baghdad.html?_r=0&module=inline
  7. Shapiro, A. (2018, April 24). Autonomous Weapons Would Take Warfare To A New Domain, Without Humans. NPR. https://www.npr.org/sections/alltechconsidered/2018/04/23/604438311/autonomous-weapons-would-take-warfare-to-a-new-domain-without-humans
  8. Warren, W. (2018, April 4). The Atomic Bomb - The Weapons That Changed The World. Forces Network. https://www.forces.net/radio/atomic-bomb-weapons-changed-world
  9. Killicoat, P. (2006). Weaponomics: The Economics of Small Arms. CSAE Working Paper Series, Centre for the Study of African Economies, University of Oxford.
  10. Trex, E. (2011, April 7). What Made The AK-47 So Popular? Mental Floss. http://mentalfloss.com/article/27455/what-made-ak-47-so-popular
  11. Oxfam. (2006, June 26). The AK-47: The world's favourite killing machine. https://www.oxfam.de/system/files/20060623_theak47_200kb.pdf
  12. Etzioni, A., & Etzioni, O. (2017, May). Pros and Cons of Autonomous Weapons Systems. Military Review, Army University Press. https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/
  13. Quach, K. (2019, February 3). Oh dear! Amazon’s facial recognition is racist and sexist – and there’s a JLaw deep fake that will make you want to tear out your eyes. The Register. https://www.theregister.co.uk/2019/02/03/ai_roundup_010219/
  14. Rosenberg, M., & Markoff, J. (2016, October 25). The Pentagon's 'Terminator Conundrum': Robots That Could Kill on Their Own. The New York Times. https://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html?module=inline
  15. Hickman, K. (2017, March 6). Unrestricted Submarine Warfare. ThoughtCo. https://www.thoughtco.com/unrestricted-submarine-warfare-p2-2361020
  16. Berg, P., Baltimore, D., Brenner, S., Roblin, R. O., & Singer, M. F. (1975). Asilomar conference on DNA recombinant molecules. Nature, 255(5508), 442-444. doi:10.1038/255442a0