Today’s post comes from Penn Press’s Marketing Operations Manager, Tracy Kellmer, continuing a blog series in which Tracy explores one of the books currently for 75% off in the Franklin’s Faves section of our website! The Franklin’s Faves selection will rotate on September 3, so you have one month to pick up your copy of today’s featured book, Fighting Machines by Dan Saxon, or any of the other great books currently available!
One of the perks of working at a university press, and especially Penn Press, is that I am exposed to excellent scholarly treatments of a wide variety of subjects. As a member of the Marketing department, I get to know a bit about every book we publish through my engagement with its metadata, such as keywords and subject codes, and its descriptive copy, which appears in everything from seasonal catalogs to online retail sites (we know the one you’re thinking of). But a marketer has short-lived romances with books on the front list: every six months I have a new season’s worth of books to get to know and a new seasonal catalog to publish. That’s why I love Franklin’s Faves! I get a chance to rediscover a book that intrigued me the first time around. I hope you enjoy my rediscoveries as much as I do!
What I picked:
Fighting Machines: Autonomous Weapons and Human Dignity by Dan Saxon
Why I picked it:
The idea that technological innovation is driven by the purveyors of food, sex, and death is practically a truism by now. In pop culture, the 1985 movie Real Genius and the more recent TV show The Big Bang Theory made sly comedy from the super-smart scientists who manage to achieve breakthrough technological innovation—with funding from the US government—while remaining clueless about its use as a weapon. Today, artificial intelligence is everywhere in our media landscape as the “newest” thing and, naturally, I wondered how, and for how long, it has already been in use as a weapon. Fighting Machines, by Dan Saxon, was published in December of 2021, which means the research was finished and the manuscript submitted to us sometime in 2020. That also means AI has been a weapon for at least five years already, and I wanted to learn more.
What I discovered:
The US Department of Defense defines an autonomous weapon as one that “can select and engage targets without further human input after activation.” The International Committee of the Red Cross distinguishes an autonomous weapon as one in which “the exact timing, location, and/or nature of the attack is not known to the user since it is self-initiated by the weapon, which is—in turn—triggered by the environment.” Saxon provides these definitions in the Introduction, where he also characterizes his book as a “predictive legal history.” That is, his aim is to anticipate how these kinds of weapons may be used in the future without running afoul of international law, especially since the decision to kill a person may be transferred from a human being “to the artificial intelligence that controls lethal autonomous weapons systems [LAWS],” given the speed with which these weapons can deploy and engage in a volatile environment.
However, and not surprisingly, the cat is already out of the bag. Ground-based LAWS are already in use by South Korea and Israel, and Russia is developing a robot capable of destroying stationary or moving targets. The United Kingdom’s Royal Air Force used a missile that fits the definition of a LAWS in missions over Afghanistan, Syria, Iraq, and Lebanon. The US Air Force and Navy both have “swarms” of unmanned autonomous aerial vehicles designed to overwhelm an enemy. Israel has autonomous bombs that can be launched from fighter planes. The US Navy has a swarm of thirteen autonomous boats, each “directed by artificial intelligence software—originally developed for the Mars Rover—called Control Architecture for Robotic Agent Command and Sensing (CARACAS), which allows it to function autonomously as part of a swarm and react to a changing environment.” The above are just a sampling of what Saxon lists—and remember, this book was published in 2021. Anyone out there also feeling that Skynet maybe isn’t merely science fiction anymore?
So now that we know what LAWS are, and that they already exist, how would their use break international laws? Saxon devotes a chapter to each of three kinds of international law as they relate to LAWS: humanitarian law, which covers the rules of engagement, or war; international human rights law, which covers states’ treatment of their own citizens, whether in enforcing the law or rooting out terrorism; and international criminal law, which holds individuals and/or states accountable for actions deemed to be crimes, such as genocide or slavery. Saxon thinks through examples of the use of LAWS in war, law enforcement, and criminal activities and makes a convincing argument that if a LAWS takes a human life as a result of a decision made by the software, and not by the person who activated or deployed it, then an international law has been broken.
Saxon demonstrates how international law is based on the concept of human dignity. He traces how the idea of human dignity was defined by Western philosophers such as Locke, Rousseau, and Kant, and how human dignity was crucial to treaties, conventions, and constitutions, including the Preamble to the Charter of the United Nations. Through his reading of these sources, dignity emerges from the same source that makes a human a human, which is our capacity to be a “moral agent.” If a person is robbed of the decision to make a right or wrong action, or robbed of the choice to not act in a situation when the consequences of not acting can be right or wrong, then they are in fact robbed of their dignity, their very human-ness. International law exists to protect the fundamental worth and dignity of each and every human being on earth.
In the case of LAWS, if the operator of a LAWS presses a button and someone dies as a result of the machine’s thinking, and the operator does not have the opportunity to target, capture, or override at any point, then how is the operator still a human? International law should be protecting the operator’s right to be a moral agent, to assume the decision, the responsibility, for those lives that may be lost as a result of the LAWS’s actions. The life or death decisions made by the LAWS render the operator an automaton, a zombie. Even if deploying LAWS in this manner “saves lives” in the short term, in the long term, transferring the decision to take a human life to an artificial intelligence program diminishes human dignity and therefore cheapens humanity’s worth writ large.
Saxon’s diagnosis is that LAWS simply compute and communicate too fast for any human to “keep up” or “stay in the loop.” In other words, the speed of technology is the problem. He recommends that, to respect human dignity and avoid breaking international law, humans must be involved at every step of the design, development, procurement, and deployment of LAWS, and that a manual override always be available. Obviously this will negate the speed advantage of LAWS, but not their other advantages. And it would ensure a human-machine interdependent relationship distributed across many points of decision-making and protect the only human life worth protecting: one with dignity.
Favorite bit:
My favorite part of the book was Dan Saxon’s pragmatism. Not only is he incredibly knowledgeable about philosophy, technology, and international law, but he also has extensive experience with the United Nations dealing with human nature and real-world international crimes, and he is neither Luddite nor techno-optimist, neither misanthropic nor pollyannaish. International law is not perfect, but we are faced with a potential threat to humanity, on a global scale, for which ethics and policy lag far behind technological advancement and deployment. International law requires merely that states and international actors and organizations agree to develop and enforce the kinds of safeguards Saxon recommends. It was quite clear to me that he presents a feasible, if difficult, method to ensure we don’t end up with Skynet. International cooperation has prevented a global nuclear apocalypse thus far, so given Saxon’s clear-eyed assessment, diagnosis, and real-world experience, I was not nearly as despairing at the end of the book as I thought I would be, given how it started.