Thoughts on Human-Machine Teaming, AI and Changing Warfare… (Part II)
by Ben Zweibelson. Original post: https://benzweibelson.medium.com/thoughts-on-human-machine-teaming-ai-and-changing-warfare-part-ii-a5af56730ebb
The Event Horizon and Technological Revolutions: Breaking the Paradigm
This is potentially the last century in a long string of centuries where humans are the primary decision-makers and actors on battlefields. Figures 1 and 1a represent what will be a gradual shift from human operators being central to decision-making (in the loop) to an ancillary status (on the loop) and subsequently to a reactive, even passive status (off the loop, behind the loop) as technological developments render future battlefields unsafe for both human decision speeds and the presence of human combatants.[1] Already, unmanned aircraft, armored vehicles and robots for a range of tactical security applications are in service or development, replacing human operators with artificial ones that move faster, function in dangerous contexts, and are expendable with respect to the loss of human lives. While current ethical debates pursue where the human must remain “in the kill chain” for decisions of paramount importance, this assumes that the human still possesses superior judgment, intelligence, or other cognitive abilities that narrow AI systems cannot replace. If the coming decades bring advances in general AI that rival or exceed even the smartest human operators, those ethical concerns will be eclipsed by new ones.
Even the best human operator has biological, physical and emotional limits that cannot be extended beyond a certain point. Hypersonic weapons and swarm maneuvers of many AI machines pose a new threat, compounded by the increased speed of production and replacement through advances in 3D printing, cloud networks, constellations of smart machines and more. The natural-born human can be modified through genetics, cybernetic enhancement, and/or human-machine teaming with AI systems to produce a better hybrid operator team.[2] For the coming decades, this will likely be the trend.[3] Yet as Figure 2 presents below, the legacy human-machine teaming relationships framed in Figure 1 will be replaced.
There are two significant transformations that may render most of Figure 1’s human-machine teamings obsolete (see Part I of this series, linked above). These concepts are theoretical, and likely many decades away if possible at all. The first is one where natural-born, ‘standard’ humans are insufficient to participate in future decision-action loops, and the enhanced human (genetically, cybernetically, and/or through AI integration) becomes a Supra sapien that outperforms any regular human opponent by every possible battlefield measurement. The popular term ‘transhumanism’ covers this overlap between technological advancement and the modification of human beings to break free of the slow, natural evolutionary process. Transhumanism need not be directly associated with the rise of superior AI; the two might be better understood as overlapping circles of a Venn diagram, influencing one another.[4]
Technophiles suggest that at some advanced level, modified humans will reach a point where a ‘singularity’ occurs and suddenly these modified humans will exist on a new plane of reality, one arguably inaccessible to natural, legacy human beings, as transhumans would shed all organic habits and potentially become beings of pure information.[5] This ‘transhuman’ leap is illustrated below on the left side. The other extreme transition is depicted below on the right side, where advanced AI reaches and then vastly exceeds general human intelligence. Alternatively termed ‘strong AI’, it remains entirely theoretical, yet leading AI theorists today such as Ray Kurzweil anticipate that, by virtue of its superiority to human cognition, it will be uncontrollable once developed.[6] Such a superior entity would potentially convince societies (by reason, force or manipulation) that it should lead and decide in all matters of importance, including defense.
The concept of an AI singleton comes from Nick Bostrom, who explained that a singleton is an entity that becomes the single decision-making authority at the highest level of human organization. This usually means the entirety of human civilization, confined for the near term to Earth. Such an entity is considered “a set with only one member,”[7] presumably a general AI entity that could rapidly advance from human-level intelligence beyond all biological, genetic or otherwise non-AI limits into a superintelligence well past any human equivalent.[8] This brings into question whether the arrival of a singleton would directly dismantle human free will, or humans’ ability to retain their status as a rational, biological species endowed with self-awareness.[9] A super-intelligent AI singleton would sit above legacy humans in a new decision loop and take control of human civilization, just as a suggested transhuman manifestation might. Both of the concepts in Figure 2 deserve greater explanation below.
Granted, a singleton could simply be an extraordinary individual or group that somehow manages to take total control of human civilization (some future world government). However, in the entire history of human existence, this has happened only in limited, isolated and temporary conditions that fall short of the singleton construct. Bostrom highlights the totality of what a real singleton would be: “[a singleton’s] defining characteristic… is some form of agency that can solve all major global coordination problems. It may, but need not, resemble any familiar form of human governance.”[10] That no singleton composed of one or several humans has ever assumed lasting form indicates that, for now, natural-born Homo sapiens are incapable of such a feat. This may change with the rise of strong AI that can see well past existing cognitive limits. An entity that can outperform the best and brightest humans in every conceivable way, in any contest, at fantastic speed and scale would seem either magical or God-like. Such concepts seem ridiculous, but many involved in AI research forecast these developments as increasingly unavoidable, advancing exponentially over the next decades.[11]
The singleton as depicted in the above figure offers the profound possibility that this entire shared, socially constructed notion of war could be shattered and eclipsed by something beyond our reasoning and comprehension. Regular (narrow) AI may challenge both the character and nature of future war,[12] while a super-intelligent singleton might break it completely. All humans would be “under the loop” in a singleton relationship, where the singleton assumes all essential decision-making that governs and maintains the entire human civilization. This may in turn change what we comprehend as both ‘human’ and ‘civilization’, or, in existential ways, whether Homo sapiens remains a recognizable and surviving species.
The other component in Figure 2 is the transhuman loop, where a transhuman (or more than one) becomes the loop just as a singleton assumes total decision-making control. The transhuman extends the related concept of a ‘singularity’, which overlaps with a singleton in some ways while differing in others. A singularity, first introduced by mathematician Vernor Vinge and popularized in science fiction culture by Ray Kurzweil, would break with the gradual continuum of human-technological progress and open an entirely new stage in human existence.[13] It is considered a game-changing, evolutionary moment in which regular Homo sapiens would transform into a super-intelligent, infinitely enhanced and possibly non-biological, technologically fused entity.[14] Transhumanism envisions “our transcending biology or manipulating matter as a necessary part of the evolutionary process.”[15] The arrival of a technological singularity coincides with a rapid departure of the transhuman entity away from the original biological evolutionary track.
A singularity introduces the concept of ‘transhumanism’, where at a biological, physical, political, sociological and ultimately philosophical level, humanity would break the slow evolutionary barriers and leap beyond the clunky genetic and environmental soup of existence that changes organic life over thousands of years. The singularity could do this in minutes or days instead, depending on the degree of modification or enhancement. Or as Vernor Vinge explains, “biology doesn’t have legs. Eventually, intelligence running on nonbiological substrates will have more power.”[16]
In both circumstances in Figure 2, unmodified, natural-born humans would remain below the decision loops, becoming wards of a transhuman ‘Supra sapien’ protectorate or a singleton super-intelligent artificial entity. Assuming, of course, that humanity would be kept in some sort of existence and contribute something to this new ordered reality, all decision loops for essential strategic or security affairs would become as Figure 2 illustrates. Modified humans, whether enhanced genetically, cybernetically, or otherwise, that achieve a transhuman state would assume the decision loop and advance it in speed, scale and scope beyond anything a regular human could understand or participate in. This is where Musk’s warning applies: human thought and communication would be so slow that it would sound like elongated, simplistic whale song to entities with super-intelligent abilities. Unlike the hypersonic missile dynamic, where the weapon moves faster than humans can respond to the threat, a transhuman or singleton entity comprehends thought as well as achieves action beyond the limits of even the smartest, fastest human operator.
Technological development, as anticipated over the next century or less, may achieve either a singularity, where human beings and their technological tools form a new transhuman entity and arguably take concepts such as species and existence into uncharted areas, or pure general AI may quickly pull itself up by its own bootstraps into a level of existence beyond the comprehension of its human programmers. If either is a potential reality in the decades to come, how will war change? What might future battlefields become? What roles, if any, might human adversaries assume in such a transformed reality? Could humanity be doomed, or potentially enslaved by technologically super-enabled entities that dehumanize societies of regular, natural humans? Could one nation unencumbered by ethical or legal barriers unleash superior AI abilities with devastating effects?[17] What emergent dangers lurk beyond the simplistic planning horizons where militaries contemplate narrow-AI swarms of drones for future defense needs in the 2030s and 2040s?
Whale Songs of Future Battlefields: The Irrelevance of Natural Born Killers…
Previous claims by military theorists that technological developments would ‘change warfare fundamentally’ have rung hollow, whether one considers the development of the stirrup, firearms, the Industrial Revolution or even the First Quantum Revolution and the detonation of atomic weapons in 1945. Throughout all of these transformative periods, warfare characteristics, styles and indeed the scale and scope of war effects have changed, yet humans remained the sole decision-makers and operators central to all war activities. The key distinction in how advanced (likely general AI) intelligent machines and/or human-machine teamings (the rise of transhumanism) may develop is that future battlefields may push human decision-makers and even operators to the sidelines. While the targets of such a war may still include human populations and nations, the battlefield itself may finally become untenable for natural-born human cognition, survivability and capability.
For several centuries, modern militaries have embraced a natural-science-inspired ordering of reality where war is defined in Westphalian (nation-state centric), Clausewitzian (war as a continuation of politics) and what Der Derian artfully termed the ‘Baconian-Cartesian-Newtonian-mechanistic’ model.[18] War has been interpreted alongside a mechanical canonization of manufacturing, navigation, agriculture, medicine and other ‘arts’ in medieval and Renaissance thinking; Bacon’s “patterns and principles,” both early synonyms for rules, “emphasized that such arts [including military strategy] were worthy of the name.”[19] Daston goes on to explain:
The specter of Fortuna haunted early modern treatises like Vauban’s on how to wage war. In no other sphere of human activity is the risk of cataclysmic chaos greater; in no other sphere is the role of uncertainty and chance, for good or ill, more consequential. Yet of the many early modern treatises that attempted to reduce this or that disorderly practice to an art, none were more confident of their rules than those devoted to fortifications. This was in part because fortifications in the early modern period qualified as a branch of mixed mathematics (those branches of mathematics that “mixed” form with various kinds of matter, from light to cannonballs). Like mechanics or optics, it was heavily informed by geometry and, increasingly, by the rational mechanics of projectile motion.[20]
Modern military decision-making remains tightly wedded to what is now several centuries’ worth of tradition, ritualization and indoctrination framing warfare in a ‘hard science’-inspired, systematically logical construct where centers of gravity define strengths and vulnerabilities universally in all possible conflicts, just as principles of war such as ‘mass, speed, maneuver, objective, and simplicity’ are considered “the enduring bedrock of [U.S.] Army doctrine.”[21] This comes from the transformational period in which a feudal-age military profession sought to modernize and embrace the social, informational and political change that accompanied significant technological advances.[22] Daston adds: “In the seventeenth and eighteenth centuries, the most universal and majestic of all laws were the laws of nature such as those formulated by the natural philosopher Isaac Newton in his Philosophiae naturalis principia mathematica in the late seventeenth century and the natural laws codified by jurists such as Hugo Grotius (1583–1645) and Samuel Pufendorf (1632–1694) in search of internationally valid norms for human conduct in an age of global expansion.”[23] Natural sciences as professions would lead this movement, with medieval-oriented militaries quickly falling into step.
The reason for this brief military history lesson is that today’s modern military, which currently integrates and develops artificial intelligence with human operators, still holds to this natural-science ordering of reality, including warfare. War is framed through human understanding and nested in both a scientific (natural science) and political (Westphalian, nation-state centric) framework. This in turn informs nearly everything associated with modern war, including diplomacy, international rules and laws for warfare, treaties, declarations of war, the treatment of noncombatants, neutrality, war crimes and many other economic, social, informational and technical considerations.
Central to our shared understanding of war are the human decision-makers and human operators who inflict acts of organized violence upon adversaries in a precise, ordered and ultimately socially governed manner. Even when strong deviation occurs in war, such actions are comprehended, evaluated and responded to within an overarching human framework. The human decision-maker, as well as all operators cognizant of any action in war, are held responsible, as in the Nuremberg Trials held against defeated Nazi German military representatives in 1945–1946. This dynamic of ‘human-centeredness’ had never been challenged until now, when the roles of artificial intelligence and humans on the battlefield are already entering shaky ground.
An autonomous weapon system, granted full decision-making abilities by human programmers, presents an ethical, moral and legal dilemma over whether it, its programmers or its human operators should be held responsible for something such as a war crime or tragic error during battle.[24] Current AI systems remain too narrow (in terms of AI intelligence), fragile and limited in application to reach this level, for now at least. Increasingly powerful AI will, in the coming decades, replace human operators and, in some respects, even the decision-makers. Humans, unless enhanced significantly, will become too slow and limited on battlefields where only augmented or artificial intelligence can move at the necessary speed, scale and complexity. Many of the ‘natural laws of war’ will be broken or rendered irrelevant in these later, more ethically challenging stages of AI systems with general intelligence equal to or beyond that of their human programmers. The strategic abilities of non-human entities might exceed the comprehension and imagination of how humans have defined war for centuries… the arrogance of assuming that human strategists several centuries ago figured out the true essence of war in some complete, unquestionable way is but one institutional barrier preventing any serious discussion of what is to become of natural-born operators and decision-makers on future battlefields.
The last cavalry charge occurred at least one war too late to make any difference, while many technologically inferior societies suffered horrific losses attempting traditional war tactics and strategies against game-changing developments.[25] Arguably, with enough numbers, adversaries wielding significantly inferior weaponry can overcome a small force equipped with even game-changing technology, as roughly 25,000 Zulu warriors did in defeating 1,800 British and colonial troops at the Battle of Isandlwana (January 22, 1879). Yet the Zulu offensive, largely armed with iron-tipped spears and cow-hide shields, lost several thousand warriors before eventually overwhelming opponents equipped with Martini-Henry breech-loading rifles and 7-pounder mountain guns. Nuclear weaponry may have shifted war toward intentionally limited engagements between nuclear-armed (or partnered) adversaries since the 1950s, but even this nuclear threshold may be in question. Military institutions typically resist change, and are often dragged, kicking and screaming, into adopting new technology while attempting to extend the relevance of the things they identify with but that are no longer relevant in battle.[26]
The last natural-born, genetically unmodified and non-cybernetically enhanced human battlefield participants have not yet been realized, nor has that future battlefield been selected. However, whenever and wherever that happens, those human operators will not win, and they will likely suffer horribly. Nearly a century ago, the first portable metal detector for landmines was invented in Poland and gifted to the British Eighth Army in 1941. The human operator with the landmine detector provides a useful early example of human-machine teaming, yet still one where the human is central to the relationship. The future appears to flip that relationship, shifting humans to the role of the mine detector, with the human ‘on the loop’ or ‘off the loop’, moving too slowly and unable to conceptualize or act in a battlefield context where AI systems are swarming, networking and engaging at speeds unreachable by the fastest human operator. Yet today there is fierce resistance in military culture to handing over significant decision-making to machines.[27] Part of this deals with control and risk, while the ways militaries maintain identity, belief systems and values also factor into how AI technology is being developed.
This points to an interesting change in future war as presented by Der Derian. While he focuses upon technology, information and human perception therein, he defines a virtuous war as having “the technical capability and ethical imperative to threaten and, if necessary, actualize violence from a distance, with no or minimal casualties.”[28] Der Derian frames the origin of virtuous war in the technological ramp-up and eventual Gulf War engagements between the U.S. and its allies against Saddam Hussein’s Iraqi forces, which had forcibly invaded and occupied neighboring Kuwait. Stealth aviation and smart-bomb precision, along with grainy video feeds of enemy targets being struck, saturated the news cycles, offering the promise that future wars would be largely bloodless, with few civilian casualties and low risk to friendly forces using such game-changing technology. War would become technologically rationalized so that humanitarian aspirations of deterrence and peaceful resolution could flourish, and any necessary organized violence would be hygienic, precise and rapid.
Virtuous wars have, according to Der Derian, closed the gap between the imagined or fantasized world of televised war and video-game simulations and the gritty, brutal and harsh reality of actual war. “New technologies of imitation and simulation as well as surveillance and speed have collapsed the geographical distance, chronological duration, the gap itself between the reality and virtuality of war.”[29] With the arrival of virtuous war, Der Derian sees the collapse of Clausewitzian war theory and the demise of the traditional sovereign state, “soon to be a relic for the museum of modernity… [or] has it virtually become the undead, haunting international politics like a spectre?”[30] Der Derian addresses the human social construction of reality and whether the hyper-informational, networked and technologically saturated world of today is drifting toward a new era of struggle between the virtual and the real, the original and the copy, and the copy and the constructed illusion that has no original source at all.[31] Reapplying Der Derian’s construct to this future AI transformation of war: will the removal of humans from both operations and decisions in warfare create the final exercise in virtuous war… perhaps the last gasp of humanity into what war has previously been?
The total removal of humans from battlefields presents many emergent dilemmas, ranging from accountability and ethical as well as legal responsibilities to how intelligent machines can and will interact with human civilians on such future battlefields. Sidestepping the transhuman question for now, a purely artificial battlefield extending upward into all strategic and command-authority decisions would move humans not just off or behind the loop, but under it. Would future wars even appear recognizable, or be rationalizable in existing concepts? Might super-intelligent machines be capable not just of winning future wars as defined by human programmers, but of imagining and bringing into reality forms and functions of warfare that are entirely unprecedented, unrealized and unimagined?
[2] Robert Geraci, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence,” Journal of the American Academy of Religion 76, no. 1 (2008): 154–56.
[3] Wilczek, “The Unity of Intelligence,” 74.
[4] Shatzer, “Fake and Future ‘Humans’: Artificial Intelligence, Transhumanism, and the Question of the Person,” 133–34.
[5] Kevin Shapiro, “This Is Your Brain on Nanobots,” Observations, December 2005, 64–65.
[6] “Merging With the Machines: Information Technology, Artificial Intelligence, and the Law of Exponential Growth, An Interview with Ray Kurzweil, Part 2,” World Future Review 2, no. 2 (May 2010): 59.
[7] Nick Bostrom, “What Is a Singleton?,” 2005, https://nickbostrom.com/fut/singleton.
[8] Bostrom, Superintelligence: Paths, Dangers, Strategies, 26.
[9] Aura-Elena Schussler, “Artificial Intelligence and Mind-Reading Machines- Towards a Future Techno-Panoptic Singularity,” Postmodern Openings 11, no. 4 (2020): 342.
[10] Bostrom, Superintelligence: Paths, Dangers, Strategies, 100–101.
[11] “Merging with the Machines: Information Technology, Artificial Intelligence, and the Law of Exponential Growth, An Interview with Ray Kurzweil, Part 1,” World Future Review 2, no. 1 (March 2010): 61–66; “Merging With the Machines: Information Technology, Artificial Intelligence, and the Law of Exponential Growth, An Interview with Ray Kurzweil, Part 2”; Bostrom, “What Is a Singleton?”; Goertzel, “The Singularity Is Coming.”
[12] Benjamin Jensen, Christopher Whyte, and Scott Cuomo, “Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence,” International Studies Review 22 (2020): 529; Denise Garcia, “Lethal Artificial Intelligence and Change: The Future of International Peace and Security,” International Studies Review 20 (2018): 334.
[13] Maxim Shadurski, “The Singularity and H.G. Wells’s Conception of the World Brain,” Brno Studies in English 46, no. 1 (2020): 229.
[14] Justin Pugh, Lisa Soros, and Kenneth Stanley, “Quality Diversity: A New Frontier for Evolutionary Computation,” Frontiers in Robotics and AI 3, no. 40 (July 2016): 1–3; Shapiro, “This Is Your Brain on Nanobots,” 64–65; Shadurski, “The Singularity and H.G. Wells’s Conception of the World Brain,” 229.
[15] Pugh, Soros, and Stanley, “Quality Diversity: A New Frontier for Evolutionary Computation,” 5.
[16] “Science Fiction as Foresight: An Interview with Vernor Vinge,” Research-Technology Management, February 2017, 13.
[17] Arthur Herman, “Why China Is Winning the War for High Tech,” National Review, November 1, 2021, 32–34; Chris Demchak, “China: Determined to Dominate Cyberspace and AI,” Bulletin of the Atomic Scientists 75, no. 3 (2019): 99–101.
[18] Der Derian, “Virtuous War/Virtual Theory,” 786.
[19] Daston, Rules: A Short History of What We Live By, 76–77.
[20] Daston, 63.
[21] United States Army, Field Manual 3–0, Operations (Washington, DC: Headquarters, Department of the Army, 2001), 4–12, https://www.bits.de/NRANEU/others/amd-us-archive/fm3-0%2801%29.pdf.
[22] Aaron Jackson, The Roots of Military Doctrine: Change and Continuity in Understanding the Practice of Warfare (Fort Leavenworth, Kansas: Combat Studies Institute Press, 2013); Chris Gray, Postmodern War: The New Politics of Conflict (New York: The Guilford Press, 1997), 23; Barry Posen, The Sources of Military Doctrine: France, Britain, and Germany Between the World Wars (Ithaca, New York: Cornell University Press, 1984), 52; Siniša Malešević, “The Organization of Military Violence in the 21st Century,” Organization 24, no. 4 (2017): 456–74; Felix Gilbert, “Machiavelli: The Renaissance of the Art of War,” in Makers of Modern Strategy: From Machiavelli to the Nuclear Age, ed. Peter Paret (Princeton, New Jersey: Princeton University Press, 1986), 11.
[23] Daston, Rules: A Short History of What We Live By, 151.
[24] Noreen Herzfeld, “Can Lethal Autonomous Weapons Be Just?,” Journal of Moral Theology 11, no. Special Issue 1 (2022): 70–86.
[25] This does not discount the strong historic pattern of low-technology, low-resource opponents able to overcome and defeat high-technology, well-resourced adversaries. The successes of the Viet Cong, Taliban, Barbary Pirates and decentralized groups such as the Earth Liberation Front against ‘Superpower’ status nations should not be overlooked.
[26] Jensen, Whyte, and Cuomo, “Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence,” 536.
[27] Scharre, Autonomous Weapons and the Future of War: Army of None, 61.
[28] Der Derian, “Virtuous War/Virtual Theory,” 772.
[29] Der Derian, 774.
[30] Der Derian, 776.
[31] Jean Baudrillard, Simulacra and Simulation, trans. Sheila Glaser (Ann Arbor: The University of Michigan Press, 2001); Ben Zweibelson, “Preferring Copies with No Originals: Does the Army Training Strategy Train to Fail?,” Military Review XCIV, no. 1 (February 2014): 15–25; James Der Derian, “From War 2.0 to Quantum War: The Superpositionality of Global Violence,” Australian Journal of International Affairs 67, no. 5 (2013): 575, 582.