Thoughts on Human-Machine Teaming, AI and Changing Warfare… (Part III)
by Ben Zweibelson. Original post can be found here: https://benzweibelson.medium.com/thoughts-on-human-machine-teaming-ai-and-changing-warfare-part-iii-ce09ad7c5b30
The Calm Before the Storm: How it May Unfold
War has for thousands of years been a human design: as a species, Homo sapiens has waged organized violence upon others of its own kind in a manner unlike any other form of violence in nature. War is a human enterprise, until now. Advanced AI, as well as the infusion of technology into how future humans exist, will disrupt this history of violence. Genetically modified transhumans could gain unprecedented abilities in cognition, speed, strength and resistance, so that future battlefields would be far too dangerous for unmodified, original Homo sapiens. These super soldiers might have cybernetic abilities, or work in tandem with swarms of autonomous and semi-autonomous war machines. However, a tension remains: human vulnerability and intolerance to loss of life could force even the modified humans off the battlefield if hardened machines could assume all necessary roles.
The kinetic qualities of future war are more readily grasped, with science fiction already over-saturated with depictions of terminator robots and swarm armies of smart machines hunting down any inferior human opponent. Where advanced AI and transhuman developments are less clear is in the non-kinetic, informational and social areas of warfare. In past wars, including the fall of Kabul to advancing Taliban forces in 2021, information campaigns have been critical to gaining advantages over more powerful opponents. The Taliban invested for years in an analog, grassroots influence campaign, contacting Afghan security force personnel through phone calls, text messages, social media and local, in-person engagements to gradually erode their will to actively resist. The Taliban waged a sophisticated information campaign that, in the summer of 2021, collapsed Afghan security resistance faster than ever anticipated by the high-tech, sophisticated western security advisors planning the American withdrawal. This momentous campaign was conducted by Taliban operators slowly and largely through analog, point-to-point engagements. Similar endeavors are increasing worldwide, such as Chinese disinformation efforts targeting the Taiwanese population and a documented history of Russian disinformation activities against threats and rivals. Yet most ‘bot’ activity is easy to identify and generates marginal impacts. Narrow AI remains brittle for now, but changes are coming.[1]
Consider a transhuman or general AI entity that could find, engage and convincingly correspond not with one person at a time, but with millions. How quickly could a human population be targeted, saturated with messaging, and engaged in a manner indistinguishable from human conversation, even as the AI entity (or one transhuman engaging far more targets) acts deceptively or manipulatively for security aims? In the rapidly developing areas of genetics, nanotechnology, and viral and microbe technology applied to warfare, how might AI systems incorporate these into human-programmed strategic and operational objectives? For human societies and civilization in general, might future wars with AI able to think and act at or above human performance levels produce both the most fantastic of advantages and, if one is the target of such power, the most devastating of threats?
In Figure 3, a systemic treatment is presented using a quadrant that positions the witting and unwitting on the vertical axis and spans the willing and unwilling on the horizontal axis. Witting humans understand the dynamic relationship between themselves and artificially intelligent entities (or transhuman ones) that together produce a decision-action loop. Unwitting humans can neither comprehend nor fully collaborate within the decision-action loop. Unwitting human participants simply are unable to conceptualize what is actually happening, whether due to speed, intelligence, multiplicity or some other way in which the AI entity decides and acts beyond human limits. The horizontal axis offers the tension between humans willing to participate in and maintain such relationships with AI entities (or transhuman ones) and those that will resist and oppose any movement toward such a dynamic.
Figure 3 illustrates with the red arrow a lateral drift where willing human enterprise with AI entities “in the loop” coincides with productive warfare and security outcomes. If human-machine teaming produces better military results, more of the human participants should grow increasingly willing to continue and strengthen such efforts. Conversely, adversaries that also use human-machine teaming successfully in conflict will push opposing humans in the unwilling direction. Adversarial AI effects, such as ‘fooling’ rival human-machine teams, will provoke further resistance. Outside the traditional battlefield dynamic of ‘us versus them’, humans that perceive human-machine decision-action loops as a growing hazard or danger will assume some of the neo-luddite positions and resist further investment in such technology. Impacts from war or conflict using AI systems may become beacons for technological activism to halt, prevent or reverse such activities.
The purple arrow in Figure 3 illustrates a progressive shift downward from the ‘witting-willing’ quadrant to one of ‘willing-unwitting’, where humans sit atop an increasingly sophisticated decision-action loop run by increasingly powerful AI. Over time, the human ‘on the loop’ will eventually morph out of the witting into the unwitting, falling ‘behind the loop’ of an ever more advanced AI system running the loop. The arrow is placed slightly into the unwilling side, as this transformation will likely create resistance, concern and fear. The blue arrow occupying the bottom two quadrants reflects the rise of super-intelligent (in a general sense) entities that may be a transhuman extension beyond a singularity, or the rise of a pure AI singleton entity. In both instances, human civilization (unmodified, normal humans) would be subservient and in a protected or perhaps oppressed status. The blue arrow spans both the willing and unwilling quadrants, as unmodified human populations could embrace such superior entities or go to some existential war against them.[2]
One final take-away from Figure 3 is how the 21st century might be plotted upon the systemic framing shown. The red arrow portion could apply from 2030 through perhaps the 2050–2070 period, depending on the speed of AI enhancement and development toward general intelligence. The purple arrow might span the 2040–2080 period, while the bottom blue arrow might emerge in the 2075–2100 period. Modern warfare should remain close to the contemporary understanding of organized violence, waged mostly by humans, for the coming decades, although a gradual blending of humans and increasingly intelligent machines will become pronounced as the decades progress. While it is impossible to predict when, the rise of a singularity or a singleton would spell the end of what has been over 40 centuries of a human-defined war paradigm. What would happen next is unfortunately outside of our imagination. The most likely outcome is that whatever humans currently believe is appropriate and rational for warfare will be insufficient, irrelevant or inappropriate for what comes next.
The End is Nigh… Or Probably Not… But Possibly Worse! Or a Future Without Human War?
This article was developed as a thought piece oriented not toward the near-term and immediate security concerns where new technology might make incremental impacts and create opportunities. Rather, this long-term gaze addresses the emergent paths that exist beyond the direct focus areas of most policy makers, strategists and military decision-makers charged with defending national interests today. Humanity has over many centuries experienced a slow rate of change, accelerating exponentially in bursts where game-changing developments (fire, agriculture, writing, money-based economics, the computer) have ushered in profound transformation. Yet within much of that change, warfare has remained a deadly contest between human populations equipped with varying degrees of technology and resources. The weapons were the means to human-determined ends in conflict. New technology represented new means and increased opportunities for creative ways to inflict destruction upon one’s opponent. The next shift with advanced artificial intelligence will first unfold across the battlefields of the next few decades with faster decision-action loops, which will gradually shift humans to atop the loop, and then behind the loop. Lastly, the loop may become a new end in itself, detached entirely from its human creators.
If the rise of advanced AI systems as well as new technological gains for human enhancement spell grave risks for humanity, might some lessons be found in organized resistance to nuclear arms and nuclear-armed nations? In 1953, U.S. President Dwight Eisenhower created the “Atoms for Peace” program, which attempted to ‘demilitarize’ the international image of America as the first nation to use atomic weapons in war and as the leading nuclear-armed nation actively conducting live nuclear tests at the time.[3] This program gained the support of many scientists, and while met with initial skepticism by Soviet leadership, the USSR would soften this stance and begin to negotiate and participate in the peaceful use of nuclear energy. In the 1960s and onward, multiple peace-oriented and anti-nuclear weapons groups, movements and programs gained influence across the world. This in turn inspired many nations that could have invested in nuclear weapons to defer and seek alternatives.
While nuclear weapons development is not a perfect match for how advanced AI development (including autonomous weaponization initiatives) may progress, such a resistance and activist movement could perhaps deter or contain the general AI development that could lead to a singleton entity, or postpone a dangerous ‘singleton arms race’ to field the first one ahead of adversaries.[4] Unlike nuclear weapons, which are a means toward particular ends in foreign policy and defense, the singleton, as well as any singularity that produces transhumans with super-intelligent abilities, may quickly become an end in itself. Given such stark possibilities, a neo-luddism movement has already formed and could use such sensational science fiction fears to further its cause.
Unlike the English Luddites of the 19th century for whom they are named, neo-luddites are a decentralized, leaderless movement of non-affiliated groups and individuals that propose the rejection of select technology, particularly technology that poses tremendous environmental threats or marks a significant departure from a simpler, natural state of existence. Neo-luddites take a philosophical stance similar to anti-war and anti-nuclear groups, where the elimination of harmful technology offers salvation for humanity as one species within the broader ecosystem of planet earth. Mathematician and conflict theorist Anatol Rapoport termed such movements ‘global cataclysmic’, where all war is harmful and the prevention of any war is a necessary goal for all of civilization to pursue.[5] While Rapoport crafted his concept to frame mid-to-late Soviet conflict philosophy through this self-preservation of Marxist society, ‘global cataclysmic’ could also be applied to international entities such as the United Nations concerning conflict, and, in the environmental arena, to explain the radical positions of the eco-terror group ‘Earth Liberation Front’ in the American Pacific Northwest of the 1990s. Unlike a Westphalian or Clausewitzian war philosophy that permits state-on-state and other state-directed acts of policy and war, the global cataclysmic philosophy rejects all wars as dangerous to society. Neo-luddites would swap ‘war’ with ‘dangerous technology’ and foresee human extinction or a planet-wide disaster as a direct, foreseeable outcome of human tinkering with technology that could destroy the world as we know it. The term ‘Luddite’ is also misapplied when some of the most prominent AI figures such as Musk, Gates, Turing and others are lumped into this group because they call public attention to the risks of AI, raising them for purposes entirely different from what neo-luddites seek.[6]
Neo-luddism, as well as potentially more radical and violent groups, could posit a ‘global cataclysmic’ philosophy against advanced technology that would unavoidably include AI systems. The real possibility of a technological shift into a singularity, or the arrival of a singleton entity capable of gaining total control of all defense systems, presents existential concerns that correlate to the deepest aims of these movements. Existentially, the potential subjugation of regular human society under the rule (protection or enslavement) of transhuman or singleton AI may trigger radical actions by some of these groups, particularly if technological developments accelerate within publicly understood narratives. This presents additional security considerations for all nations that pursue advanced artificial intelligence research as well as efforts to integrate such developments into military forces and across civilian society. Those that advocate for transhumanism and achieving a singularity will be in fierce opposition to those proposing neo-luddism perspectives… with the middle ground increasingly scarce in such debates. Existential debate rarely fosters calm and collected discussions.
The widespread acts of economic sabotage, arson, and guerrilla warfare that defined the Earth Liberation Front’s most active period in the late 1990s and early 2000s caused the Federal Bureau of Investigation to declare the group the top ‘domestic terror threat’ in February 2001.[7] The group is decentralized with no formal leadership or hierarchical structure, making it part of a pattern of rhizomic organizations (those with no central form, which cannot be isolated or reduced to impact the larger system) that mitigate or even defeat the most powerful instruments of national power available to traditional Westphalian-styled nation states. If there is to be active resistance toward perceived threats of transhumanism, singularities and singletons through continued technological advancement in the coming decades, it likely will manifest in one of the three forms described below. Indeed, all three will likely develop simultaneously.
First, nation states will themselves pursue diplomatic and international norms, policies and treaties concerning efforts to contain the dangers of advanced artificial intelligence, the technological enhancement of Homo sapiens toward some transhuman singularity that might be weaponized against others, and the specific concerns of narrow and general AI in warfare. There may be state-sponsored as well as grassroots and decentralized movements to curb these technological developments, or efforts to contain them using the scientific community, commercial enterprise or social activism. However, efforts to ban technological development have a poor track record, and such attempts may only push technology experimentation underground or into nations that have no such objections, where it could be even more dangerous.[8]
Second, some groups that posit particular philosophical stances against technology will gain influence, especially if environmental and anti-war groups are successful in highlighting existential threats in these potential technological developments. Lastly, splinter groups and violent extremist movements have grown in numbers, impact and frequency with the rise of the globally connected, social media-infused Information Age of modern society. That these groups might strike to attempt to halt (or in some cases, possibly accelerate) the arrival of transhumanism, the singularity or a singleton is a growing security concern for all. Hugo de Garis is one futurist that envisions such a brutal development, where: “the rise of AI will lead to war…He does not mind that a pitched war may lead to the destruction of the human race because he believes that “godlike” machines will survive afterward.”[9] These extreme positions for and against technological progress and AI present significant existential narratives that might instigate violent acts from either side of the divide.
Humanity today is at the edge of what could be transformative, liberating, destructive or eternally enslaving. Homo sapiens became the deadly marvel of all organic life on this planet through the ability to manipulate reality, interplaying the conceptual in their minds with the real world at their fingertips. Yet now those minds and those fingers might produce outcomes that outpace the ability to manage, control and shape the future, not just for those new things, but for everything. We are ill-equipped to think in exponential terms, preferring a linear-causal explanation of where tomorrow is going based on our collected analysis of yesterdays.[10] If AI development moves exponentially in the coming decades, we may be like the cavemen unable to see beyond the first tools of organized violence, unable to foresee that the spears and axes of the first battles would morph into submarines, tanks and stealth bombers. This is a built-in problem for organic humans and natural evolutionary processes, but likely not an issue for the AI entities we may create.
Humans may create the perfect future world where every possible need is met, and virtually all risks and harms are reduced or eliminated for everyone. They might eliminate hunger, war and misery… or they might unleash devastation unlike anything seen before. Yet the question remains whether the humans that create advanced entities (transhuman or singleton) will be willing and witting participants in these new realities. There are moral, ethical and religious debates on whether a transhuman technological manifestation will enable a superior human or instead degrade the human original, spawning false copies that lack the original uniqueness and ‘humanity’ of the natural-born variant.[11] The same can be argued of a super-intelligent AI that reaches singleton status and dominates a civilization of less capable human creators. Will it share any of the original ‘humanness’ that is essential for protecting and nurturing society beyond the current fragile state of affairs? Or will such developments plunge civilization into extinction, devastating wars, or some sort of organic servitude to new masters? Theoretical physicist and AI researcher Max Tegmark offers:
Perhaps artificial superintelligence will enable life to spread throughout the cosmos and flourish for billions or trillions of years, and perhaps this will be because of decisions we make here, on our planet, in our lifetime… Or humanity may soon go extinct, through some self-inflicted calamity caused by the power of our technology growing faster than the wisdom with which we manage it.[12]
There may be the promise of either of these two vastly different futures, as well as potential wars and devastation waged before either might be accomplished… waged either to prevent or to encourage the arrival of one possible future or the other. More perplexing than all of this, the clever Homo sapiens may never really understand or grasp what might emerge, as the potential of superintelligence paired with future security challenges could be outside human limits of comprehension. In such a future battlefield, the only sounds heard might just be the fading chorus of whale songs slowly wheezing away into ignored nothingness. Giacomello’s similar warning reinforces the paramount crossroads we are approaching: “All in all, maintaining human on the loop, let alone in the loop, may turn into a strategic (and deadly) disadvantage, or be strategically illogical. In the end, the loop is indeed too small for the both of them.”[13] This deadly dance of nature and the artificial may end in myriad tragic ways, including several where even the human ‘winner’ ends up losing what makes them decidedly human.
[1] Layton, “Fighting Artificial Intelligence Battles: Operational Concepts for Future AI-Enabled Wars,” 83.
[2] Geraci, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence,” 157.
[3] Roman Khandozhko, “Quantum Tunneling Through the Iron Curtain,” Cahiers du Monde Russe 60, no. 2/3 (September 2019): 370–71.
[4] Demchak, “China: Determined to Dominate Cyberspace and AI,” 99–101.
[5] Anatol Rapoport, “Editor’s Introduction to On War,” in On War, by Carl Von Clausewitz, ed. Anatol Rapoport (New York: Penguin Books, 1968).
[6] Stuart Russell, “The Purpose Put Into The Machine,” in Possible Minds: Twenty-Five Ways of Looking at AI, ed. John Brockman (New York: Penguin Press, 2019), 27.
[7] “Eco-Terrorism Specifically Examining the Earth Liberation Front and the Animal Liberation Front” (Washington, D.C.: U.S. Government Printing Office, May 18, 2005), https://www.govinfo.gov/content/pkg/CHRG-109shrg32209/html/CHRG-109shrg32209.htm.
[8] “Merging with the Machines: Information Technology, Artificial Intelligence, and the Law of Exponential Growth, An Interview with Ray Kurzweil, Part 1,” 64; Giacomello, “The War of ‘Intelligent’ Machines May Be Inevitable,” 282–83; Garcia, “Lethal Artificial Intelligence and Change: The Future of International Peace and Security,” 339.
[9] Geraci, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence,” 157.
[10] Ben Zweibelson, “Linear and Nonlinear Thinking: Beyond Reverse-Engineering,” The Canadian Military Journal 16, no. 2 (2016): 27–35; Ben Zweibelson, “One Piece at a Time: Why Linear Planning and Institutionalisms Promote Military Campaign Failures,” Defence Studies Journal 15, no. 4 (December 14, 2015): 360–75; Ben Zweibelson, “An Awkward Tango: Pairing Traditional Military Planning to Design and Why It Currently Fails to Work,” The Journal of Military and Strategic Studies 16, no. 1 (2015): 11–41.
[11] Hohyun Sohn, “Singularity Theodicy and Immortality,” Religions 10, no. 165 (2019): 4–6; Shatzer, “Fake and Future ‘Humans’: Artificial Intelligence, Transhumanism, and the Question of the Person,” 137–39.
[12] Max Tegmark, “Let’s Aspire to More Than Making Ourselves Obsolete,” in Possible Minds: Twenty-Five Ways of Looking at AI, ed. John Brockman (New York: Penguin Press, 2019), 79.
[13] Giacomello, “The War of ‘Intelligent’ Machines May Be Inevitable,” 284.