AI warfare
23 January 2023

Recommended Innovation Articles (and Commentary) 3: Algorithms at War: The Promise, Peril, and the Limits of Artificial Intelligence


Today’s recommended article covers AI, human-machine teaming, ethics, law, and where complex warfare may be headed with advanced artificial intelligence. It is a recent (2020) piece from a writing team out of the USMC and Quantico. The article is available for download at the link below, but it sits behind a paywall. As with my last post in this series, many readers have library access (on base or through a university), and there are always options for gaining access:

A few key take-aways for those with the time to read this:

The authors unpack how military (and political) bureaucratic structures, behaviors, and culture will unavoidably shape, and often slow or even warp, how AI is integrated into warfighting. This topic might be the primary one for USSPACECOM and the U.S. Space Force, but also for most every other service and Combatant Command, to think about over the next decade of tech experimentation, acquisition, and simulation in war gaming. Frankly, we are all part of, and sometimes stuck within, that bureaucracy, and we all wittingly or unwittingly participate in how it is exercised. How might this influence how you think about AI in the future?

Algorithms that learn, sense, and help machines move through the battlefield have the potential to increase military power. Yet, states will wage algorithmic warfare from inside the labyrinth of the modern defense bureaucracy, and the resulting plans and battles will still involve human judgment, even if indirectly as assumptions layered in lines of code. The institutional and cognitive aspects of national security decision-making will likely distort the expected benefits arising from faster operational tempo, more robust intelligence, and reduced risks, as fewer soldiers operate increasingly autonomous war machines. (p. 528)

There are some outstanding observations on China, GPC, and AI development across this paper, starting on p. 532, along with some more subtle points on Russia (bottom of p. 534). One stark issue that I personally have not seen presented in such a way before: AI, used as the authors suggest, might boost Chinese military power through a digital or collective AI decision-making aggregate that offsets expected US technological and information advantages. There is an implied counter to this: we (the US, partners, and allies) might do the same, erasing any possible gained advantage. Or triggering an AI arms race?

This leads into another key take-away: could dangerous or risky adventures with AI decision-making in kill chains and autonomous systems and networks become another sort of arms race, with worse possibilities than the nuclear one? That Russia has an “ocean multipurpose system” (Status-6) with proposed autonomous nuclear launch ability (p. 534) suggests the logical conclusion that a similar system could be put in the sky, or in space.

It is likely that the promise of AI will likely be offset by the complexities of human institutions and judgment. Modern defense bureaucracies will still wage pitched battles over turf and resources that slow, if not outright undermine, the adoption of AI applications. Service identity, ethos, and norms will intervene to limit the decisions and tasks soldiers assign to machines. In many ways, more transformative technologies like drone swarms are more likely to see rapid adoption for a range of purposes as they fit within the existing procedures assigned to precision strike. Less likely is the rapid, militaries-wide integration of deep learning and human judgment for the purposes of planning future wars, understanding adversary intentions, and ultimately using force in pursuit of political objectives. (p. 542)

At the bottom of p. 539, there are valid critiques of how we do analysis in the military, where we “smooth out the edges” of human friction (bureaucracy, laws, society, rules, emotion, culture, rituals, etc.) that never makes reality as clean as some analytical products suggest. Were an AI to inherit this bias, its outputs would be tainted, yet the speed and implied authority of AI (supposedly objective) might push a senior leader, in a time-sensitive, dangerous security context, to over-trust a system that is flawed by human design. A human bias in the code would enable a second-order human bias of leader trust in the system, leading to a terrible military outcome. What might this mean for the Department of Defense and how we are looking to the future?

A soldier must know the confines within which technological outputs should be questioned, as should commanders and civilian leaders. Moreover, planners and operators must be compelled to harness AI technology within the confines of restraining mechanisms for oversight, something that necessarily implies making narrow AI systems even narrower. Adoption of AI systems must, in short, be shaped by human systems designed to mitigate pathologies of delegated intelligence. Doing so will not only shield militaries from the negative externalities of AI ghosts internally but will also clarify the signals to be interpreted by foreign adversaries. (p. 543)

Again, this is a deep, rewarding article worth your time.

Please share and disseminate as you see fit. Follow me on Medium, LinkedIn, and Twitter to keep up with this series as soon as a new one publishes.

Check out the last one here too:
