9 May 2023

“Artificial Intelligence and Mind-reading Machines” [Recommended Innovation Articles (and Commentary) #25]


Original post can be found here:

The article being recommended is by Aura-Elena Schussler and titled “Artificial Intelligence and Mind-reading Machines: Towards a Future Techno-Panoptic Singularity.” It is available at the academic journal Postmodern Openings, 2022, Volume 11, Issue 4, pp. 334–346. There is a link below taking you to the article, but unfortunately this one is behind an academic paywall. I always recommend military readers reach out to their local base librarian for a PDF; for everyone else, try your local library or university!

This article is an interesting dive into the future of AI: general intelligence, not narrow AI. We are all familiar with narrow AI; the computer that beat the best human Jeopardy contestants is an example of narrow AI doing things well beyond human abilities (winning at chess, Go, and many other games are other examples). Narrow AI is fragile; it can only do what its code is programmed for. If the host of Jeopardy suddenly announced mid-game, “to spice things up, we will allow contestants to phone a friend once per round, and if you get the Daily Double, you can bet an amount that is subtracted from one opponent’s score,” the narrow AI system would immediately fail. It cannot adapt, which is both a problem and a failsafe, and an important distinction between narrow and general AI in complex warfare. General AI does not yet exist, except in science fiction and futurist work. I am painting with VERY broad brush strokes in this commentary, so if you want to dive deeper into AI matters, check out some of my other Medium publications on the topic:

Back to the article at hand and good old terrifying AI mind-reading. But seriously, the author (Schussler) addresses what might unfold as we reach, and then pass, the threshold of actual general AI (equal or superior to human thinking in every possible way). A functional general AI system might just be a box with a screen; indeed, it probably needs to be at first. One of the issues with general AI is that once it achieves parity with human intelligence by every possible measure, things could go sideways fast. There is no upper limit to AI, at least none like those that currently limit carbon-based organic life forms. Even the smartest of us can only cognitively juggle several things at once; a basic calculator already exceeds the computations we can do in our heads.

Recent media coverage and debate over chatbots, and whether they can produce equivalent art, music, or literature, is all the rage today. Right now, this is narrow AI working in chatbot form. Were a general AI system to exceed human abilities in all things, that would include military strategy, leadership, and more. It would be a powerful, game-changing weapon or tool, but also a huge risk. This is why AI researchers suggest that future systems approaching general intelligence need to be confined to very specific spaces, essentially prisons. Otherwise, the genie could escape the bottle.

On p. 342, the author explores Nick Bostrom’s idea of an AI “Singleton”: a superintelligent entity capable of assuming all authority and decision-making for an entire society. Yes, this would be like Skynet, or the one from I, Robot, or even the Borg Queen (sort of; the Trekkies out there will correct me). What might happen to humanity in such scenarios? Be warned, this article goes deep into philosophy, ethics, and free will, and into whether AI might, in any pragmatic outlook from now through 2028, reach territory we tend to dismiss today as science fiction or fantasy.

But remember, in 1903 the New York Times published an article claiming humanity needed ten million more years to be able to fly, just nine weeks before Kitty Hawk. Sometimes we surprise ourselves with how fast we bring about world-changing innovation and technological progress. Often, too, we craft new tools before considering their deep ethical, moral, and societal impacts until well afterward. We fail to understand how emergence occurs in complex reality, and we keep expecting tomorrow to look mostly like all the yesterdays we think we understand. For more on emergence in complex security applications, see:

This mind-reading AI article (Schussler) takes contemporary topics such as human-machine teaming, AI integration, “in the loop” decision-making, and AI ethics/morals/legality and puts them into overdrive. Yet the fantastic, never the mundane or orthodox, is where innovation comes from. This is why theory and experimentation, not doctrine, bring creativity, change, risk, and innovation in warfare. Then again, perhaps a general AI that exceeds human abilities could craft innovative doctrine and break that paradox?
