
[Image: Soldier. Source: https://www.independent.co.uk/tech/killer-robots-ai-technology-drones-cars-b1073016.html]
4 March 2023

Recommended Innovation Articles (and Commentary) 15: ‘Can Lethal Autonomous Weapons Be Just?’ by Noreen Herzfeld

by Ben Zweibelson

Original post can be found here: https://benzweibelson.medium.com/recommended-innovation-articles-and-commentary-15-can-lethal-autonomous-weapons-be-just-5c340c1ea9ed

Today’s article is an ethical take on autonomous weapon systems. Currently, AI capability is specialized (focused on specific tasks, often exceeding human abilities in those same tasks) rather than general (how humans function, and where AI still lags far behind in cognition… which is why bots easily fail the picture-selection tests you sometimes take online to prove you are human). That said, with the recent focus on chatbots and the public’s surprise (and horror) at some AI-generated responses to questions, there is today even wider uncertainty, and often misunderstanding, about where AI stands with respect to general human intelligence [general AI equals or exceeds the best human mind in all possible ways, whereas specific AI may equal or exceed humans in narrow areas or tasks - chatbots and chess-winning AI are narrow, not general], and, for warfighters, about how AI will increasingly impact human-machine teaming and other efforts on the battlefield that were previously exclusively human.

This article is found in an unexpected journal location- at least, one might think that most AI discourse would occur in scientific, mathematical, or computer programming journals. This one hails from the Journal of Moral Theology, Vol. 11, Special Issue 1 (2022), pp. 70–86. If you think about it, general AI does not yet exist- we may still be decades away, or perhaps a century (or maybe less, whispers Skynet?). The ethics, moral questions, and significant legal concerns are all just over the horizon for society, in that engineers and designers are working on systems today that may unfold in coming years into areas that require entirely new ethics, belief systems, and legal frameworks. More on that in a few paragraphs. First, where is this article, and how soon might you begin reading it?

A link to the article is below, with NO PAYWALL, and you should be able to download it now. Or wait and read the rest of my commentary. : )

https://jmt.scholasticahq.com/article/34124-can-lethal-autonomous-weapons-be-just

This is a great, short article that will hopefully stimulate some deep questions for militaries, our defense community, and policy makers. First, we need to break down what an Autonomous Weapon System (AWS) is. It operates without human control: it may carry pre-planned decisions formatted by the software and operator prior to launch, or it may host an elaborate live decision-making tree that learns and adapts, but either way it generates decisions internally without external human control. An AWS might be in orbit, crawling along a desert in conditions inhospitable to most living beings, or deep in the ocean at depths difficult to reach otherwise. It also might be a high-altitude balloon, since those are in the news cycle currently. An AWS should, by definition (there are all sorts of nuances), do activity X without asking operators for permission; or it may be permitted to do activity X, but cannot do activity Y without first reaching back to operators. We might see this with an AWS doing non-kinetic reconnaissance and making course adjustments autonomously, but when it spots a potential enemy target, it must contact human controllers to gain permission to employ the lethal capabilities onboard.
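To make that distinction concrete, here is a minimal, hypothetical sketch of the kind of authorization gate just described: the system may adjust its own course autonomously, but any lethal engagement requires an explicit human release. All names, activities, and policy entries are invented for illustration; nothing here reflects an actual fielded system.

```python
from enum import Enum, auto

class Authority(Enum):
    AUTONOMOUS = auto()      # system may act on its own
    HUMAN_RELEASE = auto()   # system must wait for operator approval

# Hypothetical policy table: which activities the AWS may do without asking.
POLICY = {
    "adjust_course": Authority.AUTONOMOUS,     # activity X: no permission needed
    "engage_target": Authority.HUMAN_RELEASE,  # activity Y: reach back first
}

def request_action(activity: str, operator_approves=None) -> bool:
    """Return True if the system is allowed to perform the activity right now."""
    authority = POLICY.get(activity, Authority.HUMAN_RELEASE)  # unknown -> ask
    if authority is Authority.AUTONOMOUS:
        return True
    # Restricted (e.g., lethal) actions block until a human explicitly approves.
    return bool(operator_approves()) if callable(operator_approves) else False

# Example: a recon course change proceeds; an engagement waits on the human.
print(request_action("adjust_course"))                 # True
print(request_action("engage_target", lambda: False))  # False
```

The point of the sketch is simply that the boundary between "may act" and "must ask" is itself a design and policy decision made long before the system is in the field.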

Here begin the legal, moral, and ethical questions. Suppose an AWS has kinetic abilities, does not need to ask, and spots what it thinks is a threat. The AWS follows its programming and engages the target, destroying it. Should the target prove to be an acceptable kill, there appear to be no concerns. But what if we realize the system killed innocent civilians? Who is accountable?

Do we haul the AWS itself into court? No, as it is not a living human, nor does it have any legal rights. It might make sense to identify the unit using the system and bring that commander, and perhaps the operator of the system, in for charges. Would that work? Could those people argue that the AWS was programmed with faulty code or designed poorly, so that the error is not theirs but the fault of the system builder or the programming team that created the code? Would those people face charges? What about the policy makers or heads of military organizations that authorized the use of these systems, or funded their implementation? Might the ground crew that maintains the system be at fault? The list goes on and on.

The author examines some of the ethical, moral, and legal concerns just over the horizon. As technology gets both cheaper and more advanced, future AI systems will be accessible to terror groups, cartels, and non-state actors, just as WMD is. Terror may take on new dimensions: how do we trace an AWS back to its source if we lack the forensics or intelligence to follow the breadcrumbs to the operators? Indeed, cunning AI might be programmed to throw us off such trails to start with. There was that cringe-worthy scene with the Terminator where Arnold’s character explained how he went into hiding and became a salesman for window dressings and drapes. I know… just re-watch the first two and pretend they stopped there. : )

The author concludes with some suggestions on partial bans, international treaties, and efforts to curb AWS proliferation, treating it perhaps as nuclear weapons are treated. This is a difficult proposal to make; it could work, but would it hold in the great power competition between nation states able to pursue advanced AI for weaponization? Further, this begs the question of how some nations might opt to avoid developing AWS yet still need the protection of a nation that already has AWS able to match or deter adversaries. Otherwise, we end up right back in the familiar pattern of every arms race. Further still, AI is not like nuclear weapons in that it will be everywhere, and the dual-use applications for commercial and military purposes are already obvious and found virtually everywhere from cyberspace to geosynchronous orbit.

This article explores some of these concerns, particularly what happens if we depart too far from the “human in the loop” or “human on the loop” teaming configuration. If the AI becomes its own loop, how do we rationalize the consequences that stem from that, good and bad? Again, we are not quite there yet, as all AI in use today is narrow, designed for specific activities, and while there are indeed AWS in use, they still have robust safeguards or can only do particular activities… unless we think about how much real autonomy the Russian autonomous nuclear-armed stealth submarine actually has. Unfamiliar with it? See wiki:

https://en.wikipedia.org/wiki/Status-6_Oceanic_Multipurpose_System

So, even a narrow-AI system could wreak havoc for ethics and legality in warfare; but this is just the tip of the AI iceberg. As more sophisticated AI moves further toward general AI, it will be able to assume vastly more sophisticated battlefield tasks, including strategic planning, orchestrating combined-arms attacks with networks of drones and human-operated platforms, and engaging in cyberspace and outer space in ways that our current operational capacities could not compete with. The question of how AI might make decisions differently than humans would is wonderfully shown in the movie “I, Robot”, in the scene where Will Smith’s character recounts a terrible car accident: he is sinking in one vehicle underwater, and a girl is trapped in the other car, also sinking. The robot calculates that it can only save one of them, runs the numbers, and determines that saving Smith’s police officer character is the correct decision. These are real ethical and moral challenges awaiting us in the coming decades, whether we are ready or not. But don’t throw out your Roomba just yet.

However, as Nick Bostrom and other AI researchers and theorists posit, the day when AI systems reach human capabilities at a general level may be a few decades or under a century away, which is a wide band. Note that autonomous weapon systems already provide faster-than-human, automated responses in examples like missile defense systems on ships and bases, or city-defense systems such as the Iron Dome. The question of where the human sits in a lethal decision loop will continue to be contested as computer systems and AI research accelerate, as new weaponized technologies (hypersonics, quantum, swarming drone formations, and more) force decision-making into smaller and smaller windows, and as the ethical, moral, and legal discussions intensify- all occurring amid international social information sharing (including misinformation, disinformation, and more). Imagine future AI capable of imitating rational human behavior convincingly for most people, contacting millions at once over social media, and persuasively spreading misinformation to influence a political election, selling stocks to alter market prices, sparking false outrage against a targeted individual, or even swindling money from unaware relatives by perfectly recreating a family member who does not know they were hacked. While all of these terrible things can occur today at small scale, they are done by people against people. Humans cannot scale to what AI can do; we simply will not be able to compete, and we might even be unable to distinguish. Deep fakes today are the ‘Pong’ or ‘Pac-Man’ of where advanced systems will go… we are just getting started. The future technology is astounding, amazing, and also possibly terrifying.

When and if that day comes- how should we consider the warfare ethics of an autonomous weapon system, particularly one that may use significant weapons or abilities? What might adversaries or rogue actors attempt to do that is ethically beyond what our nation declares we will or will not do with autonomous weapons? Many of these considerations involve space defense and security, and the ethical dilemmas of autonomous weaponized space systems in orbit, with differing abilities and permissions, put into operation by our own forces, adversaries, and competitors as well as industry, make for a wickedly complex topic. Should the human always remain “in the loop” or “on the loop”- where a person supervises or is the final decider on lethal actions? Or can we extend into having the human “behind the loop” or perhaps “chasing the loop”, where the autonomous system moves faster and faster… with slow human cognition appearing at a near standstill, sounding, as Elon Musk suggests, “like whale song” when articulating decisions by speaking or typing?
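One rough way to picture the “in the loop / on the loop / behind the loop” spectrum is to compare how much time each configuration leaves for the human. The sketch below is purely illustrative: the latency numbers and mode names are assumptions invented for this example, not measurements of any real system.

```python
# Illustrative only: assumed decision latencies (seconds) under different oversight modes.
MACHINE_DECISION_S = 0.05      # assumed time for the system to select an action
HUMAN_APPROVAL_S = 8.0         # assumed time for an operator to review and approve
HUMAN_VETO_WINDOW_S = 2.0      # assumed window in which a supervising human may abort

def engagement_latency(mode: str) -> float:
    """Total assumed time from detection to action under a given oversight mode."""
    if mode == "in_the_loop":      # human must approve every lethal action
        return MACHINE_DECISION_S + HUMAN_APPROVAL_S
    if mode == "on_the_loop":      # human supervises and may veto within a window
        return MACHINE_DECISION_S + HUMAN_VETO_WINDOW_S
    if mode == "behind_the_loop":  # system acts first; humans review afterwards
        return MACHINE_DECISION_S
    raise ValueError(f"unknown mode: {mode}")

for mode in ("in_the_loop", "on_the_loop", "behind_the_loop"):
    print(f"{mode}: {engagement_latency(mode):.2f} s")
```

Even with made-up numbers, the gap is the point: as weapon and counter-weapon speeds compress the decision window below human reaction time, keeping a person “in” or “on” the loop becomes a deliberate trade of speed for accountability.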

Check out the article, and if you like reading commentary and following my suggestions for articles and research on design, security affairs, strategy, defense, and more, subscribe to this Medium account and get on my email notification list too. Check out my Twitter feed for what I am reading or researching right now, and also connect with me on LinkedIn for this and more.

Miss the last one in this series? Here it is:

https://benzweibelson.medium.com/recommended-innovation-articles-and-commentary-14-from-war-2-0-e234ae69e849
