
Image source: https://www.istockphoto.com/photos/killer-robot
26 January 2023

Recommended Innovation Articles (and Commentary) 4: Can Lethal Autonomous Weapons Be Just? (Noreen Herzfeld, JOMT)

by: Ben Zweibelson

Original post can be found here: https://benzweibelson.medium.com/recommended-innovation-articles-and-commentary-4-can-lethal-autonomous-weapons-be-just-98d2e3f9b1ea

Today’s article is on artificial intelligence, autonomous weapon systems, and the ethics of warfare. It is a well-organized piece that raises many concerns about the rise of AI and automated systems and the new ethical, moral, and legal dilemmas societies will have to deal with. I used it as one of my research sources for upcoming articles on AI and human-machine teaming, coming out in April in the USMC’s Journal of Advanced Military Studies. The article itself is available at the Journal of Moral Theology, with no paywall:

https://jmt.scholasticahq.com/article/34124-can-lethal-autonomous-weapons-be-just

There are a few thoughts I wanted to add to this:

1. This author, like many others writing on the rise of autonomous weapon platforms, presses hard on the “doom and gloom” argument that more AI plus low-cost autonomous systems spreading around the world will result in far more violence. There are paradoxes here that readers might consider. A more precise, faster, and non-emotional weapon system could in some ways reduce violence, including errors; and the superiority of such a system, at least locally, could outperform human aggressors and marginalize contemporary terror tactics (hiding within civilian groups, human shields, misdirection, masking). The ancient Chinese maxim that “the killing of one can instill fear and compliance in thousands” could possibly apply here. That said, there is an interesting tension in this article between human-decided violence and AI-decided violence.

2. Elon Musk and others have suggested that a super-fast AI, listening to or observing humans in “real time,” might perceive all of us in slow motion, talking but sounding like whale song; or think of the sloths working at the DMV in the movie Zootopia. Such systems would process and experience reality in a way that makes us humans on the battlefield seem to be standing still. In such a frame, how could human adversaries keep up? How might human decision-makers function in a kill chain if the chain itself executes in nanoseconds rather than minutes or hours?

3. The notion that “AI cannot experience or rationalize emotional computations” appears to be a double-edged sword. Some hold that such a limitation is an advantage, in that humans, prone to stress and emotional decision-making, are at times more dangerous than a cool-headed machine on the battlefield. The other side of that coin arises when enemies send children with grenades toward friendly forces: once the child throws the device, are they a combatant? If engaged while still holding it (they may drop it), what then? Sweden is dealing with this in Stockholm today, where Somali drug gangs cunningly use 13- to 15-year-old children to walk up and execute rivals in plain sight, then drop the weapon (picked up by an adult as they flee). The Swedish legal system cannot prosecute children for murder; if caught, they are placed into juvenile treatment until they become adults and then walk free. In a law enforcement application, how might an autonomous AI security system handle such a scenario versus a human police officer?

4. For our military operating in the space domain in particular, check out the bottom of p. 84. The author posits two directions for AI weaponization and automation. The first is to create sophisticated AI autonomous weapons that wage war in our stead: no humans on the battlefield, at least no friendlies. This becomes domain-dependent, with land being the trickiest, and inhospitable contexts like the deep sea, air, and space being ones where only machines might fight other systems. The alternative is human-machine symbiosis: blending the best of both and hoping each cancels out the worst in the other. How might USSPACECOM or the U.S. Space Force consider both directions? The first seems ideal for the space domain proper, where humans are scarce, expensive to put there, and once there mostly focused on survival and little else. The second direction makes sense for space warfare that plays out where human life can flourish (and conflict): hybrid AI-human teams operating on key terrain, with capabilities linking directly or indirectly to the space domain or some other construct.

Thanks for reading, and I hope you enjoy this article as much as I did. Follow me on Twitter for unique content on what I am currently reading (lots of book references and pages uploaded there), here at Medium for more refined work and research recommendations, and over at LinkedIn for more on the networking side of security studies and defense.

Check out more of this series here: https://benzweibelson.medium.com/recommended-innovation-articles-and-commentary-3-algorithms-at-war-the-promise-peril-and-the-78584710fe78
