On ChatBots, Machine Learning, the Future of War, and Dogs Versus Ants
by Ben Zweibelson
Original post can be found here: https://benzweibelson.medium.com/on-chatbots-machine-learning-the-future-of-war-and-dogs-versus-ants-f4d871dbff8d
I have two articles coming out in April-May over at the U.S. Marine Corps’ Journal of Advanced Military Studies (JAMS) on AI, human-machine teaming, and how these developments may entirely transform what we understand war to be, aside from disrupting most current conventions on how warfare unfolds in complex conflicts. I will not go into the details here that those extensive articles address, and if you subscribe to follow me on Medium and get email updates, you will get first notification when those two articles hit the press. JAMS publishes a digital and a print version, and best of all, JAMS has no PAYWALL.
Those two articles are titled “The Singleton Paradox: On the Future of Human-Machine Teaming and Potential Disruption of War Itself” and “Whale Songs of Wars Not Yet Waged: The Demise of Natural Born Killers on Future Battlefields”. These articles explain key differences between narrow and general AI, and how potential AI “arms races” may occur that exceed the dangers associated with the nuclear arms race. Further, we fragile, slow humans may be removed entirely from future battlefields faster than our military organizations are willing to admit. Remember, the US Army argued as late as 1938 that horse cavalry forces must remain central to the next war (World War II), and in 1930 a young Major Patton (“Old Blood and Guts”) penned several articles in the Cavalry Journal insisting that a horse-mounted cavalry soldier could defeat a tank, and that mounted cavalry was superior to airplanes in reconnaissance. Germany would invade Poland with tanks and planes in 1939 (and horses too, but the vast majority for logistics, not combat). We perpetually fall into patterns of insisting that the next wars will still extend our military beliefs and identities, so that any change tolerated is the incremental, supportive type that keeps our ritualizations relevant. We are entering a serious transformative period today with AI.
ChatBot and its various improvements and competitors are all the rage today. Some of you likely clicked this Medium post because of that word in the title. Fans of South Park this week are overjoyed with how the writers blended Chatbot content and even had it write part of the episode (with dramatic effect on social commentary). Companies are freaking out and banning the system, fearing the loss of trade secrets, legal suits, catastrophic loss of control, plagiarism, and even “do we need to pay AI for the work it does for us?” Worse still, many users of this AI lack appreciation for what it is, and what it is not. Chatbots are not sentient- they do not know anything but how to follow the code, despite the paradoxical fact that the programmers use open machine learning and do not necessarily know how the Chatbot learns, grows, and changes. That said- these Chatbots are narrow AI. As sophisticated as they might seem, and as recklessly as humans may abuse them- forming personal (illusory and self-dependent) relationships, misusing the AI as a cheap therapist, or even isolating themselves socially by avoiding human contact and adapting to exclusive AI interactions… we will be creating our own simulacra (see the Matrix movies, or read Baudrillard’s book that they were based upon).
General AI is different. It does not yet exist. It could, and potentially we might be closer than expected due to narrow AI developments (one does NOT flow into the other), quantum computing, and more. Narrow AI does fantastic things beyond a human in specific (hence, narrow!) ways- and it has existed since the beginning of computing technology. That calculator you might have used in grade school or on your wrist in the 1980s? Narrow AI- in the broadest and weakest sense. A machine that can artificially outperform a human at a specific task is narrow AI; IBM’s Watson can beat the best human contestants on Jeopardy!, but if the host changed the game rules halfway through (daily doubles now require a coin flip… and you get to phone a friend once per round), the AI could not adapt until programmers paused everything to change the programming. General AI will perform as well as (and likely soon after, better than) a human in every possible dimension. General AI will crush us not just at games, but at poetry, art, science, philosophy, music… anything we can do, General AI will compete and likely outperform us. Again- it does not yet exist, and we cannot confuse Chatbot or similar narrow AI with what General AI will become. It is like apples and hand grenades- they both are round.
AI programmers and scientists are worried about how to contain General AI because the moment it can “think” beyond our own human abilities (suppose it improves itself and boosts its IQ to something like 500, or 1,000), we cannot comprehend what exactly it is doing, nor might we understand whether it is being honest or deceptive (both insufficient human terms here). We leave the boundaries of the known, because just as with alien life, we only know what we know based on our human existence. Star Wars and Star Trek feature most aliens as humans covered in masks and fur who still sit at the table and eat and drink and talk… like humans. Most of the time we lack the imagination or cognitive flexibility to contemplate that which is unlike anything our species has previously encountered. AI, in a general sense, will be just that. Now, here is the problem (or, one of many problems)-
How might general AI with astounding intelligence beyond ours exist and interact with our species? Keep in mind that it is likely inappropriate to scale this, where the average human has an IQ of 100, the smartest humans ever alive were perhaps at 200-240, and a General AI might bump itself to 500, 1,000, or 10,000. Scaling fails here. Or, to quote an AI expert, “if you think that humans are running, and AI runs faster, you are doing it wrong. Think of AI as a car. They are no longer ‘running.’ They are doing something different, but it is very much faster than we humans can do that thing.” Another manner of contemplating this is with dogs and ants.
Most all of us love dogs. Anyone that does not is, in my biased opinion, a heartless monster. Or a cat lover, which is still the same thing. But for those of you that have dogs or know about dogs (in a western sense at least), we keep them as pets, or we use them for work in specific ways that play to the advantages of a dog that humans lack. We have dogs sniff out bombs and check for cancer, but we do not ask them to do our taxes or act as high school principals. Dogs are not as smart as humans, although they are clearly intelligent. When you leave in the morning for class or work, your dog knows you are leaving, but they do not grasp reality beyond that. They wait until you come home, and whether it is ten minutes or ten hours, they are excited that you are home again, and where you went was simply a black box to them. Humans are several orders of magnitude more intelligent than dogs, but dogs do have abilities that are vastly beyond our own, such as smell, running speed, and bite strength. In conceptualizing reality, dogs cannot contemplate it on the same cognitive plane that we humans can. Remember this- a cognitive plane.
When you walk outside your house and your poor dog is at the window, wondering when you will return, you might spot a trail of ants outside your front door. What do you do? You might make a mental note to buy some ant poison, or you might ignore them as they are good for the soil. You might be a cunning troublemaker and drag your foot across their chemical trail to watch them get lost and confused. Or, you might stomp on them because you don’t want ants entering your house. As Malory Archer would lament, “this is how we get ants.” You likely never once worry about the ants as you would your dog. We associate certain creatures with one plane of cognition, and others at some abstract hierarchical ordering below that. This gets blurry, especially when you run into people that have pet snakes and believe they have emotional relationships with reptiles (my opinion, but in this case mine is correct).
We do not really “care” about ants except in abstract ways, but we do assign different relationships to creatures we do care about (cats, dogs, snnnnnakkkkkkeeeessss) even though we also know deep down that those creatures cannot engage with us as a fellow human does. This is also where we misunderstand Chatbot, as many users attempt to give it a personal name, and fail repeatedly at engaging in human social activities with a narrow AI program that is utterly incapable of them and operating on a different cognitive plane. When a reporter freaks out because the Chatbot wants to run away with them, they assume (incorrectly) that these narratives originate from a human mind. They do not- the Chatbot is searching every romance novel online and determining abstract patterns to form a response. It does not know what it does, but what it does is astounding and often eerily close to things a human might say. Yet it is not sentient. So, when and if General AI is developed, and that entity is “self aware” in ways that likely cannot be considered in human frames, and that AI is then able to contemplate well beyond human limits, it will be on a SUPERIOR (and different) cognitive plane to us. For the first time in our species’ history, we will be less intelligent than something on this planet. So what does this mean? Back to dogs and ants- right after a quick sketch of that pattern-matching point below.
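As a concrete illustration of that pattern-matching claim, here is a minimal Python sketch- entirely my own toy example, nothing like the scale or architecture of a real chatbot- of how simply counting which words follow which can produce fluent-sounding text with no understanding behind it:

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for "every romance novel online".
corpus = (
    "i love you and i will run away with you . "
    "you love me and we will run away together . "
    "i will always love you ."
).split()

# Count which word tends to follow which- pure pattern statistics,
# with no notion of what any word means.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=12):
    """Continue `start` with statistically likely next words."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("i"))  # e.g. "i love you and we will run away together ."
```

Real systems replace these word counts with neural networks trained on billions of documents and far longer contexts, but the principle is the one described above: continuing patterns, not holding intentions.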
Suppose our General AI views the species that created it as we feel about dogs? It may go off “to work” and do things in the universe we lack the ability to even understand, and it will not really bother to explain these things to us. It may take care of us and see our existence as a companion, something to help it with things we cannot understand, but it views our existence together as symbiotic, not parasitic. This could result in us having wonderful, comfortable lives, and we would be ignorant and unable to conceptualize that we are indeed the pet. As Cypher said in the Matrix, “ignorance is bliss.”
But what about ants? We know ants are intelligent in a swarm entity sort of existence (you can read about that here!).
Ant colonies are astoundingly intelligent, but in a different way from how we think. Remember cars versus running: swarm entities are probably swimming, not running. Individual ants in a swarm collective are stupid- but a bunch of ants are smart (collectively). Hofstadter wrote a wonderful chapter about this in his amazing Gödel, Escher, Bach, where he explains the relationship between a sentient, human-styled being (an anteater) and a swarm collective (the ant colony):
In that chapter, the anteater entertains the colony but eats several individual ants. The others there are shocked: “how can you eat ants that are part of your friend?!” The anteater explains that the friendship exists at the macro-colony level where the swarm intelligence resides, not in the individual stupid ants that taste good. Now, return to you standing outside your house with ants underfoot. Regardless of whether you step on them, put them in an ant farm for your kid, spray them with poison, or ignore them, you do not treat ants like your dog inside, staring at you from the window with those sad, sad puppy eyes.
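For those who want that “stupid ants, smart colony” idea in concrete form, here is a heavily simplified Python sketch of how a colony can collectively find the shorter of two paths even though no individual ant ever compares them- every name and number here is my own toy illustration, not a model of real ant biology:

```python
import random

# Two routes from nest to food; no single ant knows which is shorter.
path_lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}  # both trails start out equal

EVAPORATION = 0.05   # fraction of scent that fades each step
ANTS_PER_STEP = 20

for _ in range(100):
    for _ in range(ANTS_PER_STEP):
        # Each ant's only rule: follow scent in proportion to its strength.
        path = random.choices(
            list(pheromone), weights=list(pheromone.values())
        )[0]
        # Ants on the short path return sooner, so its trail gets
        # reinforced more per unit of time.
        pheromone[path] += 1.0 / path_lengths[path]
    # Evaporation: the colony slowly forgets trails nobody reinforces.
    for path in pheromone:
        pheromone[path] *= 1.0 - EVAPORATION

print(pheromone)  # the "short" trail ends up dominating
```

No single ant ever weighs the options; the “decision” emerges from the feedback loop between pheromone deposits and evaporation. That colony-level loop is where the intelligence- and, in Hofstadter’s dialogue, the friendship- actually lives.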
Will General AI treat us as ants, or dogs? One future seems comfortable, if a bit disappointing, to a species that until recently assumed the heavens revolved around it. The other could be cold, tragic, and an existential crisis that is well captured in our science fiction (the Borg, Skynet, and so on).
This may be decades away, or a century, or never. It also may be closer than we think. If you are interested in more, check out an amazing book called “Superintelligence” by Nick Bostrom, which inspired me to set out on the two articles coming out soon in JAMS.