An Inquiry into Why Militaries are Terrible at Innovation: “Beating a Dead Horse: How Institutions Perpetuate Concepts into Irrelevance” (Part 3 of 5)
by Ben Zweibelson. Original post can be found here: https://benzweibelson.medium.com/an-inquiry-into-why-militaries-are-terrible-at-innovation-beating-a-dead-horse-how-institutions-227dbbbd698c
Modern warfare, largely waged over the last three centuries within a Westphalian frame of nation-state versus nation-state (or aligned non-state actors), draws extensively from a scientific framing in which the intuitiveness of ancient heroic warriors is tempered with inductive logic, empirical testing of hypotheses, and an incremental, ordered reality. Imagination remains a key aspect of modern warfare, but it is often relegated to a secondary or spontaneous, magical quality reserved for the “geniuses” that Clausewitz argued could ignore the orthodoxy of warfare that all others must follow. Clausewitz, drawing from German Romanticism and its nineteenth-century cult of genius,[1] sought to restore some of the metaphysical into what the preceding few centuries had rendered mechanistic and nearly completely Newtonian. To this day, Clausewitz’s maxims on “genius in war” remain largely accepted in modern military theory and education. Genius simply emerges and might break all rules and change the course of history, and institutional attempts to imitate, mass-produce, or reapply the traditionally successful processes of past geniuses toward future wars are entirely wrongheaded.[2] The genius leader innovates, and then the rest of the profession studies the results and adapts to the novelty. While this is the dominant position across the U.S. Department of Defense and, writ large, most of the Western, industrialized world, it also marginalizes (or mystifies) how innovation actually occurs.
Innovation in security affairs, just as in any other human activity, is an emergent, non-linear, and most often institutionally critical act of creative destruction. Pragmatic reasoning and orderly extensions of current practices into new, improved ones occur only in stable systems where tomorrow unfolds much as yesterday did. Such stretches of stability clearly exist in complex reality, often as reliable patterns of conformity, order, and predictive behavior that do include conflict and organized violence between different groups of people. If pragmatic reasoning and problem-solving mindsets that rely upon historically and culturally sanctioned tools are the preservation mechanisms of institutions, organizations, and societies, they operate in paradox to the innovators, mavericks, and divergent actors that thrive in the margins of institutional power and reach.[3] Pragmatic strategy that correlates warfare into “theory for action” projects the modern scientific interpretation of far earlier ancient Greek natural philosophy (the heroic individual acting to achieve a preconceived future goal) and operates only within the institutionalized limits of knowledge, historical experience, shared values, and the overarching belief system that governs the social paradigm in operation. Rapoport explains this mindset:
The pragmatic orientation is the problem-solving orientation. When the existing state of affairs does not correspond to a desired state of affairs, the problem-solving attitude turns attention to the concrete nature of the discrepancy. To formulate a problem means to spell out in concrete terms- that is, in terms that can be related to concrete observations- just what it is about the existing state of affairs that distinguishes it from the desired state. One then scans one’s memory or experience or store of knowledge to see whether means are available to remove undesirable features or to add desirable ones to what is given. If these deletions or additions can be accomplished by manipulating matter, one resorts to technological solutions. If changes are required in relations among people, solutions can be sought in the realm of politics. One takes account of where one is and at each step decides where one wants to be next, given the constraints of the situation and the means at one’s disposal. The past is examined only to see which means have worked and which have not in (presumably) similar situations. The horizon of the future is limited by how much can be expected to be accomplished starting from where one is.[4]
Such thinking does extend the historically useful into tomorrow’s emerging requirements, but often at the risk of adversarial innovation occurring in parallel to our own institutional blinders. This conceptualization of innovation is “a simplified image of the nineteenth-century ideal of the mechanics of progress,”[5] where incrementally, measurably, and in careful extensions of past understanding into new developments, the institutional frame that functioned yesterday should continue to work against tomorrow’s challenges within this reverse-engineered framework of pragmatic orientation and “known solution paired to seemingly distinguishable new problem” formulation. Rittel calls this “evolutionary innovation,” which modifies existing devices and patterns, “improves them from a certain aspect but leaves the rest unchanged.”[6] In other words, one may choose any crayon to color the picture, but the institution produces the preconfigured drawings, and organizational compliance limits creativity to new color combinations within the lines. This permits incremental innovation recognizable and compliant with all institutional norms and beliefs, while rendering any radical transformation or disruption all but impossible. Naveh, the founder of the modern military design movement, remarked that military institutions are more like religious organizations than the scientific ones they imitate, where “the holy books of military practice are to be obeyed, so that one is told what to think… one cannot question this ever.”[7] Heretics are cast out, and visionaries remain in the margins as long as they generate the alien and unfamiliar. Above all else, future artworks might be realized in myriad colorful outcomes, but all will remain firmly within the preconfigured lines.
We tend not to be able to shift outside of our institutionalized limits because we unwittingly operate our shared paradigms collectively, just as a person might search the whole house for their glasses only to realize they have been on their face the entire time. Paradigms are structured to preserve the infallibility of their authority for making sense of reality so that institutionally shared beliefs, values, and identity are maintained and protected. This means it is exceptionally hard to realize one’s own paradigm in operation, tougher still to venture beyond those paradigmatic limits, and near impossible to appreciate reality using an alternative paradigm and bring those new perspectives back into the institutional sanctum freely or easily.[8] Often, when groups operating under alternative social paradigms are in conflict, they shout past one another not because they use different words, but because they use different dictionaries for the same words, rendering entirely incommensurate comprehensions of the same reality. Thus, one can comfortably insist that all future military strategic innovation must begin at the pragmatically grounded, institutionally understood baseline that defines the current (legacy) system frame and is encoded by the operational paradigm our group espouses.
Orthodoxy and convergence to institutional norms will stamp out the aforementioned “flights of fancy and overactive imaginations that make theory useless as a guide to practice,” but such a warfighting mindset kills initiative and improvisation wherever it crosses what the institution declares as “useful practice.” Cavalry horse charges were, for thousands of years, quite useful practice, and institutions advocated for them well into the interwar period between the two World Wars. Even General George Patton would, in 1930 as a Major and the former commander of the U.S. tank school in World War I, author articles in the Cavalry Journal defending horse cavalry as superior to armored tanks.[9] As an example of the earlier observation that institutional opaqueness and paradigm incommensurability might conflate military history with heritage, Patton would argue on behalf of the proud, historic institution of Army horse cavalry units:
A third limitation is that imposed by natural and artificial features of the terrain. Obstacles that appear trifling to a well-mounted Cavalryman often put a serious handicap upon machines… In rough going, the wheeled machine has less mobility than the Cavalryman, and its weapons are almost useless because the gunners cannot take good aim. In close country where the machine has to stick to roads, its value as a fighting vehicle is materially reduced. The present Cavalry weapons, if resolutely and resourcefully used, are sufficient to neutralize wheeled vehicles on the roads.[10]
Patton would also, from the institutionally protective stance of a horse cavalry officer in 1930, argue not just for the continued relevance of horse cavalrymen on future battlefields against mechanized tanks, but for their relevance against aviation as well. Still in the interwar period, when militaries faced profound uncertainty about how the next war might unfold, Patton held to the pragmatic orientation that Rapoport explained earlier. Below, he examines the past just as Rapoport described, “only to see which means have worked and which have not in (presumably) similar situations.” Patton, seeking a rational extension of horse cavalry and of his own skills and mastery of this manner of warfare, argues that future wars will still need the horse cavalryman to provide accurate ground reconnaissance that aviation cannot produce:
The ability of the airplane to execute strategic reconnaissance irrespective of the activities of ground troops has to a degree deprived counter-reconnaissance [using horse cavalry forces] of its strategic importance; still as we have seen the airplane does not secure sufficient details even to wholly fulfil this mission. Since these missing facts must be obtained by ground troops and since they alone are capable of tactical reconnaissance, the necessity for counter-reconnaissance [using horse cavalry forces] is still important.[11]
Patton of course would go on to fame and glory as “Old Blood and Guts,” the charismatic and aggressive commander of the Seventh Army in the Mediterranean Theater and then the Third Army after the Allied invasion of Normandy in World War II. He would command continental forces where horse cavalry no longer existed, nor was there much need for horse-led saber attacks using the Model 1913 Cavalry Saber he had designed for the traditional hand-to-hand warfare that defined many centuries of earlier conflicts. Indeed, despite his early thinking on horse cavalry in the 1930s, Patton would go on to embrace mechanized armor and aviation once the German Army invaded Poland in September 1939.
A cavalry officer steeped in the heritage of his unit’s form and function, Patton would unwittingly conceptualize “how the future ought to be” not by remembering the past in order to imagine novel futures, but by imagining the past so that military identity (heritage, beliefs, purpose) would unfold as one remembers it.[12] Patton is certainly one of a multitude of prominent military leaders and strategists who tilted toward a pragmatic, somewhat institutionally self-serving, but relatively common bias of preparing for future wars by studying past ones to glean enduring or stable concepts. Indeed, as late as 1938, the last Chief of U.S. Cavalry fiercely defended the notion of independent horse-mounted cavalry troops just as Patton did, and would vigorously object to the mechanization of cavalry forces even after the German blitzkrieg in Poland and France.[13] Often, such instances only become illuminated in hindsight, further troubling military theorists seeking innovation, change, and improvisation upon earlier yet not necessarily causal forms in war.[14]
The Sinkable Billy Mitchell: Innovators Are Often Burned at the Stake
It would be unfair to critique the efforts of military strategists such as General Patton for failing to anticipate how much the next war would differ from the last without considering what happened to those innovators who could pierce through the clouds and fog. Brigadier General Billy Mitchell, also a World War I veteran and hero like Patton, would anticipate the rise of airpower and fight his institution throughout the interwar period, a fight that resulted not in his ascension in power and influence, but in his court-martial and forced retirement. Patton’s experiences in the interwar period differed strongly from Mitchell’s, and part of that difference must lie in how modern military institutions deal with thinkers who advocate incremental change that does not alter established orthodoxies versus those who threaten to blow up the entire system and redesign it anew. Patton would go on to command at the highest levels in the next war, while Mitchell, in forced retirement, would continue to write about air power until he died of a heart attack in 1936. The Army would later name the B-25 medium bomber after Mitchell in 1941, acknowledging the intellectual debt the military institution owed the man and his ability to see much further than his peers. Mitchell, despite being a visionary, would be frustrated in fighting the institution to move quickly, often failing to convince the bureaucracy of the needed transformation at hand. Like Patton, Mitchell would make outrageous public comments that would get him into trouble. Unlike Patton, Mitchell sought not to preserve the legacy system for tomorrow’s war, but to usher in a radically different future force detached from historic norms and established practices.
Traveling home on the Cunard liner Aquitania after World War I, fresh from an audience with King George V and from being awarded the Croix de Guerre by France, Mitchell told passengers and the media that “the [American Army] General Staff knows as much about the air as a hog does about skating.”[15] Mitchell, despite working under a Director of the Air Service who was a career artillery officer and had never once even flown in an airplane, would attempt to drag the American Army and Navy into the future by demonstrating how air power would, in fantastic and almost unimaginable ways, destroy the existing military order. First, he targeted the Navy’s most important weapon system and symbol of modern naval power: the battleship.[16] The battleship was both historically significant and a centuries-old naval symbol of service heritage, one that generated powerful institutional beliefs and assumptions about maritime warfare past and about how emergent and future war ought to unfold. Mitchell would disrupt the naval institution by introducing emergent theory that had not yet demonstrated any conclusive “proof,” evidence that would demand significant change and impose uncertainty on future paths.
Mitchell argued in the interwar period that battleships were too slow, too expensive, and unable to keep pace with the increased precision and lethality of airplanes. As the Army’s responsibility was bounded by the edge of continents, coastal defense as a service mission traditionally fell to the Navy; Mitchell argued the Air Service should take over that mission and secure the budget for it too.[17] He wanted to demonstrate not only that airplanes could be flung off flat-deck ships (in time to be named aircraft carriers) to rapidly reach and strike any adversarial naval threat, but that a battleship itself could be destroyed by aircraft dropping lethal ordnance, making the massive and expensive weapons platform extinct. Admiral William Benson, Chief of Naval Operations in 1919, fought Mitchell’s experimental attempts while also secretly trying to purge the Navy of its entire aviation division. He famously stated that he could not “conceive of any use the fleet will ever have for aviation,” and that “the Navy doesn’t need airplanes. Aviation is just a lot of noise.” Mitchell would announce a demonstration of how a modern airplane could sink a battleship, which would further enrage some of the Navy leadership.
In 1920, as Mitchell was preparing and announcing to the media that a demonstration of airpower supremacy over battleships would be made, the Navy arranged its own test. The test appeared designed so that the institution could pragmatically defend existing beliefs about how a future war should extend established strategies and historically “proven” truths about maritime warfare. Returning to Rapoport, “if changes are required in relations among people, solutions can be sought in the realm of politics.” The Navy would allegedly prepare the test to fail, using dummy bombs filled with sand to mask the effectiveness of air power, and despite the ship being sunk by multiple airplane hits, the Navy would not promote the results.[18] Rapoport’s statement that “one takes account of where one is and at each step decides where one wants to be next, given the constraints of the situation and the means at one’s disposal” applies to how the Navy (at least in this specific and prominent area) did not want to consider the fantastic disruption that a future war might bring, if it meant disrupting so many contemporary and cherished symbols, routines, doctrines, and honed skills. Like Patton and the horse cavalry, they wanted the next war to retain saber-waving charges despite the emerging reality that such things would be suicidal.
Mitchell would prepare a joint test between the Army and Navy, with Secretary of the Navy Josephus Daniels declaring in response, “I would be glad to stand bareheaded on the deck or at the wheel of any battleship while Mitchell tried to take a crack at me from the air. If he ever tries to aim bombs on the decks of naval vessels, he will be blown to atoms long before he gets close enough to drop salt on the tail of the Navy.”[19] The Navy set the rules for the test and attempted to make them as difficult as possible to prevent Mitchell from threatening the enduring supremacy of the battleship. No aerial torpedoes were allowed, and the planes had to sink a ship with only two hits. Mitchell had special 2,000-pound bombs made for the test, as the largest bombs in the aviation inventory at the time were insufficient to sink a battleship.[20] A week before the test, Mitchell’s own director would, under pressure from the Navy, try to fire him. Despite all of this, Mitchell would endure and conduct the test, drawing significant military and media attention to the contentious exercise.
The Navy prepared the German battleship Ostfriesland, engineered to be nearly unsinkable and having survived 18 hits from British battleships in World War I, as the main target. Naval officers knowledgeable about the ship’s design told The New York Times that the fliers “will never sink the Ostfriesland at all.”[21] While the Navy rejoiced through the first two days of experiments using smaller bombs that did not sink the ship, on the third and final day of the exercise Mitchell ordered the 2,000-pound bombs to be used in near-miss bombings that would “form a water hammer” and disrupt multiple sealed compartments on the ship. Six bombs broadsided the ship, and after 22 minutes the battleship sank, shocking the naval officers witnessing the demonstration. Still, the Navy institution would fight back in the early 1920s postwar period, continuing to declare that aviation might supplement naval power, but never replace it:
“The battleship is still the backbone of the fleet and the bulwark of the nation’s sea defense, and will remain so long as the safe navigation of the sea for purposes of trade or transportation is vital to success in war,” said the Joint Army and Navy Board report on the bombing tests, made public August 19th and reported in The New York Times. “The airplane, like the submarine, destroyer, and mine has added to the dangers to which battleships are exposed, but has not made the battleship obsolete.”[22]
The battleship would not be obsolete, yet. The Navy would use battleships extensively in World War II and beyond, with their final combat application coming as naval fire support by American forces during the First Gulf War in 1991. Yet the symbolic supremacy of the battleship ended with the Mitchell demonstration, as the aircraft carrier, along with air power on a scale unlike previous warfare, would prove decisive in naval combat in World War II. Once more, military institutions would attempt to extend obsolete or outdated beliefs into future wars using the same pragmatic orientation where known solutions are paired with emerging problems in an effort to retain symbols, artifacts, and patterns of behavior that often are nested in deep, institutionalized belief systems, doctrine, military practices, and organizational identity. Cold War theorist Carl Builder would critique modern military services as institutionally grounded in their own ritualized narratives stemming from a “golden era” World War II orientation that would extend many decades later.[23] Indeed, RAND would revisit Builder’s original premise in 2019, maintaining most of his original findings that military services seek to win the next war entirely via institutionally self-relevant beliefs and values established in previous conflicts.[24]
As fantastic as the aircraft carrier was for interwar-period military services and strategic theorists focused on how to prepare and equip for the next war, today’s carrier group faces a similar potential elimination in twenty-first-century wars. Hypersonic missiles, swarms of drones, and artificial intelligence capable of synchronizing actions faster than any contingent of human sailors are disrupting the pragmatic, ordered war paradigm with the fantastic and unimagined.[25]
Militaries need to reconceptualize how innovation occurs and implement an organizational awareness of this within their cultures and patterns of behavior. For this to happen, the military institution must do more than adjust decision-making methodologies and doctrinal terminology or reexamine preferred military theories and philosophies. The entire war paradigm must be clearly and excruciatingly examined so that the legacy mode of strategic thinking and the organizational management of organized violence are clarified without uncertainty. The earlier dominant position articulated by Sheldon and Gray must be inverted. Flights of fancy and overactive imaginations generate not useless theory as a guide to legacy practices, but useful and novel theory as a guide to necessary future opportunities otherwise outside the imposed institutional limits of the existing war paradigm. To explore beyond our paradigmatic limits, we first must admit that all social paradigms are incomplete with respect to complex reality, and that no matter how insistent our own paradigm might appear, any progress toward change and innovation lies not within those limits, but beyond them.[26]
For spacepower, cyberpower, all-domain warfare, irregular warfare, and emerging areas such as quantum, artificial intelligence, and human-machine teaming, the intellectual guardians might declare science fiction and overactive imaginations off-limits, but they are the guardians of yesterday’s institutional relevance and identity. The innovators, provocateurs, heretics, and improvisers who can imagine and anticipate what most guardians are ignorant of can usher in necessary change ahead of rivals and competitors. Such a military ‘pagan’ can disrupt, challenge, and shatter the existing institution in ways that likely seem alien, ironic, or perplexing to those of the orthodoxy attempting to silence such radical suggestions.[27] It is not the worst excesses of innovators that militaries must fear, but the worst excesses of intellectual guardians fixated on preserving the existing, legacy war paradigm at all costs. Both groups are difficult to identify in the swirling, emergent landscapes of the present, but often those who decry the end of horse cavalry charges or insist on standing bareheaded on battleship decks will persist in our military institutions, ready to fall on their cavalry swords to protect the next artifacts, symbols, beliefs, or theories of war that they believe must remain unmolested. Indeed, such arguments are always couched in the language of pragmatic realism and the historic ritualization that provides military identity.
How might a military move away from framing warfare and change in such incremental, pragmatic, problem-solution, and otherwise Newtonian-styled modes of conceptualization? Design theorists have for much of the twentieth century (and largely outside military forces) positioned the fantastic and divergent modes of thinking as the primary cognitive space in which to begin any discussion of the future. First, modern militaries must confront their own war ontologies, in that complex reality will not permit highly efficient, static ‘problem-solution’ constructs to last very long, if at all. Novelty, improvisation, experimentation, and “creative destruction” must be integrated into any war paradigm so that transformation is welcomed, regardless of what sacred cows must be led to slaughter. Dorst explains:
The core paradox of innovation management lies in the fact that the ideal image of an organization still is that of a well-oiled machine where efficiency reigns supreme. The need to create novelty is at odds with this model, as novelty inevitably disturbs existing processes and might be accompanied by “creative destruction.” How do we find a balance between routine operation and the need for novelty and change in an organization? To answer this question, the field of innovation management has had to become a hybrid: it combines a rich mix of subjects in policy-making, strategy formulation, organizational structures, and management styles with elements of design theory… and fundamental analyses of the notion of innovation itself. Combined, these create a context for thinking about innovation within organizations.[28]
Modern militaries fixate on preconceived objectives and goals despite acknowledging that complex reality, particularly the chaotic and emergent characteristics of war, restricts such actions to temporary, localized, and proximate (short-range) opportunities. Western society, based upon ancient Greek philosophies concerning the natural world, again orders an individual’s heroic actions toward accomplishing a desired (and predicted) change in the world. Future goals and objectives are first rendered in the same pragmatic rationalization described earlier, reinforced by institutional defenders who seek to extend the past constructs cherished and endorsed by the institution into tomorrow’s uncertainty rather than innovate toward a novelty that destroys those beliefs and institutionally sanctioned ideas. Stanley and Lehman clarify this:
Though often unspoken, a common assumption is that the very act of setting an objective creates possibility. The very fact that you put your mind to it is what makes it possible. And once you create the possibility, it’s only a matter of dedication and perseverance before you succeed. This can-do philosophy reflects how deeply optimistic we are about objectives in our [western, modern] culture. All of us are taught that hard work and dedication pay off- if you have a clear objective…. Objectives might sometimes provide meaning or direction, but they also limit our freedom and become straitjackets around our desire to explore. After all, when everything we do is measured against its contribution to achieving one objective or another, it robs us of the chance for playful discovery. So objectives come with a cost. Considering that cost is rarely discussed in any detail.[29]
The Western, scientific-inspired, factory-engineered mode of moving pragmatically and incrementally through complex reality does not lead to military innovation, except for those discoveries that were already so close to being realized that normal decision-making and managerial analysis would undoubtedly have run into them eventually.[30] For consequential innovation, organizations must abandon the fixation on pragmatic thinking and adherence to routinized policy making or strategic formulation. Instead, they need to understand and appreciate how innovation requires a creative leap. Rittel explains that creative leaps are “something indescribable and beyond reason, but nevertheless indispensable and important.”[31] Such ideation and exploration occur in the fantastic, where fantasy and imagination are exceptional, disruptive, and likely unsettling to institutional defenders. Creativity is ill-defined and seems unscientific in that militaries position it as “military artistry” rather than any scientific logic or foundational principle for warfare. Creativity “is an island of mystery on a sea of irrationality, even for devoted navigators on the waters.”[32] Innovation begins not in the pragmatically framed and historically measured, but in the fantastically vivid and wildly unknown.
Figure 1 (below) provides one model for how innovation occurs within organizations, including military ones, and is applied specifically to war in this treatment. However, the flow of ideas, experimentation, implementation, and reflective practice is how almost any creative act might unfold in reality, whether one is fighting an adversary, creating a new song, or devising a new restaurant kitchen configuration. Figure 1 is a model, and all models are flawed abstractions and simplifications of what is a complex reality impossible to capture in one or all possible models. Models are, according to Daft and Weick, “a somewhat arbitrary interpretation imposed on organized activity. Any model involves trade-offs and unavoidable weaknesses.”[33] They can only offer a sliver of complex reality, and whether that glimpse is useful or not remains an ever-changing target of opportunity and risk for humans seeking stability, control, and prediction in a world where such advantages are rarely clear or realized.[34] Models are also only part of what constitutes a paradigm or socially constructed frame for how humans make sense of reality.
We use models to connect select theories we conceptualize about how the world functions, and we consider how those theoretical processes work (or fail to work) when represented within a model and enacted through methods upon the external world.[35] For example, the cardiovascular community for decades applied medical theories paired to a “heart is a pump” model. Yet in the 1990s, a small movement broke from this and replaced the pump model with that of a “sponge,” which would later usher in new treatment techniques (methods) that reinforced some theories while encouraging the development of alternative theories. Conversely, flawed theories can be tossed out, but their original models might remain. Physicist Niels Bohr’s theory of atomic structure would later be disproven and replaced by more advanced theories, yet his original model that atomic structures operate “like the solar system” remained. As Jaynes explains: “A model is neither true nor false; only the theory of its similarity to what it represents.”[36] The figure thus offers a theory of innovation for how humans change their world and also their understanding of reality.
Fantastical thinking begins in abstraction, beyond any prescribed individual or institutional limits. It requires social paradigm awareness so that one can question why certain things or ideas are acceptable while others are unacceptable, deemed irrational, or even impossible (Sheldon and Gray’s “flights of fancy” earlier). It also requires a curiosity capable of conceptualizing beyond institutional limits once those barriers are realized and probed with “why-centric” questions. This rises to the level of philosophy, requiring social paradigm recognition and, beyond that, the anticipation of shifting from one social paradigm to another.[37] Yet how one thinks logically throughout the innovative cycle requires further explanation.
Fantastical thinking requires abductive logic rather than the more familiar inductive or deductive reasoning. Deductive reasoning starts with generalized statements or rules and correlates them to specific contexts, such as how Sherlock Holmes might deduce a criminal is left-handed from the shoe prints left at the scene (left-handed people are the general; this unknown criminal is the particular). Inductive reasoning builds generalized conclusions from specific contexts and observations that feed a particular hypothesis. If a person watches the weather report and a large storm is approaching, they may choose to drive home earlier to avoid traffic delays (the individual drive is the particular; the historic pattern of stormy weather causing traffic delays is the generalized conclusion). Inductive and deductive reasoning operate within a premeditated configuration of hypotheses and proposed relationships, whether general to specific or specific to general. Abductive reasoning differs.
Abductive reasoning operates by forming and evaluating different hypotheses within a complex, dynamic reality where information is incomplete, puzzling (it does not comply with inductive or deductive logics), or transforming within an emergent system as one engages with it.[38] Inductive and deductive reasoning seek to apply general rules, while abductive reasoning works abstractly to find relationships that are systemic or systematic and often form novel configurations not previously considered or known. Analytic reasoning pairs known solutions to new problems that seem from the outset to suggest some correlation, while abductive thinking transforms a system, dissolving the current problem while introducing entirely emergent ones in that act of innovation with systemic consequences.[39] Abductive reasoning involves improvisation, which is oppositional to any premeditated or predetermined logical arrangement. Improvisation “deals with the unforeseen, it works without a prior stipulation, it works with the unexpected.”[40] Strategic formation rendered in deductive or inductive logics largely subscribes to some version of an ‘ends-ways-means’ ordering of reality where future preconceived goals (ends) rationally link to analytically optimized ways and means.[41] Strategic innovation does not function this way, except in incremental modes of building upon existing (legacy frame) constructs in support of the dominant war paradigm and institutionalized identity.
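For readers who prefer a concrete illustration, the distinction between the three reasoning modes can be sketched in a few lines of code. This toy example is not from the original essay and makes no claim about strategy itself; the observations, hypotheses, and scoring function are all invented for illustration. It simply shows deduction applying a fixed rule, induction generalizing a provisional rule from repeated cases, and abduction selecting whichever candidate explanation best accounts for incomplete, puzzling evidence.

```python
# Toy sketch (illustrative only, not from the original essay) contrasting
# deductive, inductive, and abductive reasoning.

def deduce(rule, case):
    """Deduction: apply a general rule to a particular case."""
    return rule(case)

def induce(observations):
    """Induction: from repeated particulars, form a provisional general rule.
    Here: 'storms cause traffic delays' if every observed storm had a delay."""
    delays_during_storms = [delayed for storm, delayed in observations if storm]
    return bool(delays_during_storms) and all(delays_during_storms)

def abduce(evidence, hypotheses):
    """Abduction: with incomplete, puzzling evidence, select the hypothesis
    that currently explains the most, while accepting it may be revised."""
    return max(hypotheses, key=lambda name: hypotheses[name](evidence))

if __name__ == "__main__":
    # Deduction: general rule -> particular conclusion.
    left_handed = lambda clue: clue == "left-slanted shoe prints"
    print(deduce(left_handed, "left-slanted shoe prints"))       # True

    # Induction: particular observations -> provisional general rule.
    print(induce([(True, True), (True, True), (False, False)]))  # True

    # Abduction: puzzling evidence -> best available explanation (revisable).
    evidence = {"ship sank", "no direct deck hits observed"}
    hypotheses = {
        "near-miss water hammer": lambda e: len(e & {"ship sank", "no direct deck hits observed"}),
        "direct bomb hits":       lambda e: len(e & {"ship sank"}),
    }
    print(abduce(evidence, hypotheses))  # "near-miss water hammer"
```

The point of the sketch is the shape of abduce: unlike the other two functions, it neither starts from a fixed rule nor ends with one; it only returns the best current explanation, which new evidence may overturn.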
Abduction disrupts the foundations for any planning, and thus any pre-established goals and the ‘ends-ways-means’ configuration originally devised by ancient Greek natural philosophers.[42] Abductive reasoning is most applicable to complexity, where emergence unfolds in nonlinear, novel, and unpredictable ways. Complex systems function where the only constant is change, and abductive logic must move improvisationally through novel experimentation and ideation unbounded by legacy limitations or predetermined goal formation. Chia and Rasche expand on this: “strategy making [concerning disruptive innovation] is largely improvisational; strategy slowly emerges through the internalized predispositions that actors refer to… strategy making does not simply take place in boardrooms.”[43] An incremental or evolutionary mode of strategic progress enacts the modern military paradigm, yet it neither encompasses complex reality nor considers what lies beyond these paradigmatic limits. Alternative paradigms provide vast areas for innovating strategies otherwise unreachable and unimaginable using a systematic, sequential, and narrow approach that shuns the fantastic in favor of the pragmatic.
The fantastic is the infinity of human abstraction, where every possible idea may inhabit a void that is otherwise ignored or unimagined. In ancient Chinese philosophy, this appears consistent with the functional emptiness that is the “latent background to all things- in the sense that one speaks of the background to a painting or a background of silence: that background constitutes a stock from which sound is produced and that makes that sound resonate, the stock from which a brush stroke emerges and thanks to which it can vibrate.”[44] The fantastic is where multiple social paradigms interact, overlap, or tangle into tensions and contradictions, yet also where a reflective practitioner might realize unimagined opportunities by shifting between paradigms and exploring the otherwise ignored.[45] Improvisational thinking and abductive logic chart this journey, which quickly spills out of one paradigm into others, and back and forth. Gharajedaghi provides a useful summary of the paradigm shifting that occurs in this initial exploration into the fantastic:
A shift of paradigm can happen purposefully by an active process of learning and unlearning. It is more common that it is a reaction to frustration produced by a march of events that nullify conventional wisdom. Faced with a series of contradictions that can no longer be ignored or denied and/or an increasing number of dilemmas for which prevailing mental models can no longer provide convincing explanations, most people accept that the prevailing paradigm has ceased to be valid and that it has exhausted its potential capacity.[46]
Gharajedaghi is addressing scientific paradigms in the above passage, given his point that one paradigm prevails while another holds additional potential capacity that the invalidated one lacks. Kuhn’s premise of scientific paradigms is where the “paradigm shift” was first conceptualized; social paradigm theory would develop later. Social paradigms, which encompass all possible human framings of war and warfare, can experience paradigm shifts in which a legacy or insufficient war paradigm is discarded in a cognitive shift to another, or there can be shifting between competing or incompatible war paradigms that remain operationalized by different groups within a conflict. For example, the modern Chinese war paradigm (a Sino-Marxist modern communist philosophy integrated with ancient Chinese philosophical influences) is maintained by Chinese political leadership and most of their military apparatus, while Western democracies engage within a Westphalian, Clausewitzian, Newtonian war paradigm.
Refined ideation occurs as operators move from the fantastic into iterations of improvisation and experimentation, whether conceptual or using artifacts in controlled settings to test potential outcomes.[47] These efforts require iteration, in that conducting single-event, isolated, or highly scripted scenarios will only produce performance outcomes limited to the rigid testing conditions designed by the evaluators. Returning to the U.S. Navy and Mitchell’s ambition to prove an emerging vulnerability of battleships, the naval exercise controllers sought to limit his experiment in ways they felt could marginalize or eliminate any unwanted test results. In refined ideation, a reflective practitioner already understands their dominant war paradigm and recognizes when the institution may attempt to restrict or limit refined ideation so that cherished beliefs (war epistemology, war ontology) are not confronted. It is in this overlap of what is known to exist and what is currently unfolding in a surprising, uncertain reality that the reflective practitioner seeks to sensemake and refine their ideation by testing new ideas against the system, including its institutional frameworks and beliefs.[48]
Operators move from the fantastic to the refinement of ideation iteratively, where thinking fantastically can lead to what sociologists such as Weick, Perin, and Locke term a “hunch,” which is similar to how we use words such as “concept” and “conjecture.” Weick explains: “Hunches… [are] an undifferentiated sense of something, as well as doubt shadowed by discovery. Hunches resemble poetic discourse that reconstitutes absent events.”[49] For Weick, a hunch is temporary and highlights the provisional nature of concepts and “the fact that they are substantial abridgements of perceptual reality.”[50] Hunches form in the fantastic, also termed “flux” by Weick, where the vastness of human conceptualization is infinite and ever-changing, a Shakespearean “airy nothing” that begins to materialize temporarily; as this occurs, new things are named. Yet “names that identify objects forever do not” keep pace with how flux flows and manifests.[51] The value of iterating between the fantastic (flux) and new hunches (refined ideation) is that during this unfolding, nonlinear journey, “people have a chance to redo their hunches, substitutions, and interpretations.”[52]
Counterintuitive to those accustomed to the ancient Greek logic of progressing from abstraction toward concrete, decisive action, Weick’s framing of sensemaking in complexity and Donald Schön’s complementary “reflective practice” move in opposition, connecting hunches back to flux to practice generative doubt;[53] they insist that what might appear the most concrete is actually likely to be the most abstract, and the opposite is also true.[54] One of the ways humans socially construct a rich, complex reality upon the existing, naturally complex world consists of shifting from a perceptually based, active exploration of reality to a categorically based knowing that seeks enhanced coordination and convergence.[55] In other words, as soon as we establish a hunch in the ever-shifting flux, we seek to rigidly assign that name forever so that the fixed construct might be analyzed, categorized, and mechanized into systematic logic where known inputs link to clearly defined outputs.[56] When this happens, we move from innovation, improvisation, and experimentation to detailed planning and reverse-engineered, analytically optimized frameworks upheld by the institution.
Figure 1 has two large gray arrows illustrating two paradoxical yet intertwined processes: convergence toward and divergence from the dominant paradigm. While some operators may employ a multi-paradigmatic approach to innovation (covered in the next chapter), a majority of practitioners will adhere to a single dominant or parent paradigm. Regardless of whether it involves a solitary paradigm or multiple social paradigms, the process of innovation features a tacit cognitive activity where one reflectively considers the institutional or group paradigm and then iteratively engages in part of the innovative process by converging to some of the legacy frame and core paradigmatic structures. One military example of this is found in the decades leading up to the 9/11 attacks, when the U.S. Air Force sought to decommission the slow-moving, close air support A-10 fixed-wing attack platform. Even after the 9/11 shift toward counterterrorism and counterinsurgency operations in regions where the enemy had no air forces or robust anti-aircraft capabilities, the Air Force continued to propose newer, multi-mission aircraft to replace the aging ground-support platform that was enormously popular with ground forces.[57] The Air Force feared losing air dominance, demonstrating paradigm convergence to its own service identity and deep philosophical position on complex warfare, and offered innovative options that adhered to these self-relevant positions.
The second gray arrow in Figure 1, paradigm divergence, is the opposing yet intertwined process of challenging and disrupting the institutional paradigm so that innovative activities break through barriers that otherwise remain forbidden spaces for thinking differently. Paradigm divergence requires the operator to reflect on that same legacy system and the dominant paradigm occupying the organizational mode for interpreting complex reality, so that one might improvise and ideate outside of those limits. An example of military forces innovating in paradigmatically divergent ways is found in how the Army and Navy struggled after the collapse of the Soviet Union.
The end of the Cold War ushered in identity crises for both services, in that the sudden fall of the primary military rival meant that “the Army lost its conceptual anchor. Having achieved a sort of victory without war, the Army faced mounting budget cuts and uncertainty as to whether the driving concepts of the Cold War era would survive in a world where the United States stood alone as a superpower.”[58] The Navy too would experience this, as its 1980s Maritime Strategy required a near-peer naval rival, and without the Soviet naval threat, “the concept of a global war at sea, such as that described in the Maritime Strategy, dissipated. So too did the apparent need for a high-end blue-water Navy.”[59] Both military services would spend the 1990s cycling through war paradigm divergence, while also iteratively looping back to efforts to converge with the earlier legacy frame, despite security challenges quite different from the earlier Cold War context.