OPINION -- In 1990, the world was on the cusp of a major transformation. The bifurcated, static and hierarchical Cold War era was giving way to the entangled, dynamic and networked world of today. This emerging -- complex -- environment and the conundrums it posed were of the sort that had long been a theme of author Michael Crichton's work. And in November of that year, he published what might be his most enduring comment on the subject: Jurassic Park.
There's a key moment in Jurassic Park when the mathematician/chaotician, Ian Malcolm, refers disparagingly to the scientists who have -- too assuredly in his view -- cloned the dinosaurs: "They don't have intelligence. They have what I call 'thintelligence.' They see the immediate situation. They think narrowly and they call it 'being focused.' They don't see the surround. They don't see the consequences."
As a long-time intelligence analyst who spends a lot of time thinking and writing about the craft, I have long found that passage resonant. The way it characterizes the scientists' thinking sounds alarmingly similar to how the Intelligence Community (IC) -- and for that matter the larger national security enterprise -- tends to think. What Crichton called "thinking narrowly" and "being focused," we call "analysis." It's a wonderfully apt term for characterizing the IC's default cognitive mode, as it derives from the Greek analyein, meaning to loosen, undo, dissolve, or resolve into constituent elements. As such, "analysis" perfectly embodies the highly reductive way in which the IC habitually thinks about the world.
Rules of analytic thought
There are four fundamental analytic heuristics or "rules of thumb," which from here on I'll simply refer to as the rules of analytic thought; a short illustrative sketch follows the list. Sometimes they're referred to as the rules of linear thought because they are rooted in the four characteristics of linear systems -- additivity, identifiable cause-and-effect, repeatability, and proportionality.
One, the Rule of Additivity. This rule is often stated as the whole is equal to the sum of its parts. It tells us we can understand a system of interacting parts by looking at the parts separately -- by analyzing them -- and then just adding them together.
Two, the Rule of Identifiable Cause and Effect. This rule tells us that if we look for clear cause-and-effect chains (X led to Y and Y led to Z), we should be able to see them -- often beforehand but almost certainly in retrospect.
Three, the Rule of Repeatability. This rule tells us that the way the system has been behaving is likely to be the way it will behave into the future. This is the rule that encourages us -- erroneously -- to extrapolate existing trends indefinitely into the future.
Four, the Rule of Proportionality. This rule, rather simply, tells us that a small input will result in a small output, and a large input will result in a large output.
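To make what these rules assume concrete, here is a minimal Python sketch of my own (a toy illustration, not anything drawn from IC tradecraft; the functions linear_system and nonlinear_system are invented purely for the purpose). It verifies additivity and proportionality for a trivially linear system, and shows how even a simple nonlinear one violates both:

```python
# A toy linear system: the output is a fixed multiple of the input.
def linear_system(x):
    return 3.0 * x  # the constant is arbitrary; any fixed multiplier works

# A toy nonlinear system: the same input fed through a quadratic response.
def nonlinear_system(x):
    return x * x

a, b = 2.0, 5.0

# Rule one (additivity): the whole equals the sum of the parts...
assert linear_system(a + b) == linear_system(a) + linear_system(b)
# ...but not for the nonlinear system: (a + b)**2 != a**2 + b**2.
assert nonlinear_system(a + b) != nonlinear_system(a) + nonlinear_system(b)

# Rule four (proportionality): ten times the input, ten times the output...
assert linear_system(10 * a) == 10 * linear_system(a)
# ...while the nonlinear system amplifies disproportionately (100x, not 10x).
assert nonlinear_system(10 * a) == 100 * nonlinear_system(a)
```

The point is not the arithmetic but the license it grants: for genuinely linear systems, studying the parts in isolation really does tell you about the whole.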
These rules of analytic thought work well when applied to linear, complicated systems that are effectively discrete (bounded) and hierarchical (top-down and bottom-up). Consequently, they worked sufficiently well for understanding challenges such as the former Soviet Union or even the larger-scale Cold War, which constituted the modern IC's formative experience. More precisely, it's fair to say that these rules played a significant role in helping the IC provide US policymakers with enough of an understanding of the Soviet Union and Cold War to enable appropriately modulated -- again, linear -- behavior.
Prompting Cognitive Failures
That said, these rules are not helping -- in fact they're hindering -- how we think about today's messier global security environment. Today's more nonlinear, complex environment is neither discrete nor hierarchical. It's unbounded and networked -- so it confounds those analytical rules. Consequently, if we rely on them to think about today's global system, it's almost guaranteed that we, and the policymakers we're trying to support, will misunderstand issues, and thus be frequently and unpleasantly surprised by the broader system's behavior.
Unfortunately, the IC's track record over the past 30 years only reinforces this conclusion. Since the end of the Cold War and during the profound global transformation that has followed, at least four major cognitive failures stand out.
Terrorism. We simplistically reduced terrorism (a systemic phenomenon), via rule one, to terrorists (actors) and the Global War on Terrorism (a unidimensional military response). And while we became very good at finding and eliminating many of those individual terrorist elements, we largely failed to understand terrorism as a broader systemic phenomenon -- as Hamas's surprise attack on Israel in October 2023 reminded us.
The 2008 Financial Crisis. All four rules came into play during the 2008 financial crisis. Rule one encouraged us not to see the system as a whole and consequently we missed -- or at least underestimated -- how one sector (for example, U.S. home mortgages) could render the entire global financial system vulnerable. Rule two promoted our failure to anticipate the nonlinear dynamics that would ripple across the entire system in an unconstrained and unpredictable manner. Rule three boosted our pre-existing belief that the system would just continue to hum along indefinitely -- until, to our surprise, it didn't. And finally, rule four promoted a false sense that the collapse of one financial institution couldn't trigger a disproportionate wave of disruptive impacts that would put the entire system in peril.
The Arab Spring. Less than three years after the financial crisis, the blunders associated with the Arab Spring are most obviously rooted in rule one, whereby we failed to appreciate and effectively communicate how the region was conditionally ripe for a political firestorm. This misunderstanding was reinforced by rule four, which made it hard to conceive how a unique event, such as a single fruit-seller lighting himself on fire in Tunisia, could rapidly spiral into regionwide upheaval.
The COVID-19 Pandemic. The fourth cognitive failure involved the response to the COVID-19 pandemic, including the associated massive supply chain disruptions, which might well be seen as a fifth failure. Like the financial crisis, the COVID-19 failure was rooted in all four rules. Most damning, the four rules impeded us from thinking imaginatively -- beforehand -- about what a pandemic might look like in a hyperconnected world and what we might do to prepare and possibly mitigate its impacts. Consequently, we in the IC found ourselves -- admittedly, like just about everyone else -- in a highly reactive position at a time when that is exactly where you don't want to be.
Fundamentally, these failures to understand and anticipate the emergence and course of each of the above issues were largely functions of thinking in excessively analytical terms -- of focusing on the component pieces of the system rather than on the systemic whole. But before we discuss how to remediate this type of thinking, it's important to understand what we're talking about when we refer to emergent phenomena.
Emergent phenomena
At their essence, emergent phenomena are systemic macro-behaviors that grow organically out of complex, highly interconnected and interdependent systems; they "emerge" without top-down direction. All the phenomena described in the above failures -- the responses to terrorism, financial crises, political instability, and pandemics and supply chain disruptions -- are emergent. I use the present tense because it's important to recognize that none of them are, despite what we might wish to believe, consigned to the past; emergent phenomena are never really "solved" or "over." Rather, they evolve, morph, change, adapt, settle down, and flare again in new ways, shapes, or forms.
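To get a feel for what "emerge without top-down direction" means in the most stripped-down setting imaginable, consider the short Python sketch below. It runs Rule 30, a textbook cellular automaton chosen here purely for illustration (it models no security phenomenon). Each cell follows a trivial local rule, yet the grid as a whole generates intricate structure that none of the parts contains:

```python
# Rule 30, an elementary cellular automaton. Every cell updates from purely
# local information: itself and its two neighbors. No top-down direction
# exists anywhere in the program, yet the grid as a whole produces
# intricate, hard-to-predict structure that no individual rule "contains."

RULE = 30              # the update table, encoded as eight bits
WIDTH, STEPS = 64, 24

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single "on" cell is the only initial condition

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (4 * cells[(i - 1) % WIDTH]
                  + 2 * cells[i]
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

The mismatch of scales is the point: nothing in the eight-entry rule table plans the large-scale pattern, just as no single actor plans a financial panic or a regional uprising.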
Not only are the above issues not gone, but a scan of the national security horizon tells us that many more are in play: climate change, infosphere contamination, urbanization, mass migration, inequality, extremism, and so forth. All told, the future of national security is rife with such phenomena. We need to get attuned to them.
Unfortunately, the IC struggles mightily with this prospect. For one thing, the IC was not designed with phenomena-based challenges in mind -- it was created to deal with discrete actor-based challenges. Today, however, due to expanding interconnectivity -- both physical and virtual -- few actors are truly discrete. Rather, they are components of a much larger, complex system, and it's not possible to understand them -- never mind effectively deal with them -- as somehow distinct from that bigger system.
We see this inclination to view actors in excessively discrete -- yes, analytic -- terms most prominently and problematically in our ever more breathless discourse regarding China. Consider, for example, the growing chorus of voices insisting that we must "win and not manage" the China challenge, like we did the Soviet challenge. What this argument misses is the fact that China, unlike the Soviet Union, is fully enmeshed in today's hyper-complex global system, and thus the challenges it poses are substantially -- if counter-intuitively -- emergent. That of course means, as mentioned above, that the China challenge can't really be "solved," be "over," or for that matter be "won." Put differently, China is integral to all the aforementioned emergent challenges, and we will not be able to effectively address any of these phenomena -- or China itself for that matter -- without a good understanding of, and appreciation for, how China fits into the larger systemic picture.
Rules of synthetic thought
So, what remedial measures must the IC take if it is to address its cognitive deficiencies? Foremost, the IC needs to admit that it has a problem that requires real change. Unfortunately, the cult of evolutionary improvement (better/stronger/faster/smarter) pervades the IC. Indeed, when discussing this with IC audiences, I often allude to the person at the beginning of a 12-step program who has yet to take the first, arguably most crucial, step: acceptance.
Next, I would propose adopting four "new" rules -- let's call them rules of synthetic thought -- keyed off the behavioral characteristics of nonlinear, complex systems. (Note: I use the term synthetic here in the philosophical sense and NOT the material sense meaning artificial.) These new rules would be added to the IC's "cognitive quiver" and used when thinking about truly complex environments and the emergent phenomena they generate.
The first new rule says that, in complex systems, the whole can be more (or less) than the sum of its parts. The essence of a complex system cannot be discerned from its distinct or discrete pieces; it lies in the relationships -- the interconnections and interdependencies -- that make it a systemic whole. To more fully understand any complex system, we must see it in a "big picture" way and consequently think about it synthetically or holistically -- not analytically. Moreover, we need to remember that this cannot just be an exercise in adding -- or, in IC vernacular, "racking and stacking" -- the pieces together. Rather, we must use the subsequent three rules to help us ask questions that might allow us to imagine how the assorted pieces might holistically interact and behave.
The second new rule says that, in complex systems, cause-and-effect dynamics are often not readily identifiable, even in retrospect. What we often see is correlation, not causation. Moreover, any input in a complex system has more than one output -- there are always side, second-order, or tertiary effects.
The third new rule says that a complex system is not repeatable. Even though circumstances might resemble what came before, we need to understand they are not the same. Analogical reasoning is a bedrock practice of traditional intelligence analysis, but we need to be very discerning in the analogies we make because despite certain obvious similarities -- say between the Soviet Union and today's China -- the differences are greater and more significant. This new rule should compel us to think very carefully about the blanket application of a "new Cold War" moniker to our relationship with China, especially the danger of defaulting to the "old Cold War" understanding and playbook. Bottom line: The circumstances with China are not a repeat of the Cold War.
Lastly, the fourth rule says that in complex systems, disproportionate input-output dynamics are common. Seemingly large inputs can sometimes be absorbed and dampened by the system or, conversely, relatively small inputs can be amplified quite substantially. In the case of the former, think of a bold policy measure that seemingly ends up having little or no observable effect; in the latter, think again of how the previously mentioned fruit-seller in Tunisia sparked a regionwide conflagration or how the collapse of Lehman Brothers investment bank precipitated a global financial meltdown.
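To watch the fourth rule operate in the simplest nonlinear system available, here is a brief Python sketch using the logistic map, a standard textbook example chosen purely for illustration rather than as a model of any real security dynamic:

```python
# The logistic map, x -> r * x * (1 - x), in its chaotic regime (r = 3.9).
# Two trajectories start one part in a billion apart (a vanishingly small
# input) and within a few dozen iterations land in entirely different
# places (a system-sized output).

r = 3.9
x, y = 0.500000000, 0.500000001  # near-identical initial conditions

for step in range(1, 41):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")
```

In this loose sense, the Tunisian fruit-seller and the Lehman collapse were the policy world's analogues of that billionth-part difference.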
These new rules, if used collectively as a prism through which to look at the complex security environment, can help us generate -- or rather synthesize -- insightful questions about potential systemic behaviors and better anticipate emergent security challenges. Note that I say "questions," not answers. Complex environments are inherently unpredictable and uncertain, as Crichton explained in Jurassic Park. Thus, the IC's traditional inclination to seek and provide definitive answers should not be the goal. Rather, forming better questions that enable us to better understand and thus anticipate possibilities is the goal. Undoubtedly, the ability of the new rules to help us do just that should make their adoption and implementation by the IC a priority.
Relevance or anachronism?
The future of the IC, then, is going to require the ability to think synthetically. Indeed, better analysis -- no matter how profoundly improved -- will not markedly boost the IC's ability to understand and anticipate the unpredictable dynamics of the emerging strategic security environment. Only intelligence synthesis can provide that ability, which is where the new cognitive rules come into play.
Ultimately, the IC -- and again, the larger national security enterprise of which the IC is merely a reflection -- does not really have a choice regarding adoption and use of the new rules. To go all in on the better/stronger/faster model of cognitive change -- i.e., improved analysis -- is to effectively opt for a Jurassic fate: extinction and its bureaucratic equivalent, irrelevance. Or, to put it more vividly, even vastly improved analysis -- without a robust synthetic complement -- will not spare us an anachronistic future as nothing other than excellent dinosaurs.