To paraphrase Rodney Dangerfield, military theorist John Boyd can’t get no respect. The latest attack on the Loopy One, published in Armed Forces Journal, again mischaracterizes the OODA (Observe-Orient-Decide-Act) Loop:
This notion that there are specific knowable causes that are linked to corresponding effects dominates military thinking and manifests in our drive to gather as much information as possible before acting. This concept was captured by Air Force Col. John Boyd’s decision loop: observe, orient, decide and act. In this OODA Loop, an endless cycle in which each action restarts the observe phase, it is implied that collecting information would allow you to decide independent of acting. Also implied is the notion that you can determine measures of effectiveness against which to observe each action’s movement toward achievement of your goal so you can reorient. The result of this type of thinking is to spend a lot of time narrowing the focus of what we choose to observe in order to better orient and decide. This drives one to try and reduce the noise associated with understanding the problem. We do this by establishing priority information requests or other methods of focused questions aimed at better understanding the core problem so we can control it.
This, as I will detail, significantly mischaracterizes the OODA. But why is an esoteric debate over an obscure military theorist relevant to the CIO, CISO, CTO, or network defender? Boyd’s theories–or at least a simplified version of them–have had a significant influence within the cyber community:
For cyber, Col. BJ Schwedo, commander of the 67th Network Warfare Wing, says that just as sometimes in a dogfight a fighter pilot may disengage in order to “rapidly roll back into the game” and re-engage, the future network must allow for this kind of agility in operations from its cyber warriors. If the network is too rigid, officials will “be in a corner with a very secure abacus and not be able to do our mission,” he says. Furthermore the OODA loop in cyber happens in seconds and minutes, underscoring the need for officials to be able to react to situations instantaneously.
Given the OODA’s prominence, it is important to clear up what it is, and what it is not. As I mentioned in my talk at the Boyd and Beyond Conference last weekend, metaphors and concepts are only useful if they are correctly understood and have explanatory power.
The OODA is neither linear nor prescriptive. One doesn’t, as the article implies, sit down and think “What does my OODA say about ordering lunch today?” It’s simply a model of decisionmaking that more or less occurs automatically. As the “full OODA” shows, it’s significantly more complex than simply a tactical model built on speed:
As seen in this diagram, the OODA Loop isn’t just one loop but a system of loops spinning simultaneously, governed by feedback and control. Each part of the Loop creates feedback that changes or influences another part. The most important part is the Orientation phase, since it exercises implicit guidance and control over the Observation and Action phases. Our Orientation–i.e. the way we approach the problem–is thus crucial. It’s also the least controllable, as it depends on our ability to “destroy” our previous idea of the world as we understand it and rebuild it by taking disparate pieces of new information about our environment and forming new concepts, something Boyd called “building snowmobiles.” The key to the OODA is not speed but learning, which enables us to make progressively better decisions faster and easier than our adversaries.
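The difference between the linear reading and this feedback reading can be sketched in code. This is a minimal, illustrative model only: the class names, the placeholder decision policy, and the `act` callback are my own assumptions, not anything Boyd specified. The point it shows is that Orientation both filters what gets Observed and is itself revised by the outcome of each Action.

```python
class Orientation:
    """Current mental model; exercises implicit guidance and control
    over what gets Observed and absorbs feedback from each Action."""

    def __init__(self):
        self.model = {}       # current picture of the environment
        self.feedback = []    # outcomes of prior actions (learning)

    def relevant(self, key):
        # Implicit guidance: until we have learned otherwise, everything
        # is potentially relevant; a richer model would prune here.
        return True

    def revise(self, observation):
        # "Destroy and create": fold disparate new information into a
        # revised model (Boyd's "building snowmobiles").
        self.model.update(observation)

    def learn(self, outcome):
        # Feedback from Action loops back into Orientation; the point
        # of the Loop is learning, not raw speed.
        self.feedback.append(outcome)


def ooda_cycle(orientation, environment, act):
    # Observe: filtered by the current Orientation, not a raw data dump.
    observed = {k: v for k, v in environment.items()
                if orientation.relevant(k)}
    # Orient: revise the model in light of the new observations.
    orientation.revise(observed)
    # Decide: placeholder policy; a real one would live in the model.
    decision = min(orientation.model) if orientation.model else None
    # Act, then feed the result back into Orientation.
    outcome = act(decision)
    orientation.learn(outcome)
    return decision, outcome
```

Note that there is no fixed O-O-D-A sequence hard-wired here beyond a single pass: the feedback stored in `Orientation` is what changes the next cycle, which is the whole argument of the preceding paragraph.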
Regarding the OODA Loop as based purely on speed of information processing will only produce confusion, because it implies that the Loop is strictly linear, that Orientation is merely a matter of narrowing down the selection of the “right” information, and that the speed with which we go through these steps is all that matters. The picture below illustrates the popular (and inaccurate) view of the OODA Loop:
But as we know from personal experience, speed isn’t everything. Sometimes speed and instantaneous reaction cause us to make choices we deeply regret, or leave us at a disadvantage compared to someone who makes slower but better decisions. Hence the importance of having a sharp Orientation that can learn from the results of our actions and progressively make better decisions.
Taking the OODA Loop literally can be misleading for cyber defenders because a “faster, faster” model implies that speed of response is the most crucial aspect of network defense. There’s a lot more to it than that. Alex Olesker’s post on Dronegate, for example, is really about a failure of Orientation and an overemphasis on speed of action. The Air Force claimed that it detected the virus instantly and isolated it, in effect placing a premium on the ability of automated processes to instantly observe, orient, decide, and act to contain a virus. This is a response rooted in the “faster, faster” OODA Loop interpretation.
However, as Alex points out, this is hardly reassuring in light of the fact that a piece of plain vanilla malware was apparently able to penetrate so deeply into mission critical systems. Shifting focus to remediation and forensics, as well as sharing information about the malware, would enable defenders to better determine the source of the malware and build greater resilience into their systems. This shapes strategic learning within the organization, enabling, on the next cycle, a better ability to Observe, a sounder Orientation, and a corresponding ability to make sounder decisions under fire. This response, however, is not something that can be automated–it depends on organizational policy decisions made by CIOs as well as the soundness of network defense systems and processes.
The OODA Loop may be rather tricky to understand, but it’s also as mundane as morning coffee. Instead of thinking about the OODA when you go about your work, you’re better off simply remembering Boyd’s mantra of “people, ideas, and machines–in that order!”
For more on this topic, see our reporting on National Security and Technology, and at:
- MYCIN, Watson, and AI History - August 28, 2014
- Computers and History: Beyond Science Fiction - August 26, 2014
- Encyclopedia Dramatica And The Case Of The Satoshi Paradox - March 17, 2014