Warning is one critical way to avoid strategic surprise. To some degree it is used in every field and by nearly all organizations, and there are many specialized studies of it in fields such as epidemiology, finance, and national security. Some of the ideas developed in one field can be usefully applied to the others. For example, risk analysis and Bayesian networks, developed in operations research and finance, have been imported into the warning programs of the intelligence community.
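To make the Bayesian idea concrete, the inference step at the heart of such tools can be sketched as a single application of Bayes' rule to one warning indicator. This is a minimal illustration, not any agency's actual method; the function name and all probability values below are assumptions chosen for the example.

```python
# Minimal sketch: updating belief in a threat after a warning
# indicator fires, via Bayes' rule. All numbers are illustrative
# assumptions, not real estimates.

def posterior(prior, p_signal_given_threat, p_signal_given_no_threat):
    """P(threat | signal observed)."""
    # Total probability of seeing the signal at all.
    p_signal = (p_signal_given_threat * prior
                + p_signal_given_no_threat * (1 - prior))
    # Bayes' rule: share of that probability due to a real threat.
    return p_signal_given_threat * prior / p_signal

# Assumed base rate of threat 2%; the indicator fires 70% of the
# time under a real threat and 5% of the time otherwise.
p = posterior(0.02, 0.70, 0.05)
print(round(p, 3))  # 0.222
```

Even a strong indicator raises a 2% prior only to about 22% here, which is one reason warning systems chain many such indicators together in a network rather than relying on any single one.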
But there is a more basic prior question that has received little attention: how does someone actually build a warning system? I mean this in the sense of how warning fits together with other important factors, such as ways of dealing with risk that do not rely on warning at all, and with overall strategy. This question is becoming more pressing. Disasters such as September 11, 2001, the Asian tsunami, and African famines certainly involve elements of warning, but they involve a great deal more as well. Getting good warning is only the beginning of a process with many other political and socio-bureaucratic elements. Ignoring this larger setting almost guarantees that warning will not perform well, for the simple reason that no one will pay attention to it.
A related issue is that hundreds of billions of dollars are spent on warning technology – IT, satellites, software, and sensors. This technology has transformed the structure and behavior of already complex organizations.