Old climate models are often evaluated on whether they made correct predictions of global warming. But if those models were missing processes that we now know to be important, any correctness of their predictions would have to be attributed to a fortuitous compensation of errors, creating a paradoxical situation. Climate models are also tested for falsifiability by using them to predict the impact of short-term events like volcanic eruptions. But climate models do not exhibit the numeric convergence to a unique solution that characterizes small-scale computational fluid dynamics (CFD) models, such as those that simulate flow over a wing. Compensating errors may obscure the convergence of individual components of a climate model. Lack of convergence suggests that climate modeling is facing a reducibility barrier, or perhaps even a reducibility limit.
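To make the contrast with small-scale CFD concrete, the sketch below runs a toy one-dimensional advection solver at successively finer grids and checks that its error against the exact solution shrinks. This is the kind of convergence test that wing-flow models pass routinely; the solver, resolutions, and error norm are illustrative choices of this summary, not material from the book.

```python
# Illustrative only: a grid-convergence test for a toy 1-D advection solver,
# the kind of check that small-scale CFD models pass but climate models do not.
import numpy as np

def advect(nx, c=1.0, t_final=0.5):
    """First-order upwind solution of du/dt + c*du/dx = 0 on [0, 1), periodic."""
    dx = 1.0 / nx
    dt = 0.4 * dx / c                      # CFL-limited time step
    x = np.arange(nx) * dx
    u = np.sin(2 * np.pi * x)              # smooth initial condition
    nsteps = int(round(t_final / dt))
    dt = t_final / nsteps                  # land exactly on t_final
    for _ in range(nsteps):
        u = u - c * dt / dx * (u - np.roll(u, 1))
    exact = np.sin(2 * np.pi * (x - c * t_final))
    return np.max(np.abs(u - exact))       # max-norm error vs. exact solution

for nx in (50, 100, 200, 400):
    print(f"nx = {nx:4d}   error = {advect(nx):.4e}")
# The error shrinks roughly in proportion to the grid spacing: the numerical
# solution converges toward a unique answer as the grid is refined.
```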
Modern weather and climate prediction originated at the Institute for Advanced Study (IAS) in Princeton, New Jersey. Mathematician John von Neumann, a member of the IAS faculty, interacted with computing pioneer Alan Turing in the late 1930s and became involved in the mid-1940s in the construction of ENIAC, the first general-purpose electronic digital computer. One of his goals was to use the computer to forecast weather using the equations of physics. He formed the Princeton Meteorology Group by hiring scientists with expertise in weather. In 1950, this group made the world’s first digital weather forecast. Two basic concepts from the philosophy of science – inductivism and deductivism – are introduced in the chapter to provide the context for the scientific developments being discussed. Von Neumann’s (thwarted) ambition of going beyond weather prediction to weather control is also discussed.
MIT professor Edward Lorenz made a serendipitous discovery about weather in the early 1960s, using a digital computer. He found that a small error in the initial conditions for a weather forecast could grow exponentially into a large error within days. This came to be known as the Butterfly Effect, one of the founding principles of chaos theory. To obtain initial conditions for weather forecasts, data from hundreds of weather balloons launched daily around the world are combined with satellite data through a procedure called data assimilation. The Butterfly Effect limits the useful period of weather prediction to about two weeks in advance. Carrying out an ensemble of weather forecasts, starting from different initial conditions, allows probabilistic predictions that extend beyond this limit. The philosophical concept of determinism is introduced, as is the concept of Laplace’s Demon, motivating the notion of a Climate Demon that predicts climate perfectly.
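The exponential growth of small initial errors can be illustrated with the classic Lorenz (1963) equations. The sketch below uses standard textbook parameter values and an arbitrary initial perturbation; it is meant only to show the qualitative behavior described above, not to reproduce any calculation from the chapter.

```python
# Illustrative sketch of the Butterfly Effect using the Lorenz (1963) system;
# parameters and the initial perturbation are standard textbook choices,
# not values taken from this book.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations by one step of simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])        # the "butterfly": a tiny initial error

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
# The separation grows roughly exponentially until the two "forecasts" differ
# as much as two randomly chosen states of the system.
```

The separation grows by orders of magnitude before saturating, which is the behavior that caps the useful range of a single deterministic forecast and motivates ensembles of forecasts.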
On the night of September 7, 1900, the residents of Galveston, Texas, then a coastal city of 37,000 located near Houston, went to sleep thinking it was a night like any other. The next morning, they woke up to the deadliest natural disaster in the history of the United States: the Great Galveston Hurricane. Galveston residents received no warning of the impending calamity. A 15-foot-high storm surge, driven by winds exceeding 130 miles per hour, inundated the city. More than 8,000 lives were lost.
The discovery of the Antarctic ozone hole in 1985 is chronicled, drawing an analogy with the concept of the “black swan,” a metaphor for unprecedented events. The ozone layer in the stratosphere acts as a shield for life on Earth by blocking harmful ultraviolet radiation from the sun. Concern mounted in the 1970s that compounds known as chlorofluorocarbons (CFCs), used as refrigerants and coolants, could be slowly destroying the ozone layer. Joe Farman and the British Antarctic Survey (BAS) found that ozone levels in the stratosphere had dropped dramatically during the Antarctic spring season in the 1980s, creating a “hole” in the ozone layer. US scientist Susan Solomon led an expedition to measure the ozone hole and worked with colleagues to explain the cause: cold Antarctic temperatures allowed the formation of polar stratospheric clouds that catalyzed chemical reactions involving CFCs, leading to rapid ozone loss.
The media often lends more credence to dramatic predictions from individual climate models than does the climate science community as a whole. Models are metaphors of reality; they should be taken seriously, not literally. Predictions made by simple climate models need to be confirmed using more comprehensive climate models. Deep uncertainty surrounds estimates of climate metrics like climate sensitivity, meaning that the error bars presented with these metrics may have their own unquantifiable error bars. It is important to make the distinction between precision and accuracy when evaluating uncertainty estimates of climate parameters. Overconfidence in numerical predictions of extreme climate scenarios may lead to a “doomist” belief that a climate catastrophe is inevitable and that there is nothing we can do to prevent it.
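The precision-versus-accuracy distinction can be shown with a toy calculation. The numbers below are invented for illustration (they are not estimates from any model or study): one set of values is tightly clustered but biased, the other is scattered but centered on the assumed true value.

```python
# Illustrative only: two hypothetical sets of climate-sensitivity estimates
# (numbers invented for this example) showing that a tight spread (precision)
# is not the same as closeness to the true value (accuracy).
import numpy as np

true_value = 3.0                            # pretend "true" sensitivity, deg C

precise_but_biased = np.array([4.10, 4.20, 4.15, 4.25, 4.20])
accurate_but_noisy = np.array([2.00, 3.90, 2.60, 3.50, 3.10])

for name, est in [("precise but biased", precise_but_biased),
                  ("accurate but noisy", accurate_but_noisy)]:
    print(f"{name}: mean = {est.mean():.2f}, spread = {est.std():.2f}, "
          f"bias = {est.mean() - true_value:+.2f}")
# Narrow error bars (precision) say nothing by themselves about how close
# the central estimate is to the truth (accuracy).
```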
The greenhouse effect is a fundamental property of the Earth’s climate that is responsible for keeping the planet warm and habitable for life. The concept of the Goldilocks zone describing the habitability of planets is briefly introduced. Basic concepts in atmospheric physics, such as solar and infrared radiation, are introduced using concepts from everyday life, including an analogy with blankets to explain the greenhouse effect. The two main gases responsible for the greenhouse effect – water vapor and carbon dioxide – are described. The historical background of the discovery of the greenhouse effect, including the very first climate model that predicted global warming (formulated by Svante Arrhenius in the 1890s), as well as the recently uncovered role of Eunice Foote, is presented. Climate feedbacks, which can amplify global warming, are explained.
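The blanket analogy can be made quantitative with the standard zero-dimensional energy-balance calculation sketched below. The solar constant, albedo, and idealized one-layer atmosphere are textbook values and simplifications, not numbers quoted in the chapter.

```python
# A back-of-the-envelope energy-balance calculation of the greenhouse effect,
# using standard textbook numbers rather than values from the chapter.
S = 1361.0        # solar constant, W/m^2
albedo = 0.3      # fraction of sunlight reflected back to space
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

# Without an atmosphere, the surface warms until emitted infrared balances
# absorbed sunlight: sigma * T^4 = S * (1 - albedo) / 4.
T_bare = (S * (1 - albedo) / (4 * sigma)) ** 0.25

# A crude one-layer "blanket": an atmosphere that absorbs all surface infrared
# and re-emits half of it back down doubles the energy reaching the surface.
T_blanket = (2 * S * (1 - albedo) / (4 * sigma)) ** 0.25

print(f"No greenhouse effect:  {T_bare:.0f} K ({T_bare - 273.15:.0f} C)")
print(f"One-layer greenhouse:  {T_blanket:.0f} K ({T_blanket - 273.15:.0f} C)")
# About 255 K (-18 C) without the "blanket" versus roughly 303 K (30 C) with
# an idealized one, bracketing the observed global average of about 288 K (15 C).
```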
Machine learning (ML) is a data-driven modeling approach that has become popular in recent years, thanks to major advances in software and hardware. Given enough data about a complex system, ML allows a computer model to imitate that system and predict its behavior. Unlike a deductive modeling approach, which requires some understanding of a system to be able to predict its behavior, the inductive approach of ML can predict the behavior of a system without ever understanding it in a traditional sense. Climate is a complex system, but there is not enough observed data describing an unprecedented event like global warming on which a computer model can be trained. Instead, it may be more fruitful to use ML to imitate a climate model, or a component of it, to greatly speed up computations. This will allow the parameter space of climate models to be explored more efficiently.
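The emulation idea can be sketched in a few lines. In the example below, the "expensive" component is a made-up stand-in function and the network settings are arbitrary; only the workflow (generate training data from the costly component, fit a cheap statistical imitation, substitute it) reflects the approach described above.

```python
# Sketch of the emulation idea: train an ML model to imitate an "expensive"
# model component, then use the cheap emulator in its place. The function
# below is a made-up stand-in for a real parameterization, and the network
# settings are arbitrary; only the workflow is the point.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_parameterization(X):
    """Pretend physics: a nonlinear response to two input 'climate' variables."""
    return np.sin(3 * X[:, 0]) * np.exp(-X[:, 1] ** 2) + 0.5 * X[:, 1]

# Step 1: run the expensive component to generate training data.
X_train = rng.uniform(-1, 1, size=(5000, 2))
y_train = expensive_parameterization(X_train)

# Step 2: fit a small neural-network emulator to that data.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
emulator.fit(X_train, y_train)

# Step 3: the emulator now stands in for the expensive component.
X_test = rng.uniform(-1, 1, size=(1000, 2))
error = np.abs(emulator.predict(X_test) - expensive_parameterization(X_test))
print(f"mean emulation error: {error.mean():.3f}")
```

Once trained, the emulator is typically orders of magnitude cheaper to evaluate than the component it imitates, which is what makes broader exploration of a model's parameter space feasible.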
Franklin D. Roosevelt, First Inaugural Address, March 4, 1933
Each one of us is an author of the climate change story. How should we ensure that it does not end as a tragedy? How do we deal with the problem formerly known as climate change, and now referred to as the climate crisis, the climate emergency, the climate catastrophe, and the climatocalypse? How optimistic or pessimistic should we be in tackling global warming? The answer depends on personal beliefs and values – it is beyond the realm of pure science.
Climate modeling developed further at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. The laws of physics that form the foundation of weather and climate models imply strict conservation of properties like mass, momentum, and energy. A household budget analogy can be used to explain these conservation requirements, which are stricter for climate models than for weather models. A mismatch in the energy transfer between the atmospheric and oceanic components of a climate model led to a correction technique developed in the 1980s, known as flux adjustment, which violated energy conservation. Subsequent improvements in climate models obviated the need for these artificial flux adjustments. Now we have more complex models, known as Earth System Models, that include biological and chemical processes such as the carbon cycle. The concept of the constraining power of models is introduced.
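The household-budget analogy translates directly into the kind of conservation check modelers apply. The numbers below are hypothetical, not output from any real model; the point is that the budget residual, income minus spending minus the change in savings, should be close to zero.

```python
# The household-budget analogy as a conservation check (hypothetical numbers,
# not output from any real model). Income minus spending must equal the change
# in savings; likewise, heat flowing into the model ocean minus heat flowing
# out must equal the change in the heat it stores.
flux_in_from_atmosphere = 510.4    # W/m^2 absorbed by the model ocean surface
flux_out_to_atmosphere = 508.1     # W/m^2 returned as evaporation, infrared, etc.
change_in_heat_storage = 1.9       # W/m^2 equivalent warming of the model ocean

residual = flux_in_from_atmosphere - flux_out_to_atmosphere - change_in_heat_storage
print(f"energy budget residual: {residual:+.1f} W/m^2")
# A persistent nonzero residual means the coupled model leaks or invents energy.
# Flux adjustment hid such mismatches with an artificial correction term;
# modern models are instead built so that the residual stays near zero.
```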
The Rumsfeld knowledge matrix – which spans the knowledge categories “known knowns,” “known unknowns,” and “unknown unknowns” – is used to illustrate the process of model improvement. Two new knowledge subcategories – “poorly known unknowns” and “well-known unknowns” – are introduced to distinguish between degrees of accuracy in parameterizations. A distinction is made between “downstream benefits” of parameterizations, which improve prediction skill, and “upstream benefits,” which improve understanding of the phenomenon being parameterized but not necessarily the prediction skill. Since new or improved parameterizations add to the complexity of models, it may be important to distinguish between essential and nonessential complexity. The fourth knowledge category in the Rumsfeld matrix is “unknown knowns,” or willful ignorance, which can be used to describe contrarian views on climate change. Contrarians dismiss climate models for their limitations, but typically offer only alternatives born of unconstrained ideation.
Climate modeling requires supercomputers, which are simply the most powerful computers available at a given time. The history of supercomputing is briefly described. As computer chips became faster each year for the same price (Moore’s Law), supercomputers also became faster. But due to technological limitations, chips are no longer becoming faster each year. More chips are therefore needed to build faster supercomputers, which means that supercomputers cost more money and consume more power every year. The newest chips, known as Graphics Processing Units (GPUs), are not as efficient at running climate models, because they are optimized for other applications like machine learning (ML). Extrapolating current trends in computing, we can expect the supercomputers that will run the high-resolution climate models of the future to be very expensive and power hungry.