Climate Modeling

Notes

With the focus of the world on climate change, researchers have been developing models with increasingly higher resolution and ever-greater integration of critical factors such as sea ice and clouds. Key to these modeling efforts has been the dramatic advance in computer power. This power has also enabled much longer simulations—from months to centuries. And over the next few decades, computers are expected to speed up by a factor of a million. One might well ask, as Dr. David Randall has, "What are we going to do with that next million?" His answer: "Run global cloud-resolving models," or GCRMs, that can be used for both numerical weather prediction and climate simulation. To this end, Dr. Randall and his team are developing a new global nonhydrostatic dynamical core based on a geodesic grid and suitable for use with grid spacings ranging from meters to hundreds of kilometers (figure 3). The geodesic grid has the advantage that all grid cells on the sphere are very nearly the same size, with the largest cells only about 5% larger in area than the smallest, thereby avoiding the computational stability problem for advection that the tiny cells near the poles of a latitude-longitude grid would impose. Moreover, contrary to popular belief, the geodesic grid does permit researchers to construct schemes of arbitrarily high-order accuracy; indeed, Dr. Randall's group has already used third-order-accurate finite-difference schemes. Components of the GCRM are being tested on the Cray XT4 at NERSC, and a high-resolution kernel of the model has exhibited good scaling performance on 10,000 processors.
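To make "third-order-accurate" concrete, here is a minimal sketch of a standard upwind-biased third-order finite-difference stencil for a derivative in one dimension. This is a textbook scheme, not the actual discretization used in Dr. Randall's geodesic-grid dynamical core; all names are illustrative.

```python
import numpy as np

def third_order_upwind_derivative(u, h):
    """Approximate du/dx with the upwind-biased third-order stencil
    (u[i-2] - 6*u[i-1] + 3*u[i] + 2*u[i+1]) / (6*h),
    evaluated at interior points i = 2 .. n-2 (flow assumed in +x)."""
    return (u[:-3] - 6.0 * u[1:-2] + 3.0 * u[2:-1] + 2.0 * u[3:]) / (6.0 * h)

# The truncation error is O(h^3), so the stencil differentiates any
# cubic polynomial exactly (up to round-off).
h = 0.1
x = np.arange(0.0, 2.0 + h / 2, h)
u = x**3                       # test field
du = third_order_upwind_derivative(u, h)
exact = 3.0 * x[2:-1]**2       # analytic derivative at the interior points
print(np.max(np.abs(du - exact)))   # round-off level, ~1e-13
```

Halving `h` should reduce the error of a non-polynomial test field (e.g. `sin(x)`) by roughly a factor of eight, which is the practical signature of third-order accuracy.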

Continued improvement of climate models also requires evaluation of their parameterizations, for instance for processes driven by subgrid features such as soil characteristics, vegetation, and land use. Dr. Rao Kotamarthi and his team of researchers at Argonne National Laboratory (ANL) and the University of Chicago are exploring the use of "data ensembles" generated through multiple runs to interpolate sparsely located surface measurements onto a uniform spatial grid, thereby providing estimates of mean values over the entire domain containing the measurement sites. The method has been used successfully for interpolating surface sensible heat flux data from 14 sites within the DOE Atmospheric Radiation Measurement (ARM) program. The researchers next plan to address other measured parameters, many of which require at least a 10-year time series—some at one-minute intervals. The computational challenge is enormous: for example, at 1 km resolution, there are approximately 100,000 interpolated locations and 99 simulations of the full time series at each location. To meet this challenge, Kotamarthi and his colleagues plan to use the Common Component Architecture—a component-based approach, supported by the DOE, in which units of software are encapsulated as components that interact with other components through well-defined interfaces.
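The underlying task—spreading a handful of point measurements across a uniform grid—can be sketched with simple inverse-distance weighting. This is a generic interpolation method for illustration only; it is not the ensemble technique Dr. Kotamarthi's team uses, and the site locations and flux values below are made up.

```python
import numpy as np

def idw_to_grid(site_xy, site_values, grid_x, grid_y, power=2.0):
    """Interpolate scattered site measurements onto a uniform grid by
    inverse-distance weighting: weight w_k = 1 / d_k**power, normalized."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    num = np.zeros_like(gx, dtype=float)
    wsum = np.zeros_like(gx, dtype=float)
    for (sx, sy), v in zip(site_xy, site_values):
        d = np.hypot(gx - sx, gy - sy)
        w = 1.0 / np.maximum(d, 1e-9) ** power   # guard divide-by-zero at a site
        num += w * v
        wsum += w
    return num / wsum

# 14 hypothetical site locations (km) with sensible heat flux values (W/m^2)
rng = np.random.default_rng(0)
sites = rng.uniform(0.0, 100.0, size=(14, 2))
flux = rng.uniform(50.0, 250.0, size=14)
grid = idw_to_grid(sites, flux,
                   np.linspace(0.0, 100.0, 101), np.linspace(0.0, 100.0, 101))
print(grid.shape, grid.mean())   # domain-mean estimate over the full grid
```

Because the normalized weights are positive and sum to one, every interpolated value is a convex combination of the site measurements and stays within their observed range.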

Groundwater Modeling

One of the most challenging problems in environmental remediation involves hazardous materials that have leached into the subsurface and may be more widely dispersed by groundwater to sensitive water resource areas such as rivers and lakes. As part of the SciDAC groundwater science application area, researchers are developing new techniques to simulate radionuclide transport, in particular uranium, at the DOE Hanford 300 Area in the state of Washington. The task is complicated by the fact that the Hanford Unit is highly permeable, so groundwater can flow rapidly even under very small pressure gradients in the aquifer. This situation is further aggravated by rapid fluctuations in the Columbia River, which change not only the magnitude but also the direction of groundwater flow. Uranium leaches very slowly from the Hanford sediment, a process governed by diffusive mass transfer, at concentrations that exceed the EPA maximum permissible level, prolonging its presence at the site. Dr. Peter C. Lichtner is leading a multi-institutional SciDAC team whose members are developing a multiphase, multicomponent code called PFLOTRAN to simulate the variably saturated groundwater flow and reactive transport of uranium at the site. PFLOTRAN is based on a domain decomposition approach in which the computational problem is divided into subdomains, with one subdomain assigned to each processor. The Argonne-developed toolkit PETSc is used as a parallel framework for solvers and message passing within PFLOTRAN. According to Dr. Lichtner, "PETSc hides the communication from the user, thus allowing the application scientist to focus on the science (physics and chemistry in this case) rather than worry about the solvers and preconditioners needed." PFLOTRAN has already been run on a one-billion-node problem—an important proof of concept for petascale computing—and has been demonstrated to scale to 27,580 processor cores on the Cray XT3 Jaguar at Oak Ridge National Laboratory (ORNL).
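The domain decomposition idea can be illustrated with a serial sketch: split a field among subdomains, give each subdomain a one-cell "ghost" copy of its neighbors' boundary values, and apply a stencil only to owned points. In PFLOTRAN the decomposition is three-dimensional and PETSc performs the actual ghost-value exchange over MPI; everything below is a hypothetical one-dimensional stand-in.

```python
import numpy as np

def split_with_ghosts(field, nsub):
    """Divide a 1-D field among nsub subdomains, giving each a one-cell
    ghost layer on interior boundaries (loosely like a PETSc ghosted
    local vector built from a global vector)."""
    chunks = np.array_split(field, nsub)
    locals_ = []
    for r, c in enumerate(chunks):
        left = chunks[r - 1][-1:] if r > 0 else np.empty(0)
        right = chunks[r + 1][:1] if r < nsub - 1 else np.empty(0)
        locals_.append(np.concatenate([left, c, right]))
    return locals_

def laplacian_owned(local, h):
    """Second-difference stencil on a subdomain's interior points,
    reaching into the ghost cells at the subdomain edges."""
    return (local[:-2] - 2.0 * local[1:-1] + local[2:]) / h**2

# 16 cells split over 4 "processors"; u = x^2, so d2u/dx2 = 2 everywhere
h = 1.0
u = np.arange(16, dtype=float) ** 2
parts = split_with_ghosts(u, 4)
results = [laplacian_owned(p, h) for p in parts]
print(np.concatenate(results))   # 14 interior values, all equal to 2.0
```

The point of the abstraction, as Dr. Lichtner's quote suggests, is that the application code only ever sees its local array with ghosts already filled in; where those values come from is the framework's concern.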

Books

A Climate Modelling Primer

  • 1 Climate
    • 1.1 The components of climate
    • 1.2 Climate change assessment
    • 1.3 Climate forcings
    • 1.4 Climate feedbacks and sensitivity
    • 1.5 Range of questions for climate modelling
  • 2 A history of and introduction to climate models
    • 2.1 Introducing climate modelling
    • 2.2 Types of climate models
    • 2.3 History of climate modelling
    • 2.4 Sensitivity of climate models
    • 2.5 Parameterization of climate processes
    • 2.6 Simulation of the full, interacting climate system: one goal of modelling
  • 3 Energy balance models
    • 3.1 Balancing the planetary radiation budget
    • 3.2 The structure of energy balance models
    • 3.3 Parameterizing the climate system for energy balance models
    • 3.4 A basic energy balance climate model
    • 3.5 Energy balance models and glacial cycles
    • 3.6 Box models—another form of energy balance models
    • 3.7 Energy balance models: deceptively simple models
  • 4 Computationally efficient models
    • 4.1 Why lower complexity?
    • 4.2 One-dimensional radiative-convective models
    • 4.3 Radiation: the driver of climate
    • 4.4 Convective adjustment
    • 4.5 Sensitivity experiments with radiative-convective models
    • 4.6 Development of radiative-convective models
    • 4.7 Two-dimensional statistical dynamical climate models
    • 4.8 Other types of computationally efficient models
    • 4.9 Why are some climate modellers flatlanders?
  • 5 General circulation climate models
    • 5.1 Three-dimensional models of the climate system
    • 5.2 Atmospheric general circulation models
    • 5.3 Modelling the ocean circulation
    • 5.4 Modelling the cryosphere
    • 5.5 Incorporating vegetation
    • 5.6 Coupling models: towards the AOBGCM
    • 5.7 Using GCMs
  • 6 Evaluation and exploitation of climate models
    • 6.1 Evaluation of climate models
    • 6.2 Exploitation of climate model predictions
    • 6.3 Integrated assessment models
    • 6.4 The future of climate modelling