Interactive Atlas IPCC

Supporting the IPCC 6th Assessment Report

An overview of all the possible climate futures, a click away. That’s what the Interactive Atlas of the IPCC's Working Group I enables. We were part of the IPCC's Atlas team, implementing the technical aspects of this website: front- and back-end development, data processing, Quality Assurance, and a close co-development process with the IPCC authors and contributors. Overall, the Atlas grants access to 27 global and regional datasets, with 30 climate variables and derived indices. The whole process was undertaken jointly with IFCA-CSIC in the framework of PTI-Clima.

The Atlas reached 445,000 users during the first week after its launch on August 9th, 2021, and currently has a steady flow of ~2,500 daily users.


[Video: IPCC Working Group I Interactive Atlas, from IPCC on Vimeo]

Key features

  • Tool supporting the IPCC AR6
  • Swift design
  • Hundreds of climate simulations
  • Trackable data processing through standard metadata

Climate information supporting IPCC AR6 assessment

Formed by climate change researchers, the IPCC synthesizes all of the information about climate change and makes it available to policy makers and citizens all over the world. The Interactive Atlas is a novel tool from the first Working Group, the one dealing with the physical basis of climate change. As such, the Atlas comprises:

  • Global climate models: in particular, data coming from the Coupled Model Intercomparison Project, in both phases 5 and 6 (CMIP5 and CMIP6). CMIP is an international collaboration designed to improve knowledge of climate change, providing multi-model climate change projections at a worldwide scale. It also includes paleoclimatic data (PMIP4 and 5).
  • Regional climate models: to look at a regional scale, CORDEX downscales the data from global climate models down to different regions. In the Atlas, we make available all the different CORDEX domains (Europe, Africa, Antarctica and so on). Overall, this translates into hundreds of climate simulations across different geographical regions.
  • Climate observations: historical meteorological records with global or regional coverage are distributed across many different repositories. Therefore, the Atlas team decided to integrate data coming from multiple sources of meteorological records, such as Berkeley Earth and CRU TS. It also includes reanalyses: numerical descriptions of the recent climate, produced by combining models with observations. In this case, the Atlas includes datasets like ERA5, from the Copernicus Climate Change Service.
  • Variables and derived indices: over 25 different climate indices can be consulted in the Atlas, from general variables like mean temperature or precipitation, to sector-specific indicators like heating degree days, to contextual information on the scenarios used to drive climate models, such as population and anthropogenic CO2 emissions. These variables and indices have been included to support the assessment done in the chapters, the Technical Summary and the Summary for Policy Makers.
  • Timelines and reference periods: when exploring future climate change scenarios, it is important to have well-defined baselines to compare against. The Atlas allows the user to select among 5 different periods as baselines: from pre-industrial levels (1850-1900), to recent climatological periods (1995-2014).
  • Custom seasons: although most of us are used to the typical seasons (spring, summer, autumn and winter), some weather phenomena exhibit their own “seasons”. That’s the case for monsoons, which exhibit periodic variations across different parts of the world. To take this into account, the tool lets you select predefined seasons, or build your own.
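To make the derived indices above concrete, here is a minimal sketch of one of them, heating degree days, assuming the common 18 °C base temperature (the function name and base value are illustrative; the Atlas documentation defines the exact formulation it uses):

```python
# Hypothetical sketch of a derived index: heating degree days (HDD).
# Assumes the widely used 18 degC base temperature; not the Atlas's
# actual implementation.
def heating_degree_days(daily_mean_temps, base=18.0):
    """Sum of the daily shortfall below the base temperature."""
    return sum(max(0.0, base - t) for t in daily_mean_temps)

# Three days at 15, 20 and 10 degC: (18-15) + 0 + (18-10) = 11
hdd = heating_degree_days([15.0, 20.0, 10.0])
```

Indices like this one are computed per grid cell and per period before being mapped in the Atlas.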

Data management and Quality Assurance

Overall, processing and curating the datasets in the Interactive Atlas took over 1.5 million hours of computing: 171 years of computing time, executed in parallel. This work was undertaken jointly by the Atlas team and IFCA, on the infrastructure the latter maintains. The process was as follows:

  • Acquiring the information: the team downloaded the datasets from ESGF. Although the original datasets had daily resolution, the team aggregated them into monthly resolution to ease handling.
  • Homogenising the information: to allow for intercomparison within the Atlas, each dataset was regridded onto a common grid. This was done with appropriate interpolation techniques, through a process that takes into account the different projections of each dataset and other particularities of the data sources, such as leap versus no-leap calendars.
  • Quality Assurance: a thorough curation of the data was undertaken, screening the model outputs for outliers and taking into account their different inhomogeneities.
  • Delivering the information in an actionable format: some user needs required additional work on the datasets, to provide usable download formats such as GeoTIFF or netCDF.
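The first two steps above can be sketched in miniature. The snippet below is a toy illustration of monthly aggregation and regridding on a 1-D axis, not the Atlas pipeline itself (which handles full 2-D grids, calendars and projections):

```python
import numpy as np

# Toy sketch of two pipeline steps described above; function names are
# illustrative, not the Atlas's actual tooling.

def daily_to_monthly(values, days_per_month):
    """Aggregate a 1-D daily series into monthly means."""
    out, start = [], 0
    for n in days_per_month:
        out.append(values[start:start + n].mean())
        start += n
    return np.array(out)

def regrid_1d(values, src_coords, dst_coords):
    """Linearly interpolate a field from one coordinate axis to another."""
    return np.interp(dst_coords, src_coords, values)

# 31 January days at 2.0 degC, 28 February days at 4.0 degC
daily = np.concatenate([np.full(31, 2.0), np.full(28, 4.0)])
monthly = daily_to_monthly(daily, [31, 28])  # -> [2.0, 4.0]

# Move a toy field, linear in latitude, from a 1-degree to a 2-degree grid
src_lat = np.arange(0, 10, 1.0)
field = src_lat * 1.5
dst_lat = np.arange(0, 10, 2.0)
regridded = regrid_1d(field, src_lat, dst_lat)
```

Real regridding of curvilinear model grids requires 2-D interpolation that conserves the relevant quantities, which is why the team relied on dedicated techniques per data source.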

As a result, over 100 TB of initial information was distilled down to the 1 TB of data handled by the Interactive Atlas. To keep track of this whole process, all of the data available in the Interactive Atlas is accompanied by metadata detailing the post-processing it went through. The standard format for this is METACLIP: a language-independent framework to track the provenance of climate products. This ensures the transparency and accountability of the whole process.

METACLIP: metadata for climate information

Following the FAIR principles, all of the post-processing was tracked and encoded through METACLIP: a standard metadata initiative for climate data provenance. It allows Interactive Atlas users to assess the quality, reliability and trustworthiness of the data they are using. Within the Atlas, this metadata can be consulted in the metadata tab, as seen in the image below.

METACLIP has its roots in the Resource Description Framework (RDF), a standard model for data interchange on the Web defined by the World Wide Web Consortium (W3C). On top of it, METACLIP provides a semantic description for climate products (maps, plots, datasets…). In particular, it covers the following categories:

  • Datasource: describes the origin of the input data and the transformations the data has gone through, such as subsetting, aggregation, anomalies, PCA or climate indices. It also establishes the links between the different transformation commands and arguments in each step.
  • Calibration: encodes the metadata describing the statistical adjustments applied to the climate data: the bias adjustment you can apply through Climadjust; downscaling techniques; or other methods such as variance inflation or ensemble recalibration. The calibration vocabulary follows the framework designed by VALUE, a COST Action European initiative to systematically validate and improve downscaling methods in climate research.
  • Verification: establishes the metadata related to the verification of seasonal forecast products, describing the verification measures applied, as well as the verification aspect that each measure addresses. In addition, this vocabulary also provides a conceptual scheme to define other forms of climate validation.
  • Graphical: this aims at describing graphical products, like charts and maps, including a characterization of uncertainty types and how they are communicated.
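The RDF idea behind METACLIP can be illustrated with a tiny provenance chain. The triples and class names below are simplified stand-ins, not the actual METACLIP vocabulary; the point is that every product links back, step by step, to its source data:

```python
# Illustrative RDF-style provenance chain (subject, predicate, object).
# Class/property names are hypothetical simplifications of a
# METACLIP-like vocabulary, not its real terms.
provenance = [
    ("#dataset1", "rdf:type", "ds:Dataset"),
    ("#step1", "rdf:type", "ds:Aggregation"),
    ("#step1", "prov:used", "#dataset1"),
    ("#step1", "ds:withArgument", "fun=mean, freq=monthly"),
    ("#map1", "rdf:type", "go:Map"),
    ("#map1", "prov:wasDerivedFrom", "#step1"),
]

def lineage(node, triples):
    """Walk derivation links backwards to list every step behind a product."""
    chain = [node]
    for s, p, o in triples:
        if s == node and p in ("prov:wasDerivedFrom", "prov:used"):
            chain += lineage(o, triples)
    return chain

history = lineage("#map1", provenance)
```

Traversing the chain from a map back to the raw dataset is exactly what lets a user audit which transformations, with which arguments, produced the figure they see.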

Uncertainty, accessibility and data visualisation

As important as the data itself is how we present it to the world. On the Interactive Atlas you will see a wide range of data visualisations: from classic climate change choropleths, to time series and climate stripes. One of the key concerns of the Atlas team throughout was how to ingrain the uncertainty of climate models into the visualisations. The most evident approach to communicating uncertainty is hatching: using parallel and crossed lines to mark out the areas of the map where the signal of the climate models is uncertain. The Atlas provides two types of hatching:

  • Simple: linear hatching that covers the areas of the map where there is low model agreement.
  • Advanced: linear hatching is used to mark areas where there is no change or no robust signal, while cross hatching is used to cover areas where the models have conflicting signals.
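The "simple" rule above can be sketched as a mask over the map: hatch the grid cells where too few ensemble members agree on the sign of the change. The 80% threshold below is illustrative; the Atlas documents its exact robustness criteria:

```python
import numpy as np

# Toy sketch of a model-agreement mask for simple hatching.
# The 0.8 agreement threshold is an assumption for illustration,
# not the Atlas's published criterion.
def agreement_mask(ensemble, threshold=0.8):
    """ensemble: (n_models, ny, nx) array of projected changes.
    Returns True where models DISAGREE, i.e. the cell should be hatched."""
    n = ensemble.shape[0]
    pos = (ensemble > 0).sum(axis=0)
    agree = np.maximum(pos, n - pos) / n  # fraction sharing the majority sign
    return agree < threshold

# Three models on a 2x2 grid of projected temperature changes
changes = np.array([
    [[1.2, -0.1], [0.8, 0.9]],
    [[1.0,  0.2], [0.7, -1.1]],
    [[0.9, -0.3], [0.6, 1.0]],
])
mask = agreement_mask(changes)
# cells where only 2 of 3 models share a sign (2/3 < 0.8) get hatched
```

The resulting boolean mask is what a plotting layer would turn into hatch patterns over the uncertain regions.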

In other visualisations, like the time series, uncertainty translates into plotting the whole ensemble of models instead of just their mean, and shading in grey the periods of time where a certain global warming level (1.5 ºC, 2 ºC) is reached.

Uncertainty communication is sprinkled throughout the broad variety of visualisations included in the Atlas. Climate stripes, seasonal climate stripes, Global Warming Levels… we wanted to offer a variety of visualisations, to explore different aspects of the data and meet the different needs of the users. To tackle this variety of needs, a tool like the Atlas has to keep accessibility at the centre of the design process. Between 5 and 10% of the world population is colour-blind. Users who are mobility-impaired, or vision-impaired and unable to see clickable elements on the page, rely on keyboard navigation. We have kept all of this in mind: the colour palettes were discussed with the IPCC to be colour-blind safe, and the tool is fully keyboard-navigable. In addition, our team at Predictia made sure that the interactive elements are supported by screen readers.

FAIR data

In this new cycle, the IPCC promoted transparency and reproducibility. That is tied to following the FAIR principles: making everything Findable, Accessible, Interoperable and Reusable. Particular stress was put on reusability, so the Atlas team made additional Jupyter notebooks available in their GitHub repository.

Count on us for your next project! Contact us to get more information: