Contributors: John Coll (NUIM), Ben Müller (LMU), Bas Crezee (ETH), Chiara Cagnazzo (CNR) and Chunxue Yang (CNR)

Reference version | Date | Modification summary | Sections | Main editor
v2.0 | 03-03-2020 | Porting from EQC framework definition reports | Full document | BM
 | 03-03-2020 | Including draft for improved adaptation section | 1.2-1.3 | BM
 | 03-03-2020 | Adjusted all scores to reach a maximum of 6 | 2.1-2.5 | BM
 | 11-03-2020 | Introduce Unified Scoring texts; shift CORE-CLIMAX reference to Addendum; add Appendix tables | 2.1-2.5, Addendum, Appendix | BM
v3.0 | 11-03-2020 | Improve adaptation section | 1.2-1.3 | JC
 | 20-05-2020 | Adapt for user friendliness | Full document | BM


1. Background

1.1 The maturity matrix approach

Data stewardship encompasses all activities that preserve and improve the information content, accessibility, and usability of data and metadata (NRC, 2007). Increasingly stringent regulations levying enhanced stewardship requirements on digital scientific data have increased the need for a formal approach by data stewardship practitioners. The application of existing maturity assessment models is therefore becoming a key component of practice for data service centres.
Among its wider activities, the World Meteorological Organization (WMO) fosters collaboration to develop technical guidance and standards for the collection, processing and management of data and forecast products. Part of this process is developing a tool that will enable data providers to assess and rate their datasets quantifiably, based on internationally validated data stewardship best practices (SMM-CD Working Group, 2019). This assessment of the stewardship maturity of individual datasets is an essential part of ensuring and improving the way datasets are documented, preserved, and disseminated to users (Peng et al., 2019). While it is challenging to do this consistently and quantifiably, such assessments of stewardship activities, together with an assessment of the usage and application of any given data product, are an essential component of the transition to high-quality climate services provision.
In this wider context, a System Maturity Matrix (SMM) evaluates whether the production of a data record follows best practices, both for specific technical aspects and for facilitating usage. For instance, the quality of a data product depends in part on the stewardship practices applied to it after its development and production. Therefore, in order to align the Evaluation and Quality Control (EQC) Service with evolving best practice and current global initiatives, a common and consistent framework for applying the SMM has to be established. It follows, however, that for any new product to attain full maturity across all the categories the SMM seeks to assess, there will be a lag between an initial dataset release and the product attaining maturity in terms of both stewardship and usage.
The SMM for Climate Data Records (CDRs) was developed under the Coordinating Earth Observation Data Validation for Reanalysis for Climate Services project (CORE-CLIMAX) (Schultz et al., 2017; Su et al., 2018). The aim was to develop a tool to assess different facets of a CDR and to establish, semi-objectively, the extent to which data record management follows the best practices accumulated by the scientific and engineering communities (EUMETSAT, 2014). Since many observing systems were designed to measure weather rather than to monitor climate, various assumptions, approximations and associated uncertainties have to be characterized as part of the assessment process. In the SMM framework, assessments are made in six major categories, and a score of 1 to 6 is assigned that reflects the maturity of the CDR with respect to each category:

  1. Software readiness
  2. Metadata
  3. User documentation
  4. Uncertainty characterization
  5. Public access, feedback, and update
  6. Usage

The major categories of the SMM are further subdivided into several minor categories, and the major-category scores are derived from the scores in these minor categories. For each subcategory an assessment score of 1 to 6 is assigned, with the overall aim of reflecting the maturity of the various aspects of the CDR.
While ascertaining the maturity of climate data records involves many sensitive aspects, the best-practices approach of the CORE-CLIMAX project has met with community acceptance (Su et al., 2018). The EQC Service applies the SMM concepts to a number of datasets within the CDS. Hereafter, we refer to the adapted approach as the Maturity Matrix (MM) rather than the System Maturity Matrix. The remainder of this document therefore applies the MM support guidance notes, with modifications and some deviations where appropriate, from the CORE-CLIMAX V4 documentation (EUMETSAT, 2014). For the purposes of MM evaluation in this Service, that documentation should be considered a key reference.

1.2 MM adapted for EQC

The adapted MM is derived from the CORE-CLIMAX template (see Addendum at the end of the document) and the various categories therein, which have been adapted here to fit the specific Service needs. This implies that (a) we seek to make the MM applicable to satellite data, reanalysis, and in-situ data, and (b) we leave out categories that an expert user of the dataset is not able to assess without knowledge input from the data providers. The overall aim is thus to adopt current best practice by adapting available tools to our specific Service needs. To date, much of our assessment activity has centred on the ERA5 reanalysis data; this reflects the order in which datasets are made available on the CDS as the wider Service evolves. Consequently, as the range of available data types changes, the approaches to maturity assessment adopted here will evolve with initiatives in wider best practice.
In line with these modifications, the Service has explicitly dropped the following categories from the CORE-CLIMAX template because they require expert knowledge that only a data provider can supply:

  • The "Software Readiness" category including all subcategories;
  • The "File Level" subcategory from the "Metadata" category;
  • The "Formal Description of Operations Concept" subcategory from the "User Documentation" category.


There are templates available in the Appendix for the preparation of the full MM. Once all scores in the MM categories we evaluate have been entered, we can generate an overview of the maturity of the dataset as a summary for the prospective user; a minimal sketch of this collation step is given below.
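The following Python sketch is purely illustrative: it collates entered subcategory scores into a per-category overview. The category and subcategory names follow Section 2 of this document, while the placeholder scores and the summary rule (minimum and mean per category) are assumptions made for the example, not part of the MM definition.

```python
# Illustrative collation of MM subcategory scores into a dataset overview.
# Scores are placeholders; the min/mean summary rule is an assumption.
mm_scores = {
    "Metadata": {"Standards": 4, "Collection Level": 5},
    "User Documentation": {
        "Formal Description of Scientific Methodology": 5,
        "Formal Validation Report": 4,
        "Formal Product User Guide (PUG)": 5,
    },
    "Uncertainty Characterization": {
        "Standards": 3, "Validation": 4,
        "Uncertainty Quantification": 4, "Automated Quality Monitoring": 3,
    },
    "Public Access, Feedback and Update": {
        "Access and Archive": 5, "Version Control": 4,
        "User Feedback": 4, "Updates to Record": 5,
    },
    "Usage": {"Research": 5, "Decision Support Systems": 3},
}

for category, subs in mm_scores.items():
    values = list(subs.values())
    print(f"{category}: min={min(values)}, "
          f"mean={sum(values) / len(values):.1f} "
          f"({len(values)} subcategories)")
```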

2.  Assessor Guidance Notes

For convenience, and so that a synthesis of the information is available in one location, the subsequent sections of this document reproduce the relevant sections of the CORE-CLIMAX guidance, modified as appropriate to the Service needs.
What is provided below is therefore an adapted summary of Sections 4.2 to 4.6 of the CORE-CLIMAX guidance (see Addendum). Where relevant, additional content or notes have been added for clarification. For a template on completing the 'Defensible Traces' category in the spreadsheet, see the Appendix of the information collated here. For clarity in cross-referencing back to the original information in the CORE-CLIMAX manual: Section 2.1 (and its sub-sections) starts with Metadata, and Section 2.5 (and its sub-sections) relates to Usage.
Important note: The tables in the subsections below give guidance on how to score the different categories. Scores must be based on objective criteria; personal knowledge or communication should not affect the scoring decision. The scores are supplemented with a suggested formulation of the wording to be used for the defensible traces in the maturity matrix assessments. These formulations should be used wherever possible, so as to produce homogenized report sections in terms of both the scores assigned and the qualifying statements made. If the formulations do not fit a special case arising for a specific dataset, the report author should still use the mandatory terms in an individual formulation. If an author cannot make use of all mandatory terms, it is probable that the score in question cannot be reached by the dataset. The author also has the option to add extra notes to the formulations, if necessary, such as defining the standards mentioned or highlighting a specific reference.

2.1 Metadata

Metadata is 'data' about data. Data providers are responsible for providing the metadata applicable to their product, and ideally metadata should be standardized as completely as possible. In this category the maturity is assessed using two minor categories, which consider the standards used and the metadata at the collection level, i.e., metadata valid for the complete data record. (The file-level subcategory, i.e., metadata valid for the data at a specific granularity, is dropped in this adaptation; see Section 1.2.)

2.1.1 Standards

It is considered good practice to follow international standards such as ISO 19115-1:2014 (https://www.iso.org/standard/53798.html) or the Climate and Forecast conventions (CF: http://cf-pcmdi.llnl.gov/), or organizational standards such as those of NOAA/NCDC or the Marine Environmental Data and Information Network (MEDIN: http://www.oceannet.org/).

Table 2: The 6 maturity scores in sub-category Standards.

Score | Description | Suggested formulation | Mandatory terms
1 | Not assigned | - | -
2 | Not assigned | - | -
3 | No standard considered | No standards were recognized for the dataset. | "No standards"
4 | Metadata standards identified and/or defined but not systematically applied | There are standards identified/defined within the metadata and/or documentation of the dataset. However, the standards are not systematically applied. | "standards identified/defined", "not systematically applied"
5 | Score 4 + standards systematically applied at file level and collection level by data provider; meets international standards for the dataset | There are standards identified/defined within the metadata and/or documentation of the dataset. These standards are systematically applied and international standards are met. | "standards identified/defined", "systematically applied", "international standards"
6 | Score 5 + metadata standard compliance systematically checked by the data provider | There are standards identified/defined within the metadata and/or documentation of the dataset. These standards are systematically applied and international standards are met. Furthermore, the compliance is coherent for all datasets. | "standards identified/defined", "systematically applied", "international standards", "coherent for all datasets"


Additional information:
The assessment can be made as follows:
Score 3: No standards identified;
Score 4: Standard identified/defined means that the data record producer has identified or defined the standard to be used but has not applied it. This information can most often be found in format description documents available from web pages, or in statements on web pages;
Score 5: Systematic application requires that the standard can be found in every file of the data product and in the descriptions;
Score 6: The data provider has implemented procedures to check the metadata contents. A minimal sketch of such a check is given below.
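As an illustration of such a compliance check, the following sketch scans every file of a product for a CF 'Conventions' global attribute, in the spirit of Scores 5 and 6. The directory name and the choice of attribute are assumptions for the example; a real compliance checker (e.g., an official CF checker) would test far more than this.

```python
# Illustrative per-file check for a declared CF metadata standard.
from pathlib import Path

import xarray as xr

files = sorted(Path("product_dir").glob("*.nc"))  # hypothetical location
missing = []
for f in files:
    with xr.open_dataset(f) as ds:
        # CF-compliant files declare e.g. Conventions = "CF-1.8".
        if not ds.attrs.get("Conventions", "").startswith("CF-"):
            missing.append(f.name)

if missing:
    print(f"{len(missing)} of {len(files)} files lack a CF Conventions tag")
else:
    print("Standard systematically applied at file level")
```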

2.1.2 Collection Level

These are attributes that apply across the whole of the data set, such as digital object identifier, processing methods (e.g., same algorithm versions), general space and time extents, creator and custodian, references, and processing history. Discovery metadata is part of this: information that allows other people to find out what the data set contains, where it was collected, and where and how the data record is provided.

Table 3: The 6 maturity scores in sub-category Collection Level.

Score | Description | Suggested formulation | Mandatory terms
1 | None | There are no standardized attributes on the collection level of the dataset. | "no standardized attributes on the collection level"
2 | Limited | There is only a very limited set of standardized attributes on the collection level of the dataset, e.g. [list of attributes]. | "very limited set of standardized attributes on the collection level"
3 | Sufficient to use and understand the data independent of external assistance; sufficient for the data provider to extract discovery metadata from metadata repositories | The standardized attributes on the collection level of the dataset are sufficient to understand the data's origins without further documents. | "standardized attributes on the collection level", "sufficient to understand the data's origins"
4 | Score 3 + enhanced discovery metadata | The standardized attributes on the collection level of the dataset are sufficient to understand the data's origins without further documents, including information on how to obtain raw data and its preprocessing procedures. | "standardized attributes on the collection level", "sufficient to understand the data's origins", "obtain/preprocess raw data"
5 | Score 4 + complete discovery metadata meets international standards | The standardized attributes on the collection level of the dataset are sufficient to understand the data's origins without further documents, including standardized information on how to obtain raw data and its preprocessing procedures. | "standardized attributes on the collection level", "sufficient to understand the data's origins", "standardized information on obtaining/preprocessing raw data"
6 | Score 5 + regularly updated | The standardized attributes on the collection level of the dataset are sufficient to understand the data's origins without further documents, including standardized information on how to obtain raw data and its preprocessing procedures. A [record/recording] of the history of updates to the dataset is available in the metadata. | "standardized attributes on the collection level", "sufficient to understand the data's origins", "standardized information on obtaining/preprocessing raw data", "history about updates in metadata"


Additional information:
The assessment can be made as below:
Score 1: Data files have no global attributes;
Score 2: Only attributes like space and time coverage and the custodian of data are provided, but no information on measurement/processing methods or history are available;
Score 3: All relevant information on processing (for example, retrieval input radiance data version and provenance) and for a general understanding of the data (such as references and comments) is provided. It also contains information on how to extract discovery metadata from repositories;
Score 4: Score 3 + more information on discovery metadata (for example, how to obtain raw data (level 0 in case of satellites) and the necessary information to process those data);
Score 5: Score 4 + all the available information on the data are provided with the data using a defined standard;
Score 6: Score 5 + Updates are provided whenever new metadata become available. For example, information on events impacting the quality of the data record (e.g., information provided at https://www.ospo.noaa.gov/Operations/POES/status.html), or the addition of commentary metadata such as publications written about the data record. A minimal sketch of a collection-level attribute check is given below.
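The sketch below illustrates how an assessor or provider might report which collection-level (global) attributes a file carries. The checklist is hypothetical, loosely based on the examples given in this subsection, and the file name is a placeholder.

```python
# Illustrative report of collection-level (global) attributes in a file.
import xarray as xr

# Assumed discovery-metadata checklist, not a formal standard.
EXPECTED = ["title", "institution", "source", "history",
            "references", "comment", "id"]

with xr.open_dataset("example_product.nc") as ds:  # hypothetical file
    present = sorted(k for k in EXPECTED if k in ds.attrs)
    absent = [k for k in EXPECTED if k not in ds.attrs]

print("Present collection-level attributes:", present)
print("Absent collection-level attributes:", absent)
```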

2.2 User documentation

Documentation is essential for the effective use and understanding of a data record. Three minor categories are used to assess the completeness of user documentation.

2.2.1 Formal Description of Scientific Methodology

This refers to a description of the physical basis of the measurements and of the processing of the raw data to higher levels (in the case of satellite data this involves geo-location, calibration, inter-calibration, retrieval methods, and space-time averaging methods). For station-based data records this can be descriptions of data filtering, corrections, aggregation procedures, etc. For reanalysis this would include the description of the data assimilation techniques, the physical model used, etc. An example of a formal description is an Algorithm Theoretical Baseline Document (ATBD), as provided, e.g., for a satellite retrieval algorithm. As such documents are most often subject only to an agency-internal review process, a peer-reviewed publication on the methodology is also required to increase the maturity.

Table 4: The 6 maturity scores in sub-category Formal Description of Scientific Methodology.

Score | Description | Suggested formulation | Mandatory terms
1 | Limited scientific description of methodology available from PI | The scientific description is limited and is not publicly available without contacting the data provider, or is available only from non-peer-reviewed literature. | "limited scientific description", "no published description"
2 | Comprehensive scientific description available from PI; journal paper on methodology submitted | The scientific description is comprehensive but is not publicly available without contacting the data provider, or is available only from non-peer-reviewed literature. There is a methodological journal paper submitted but not yet published. | "comprehensive scientific description", "description submitted as paper"
3 | Score 2 + journal paper on methodology published | The scientific description is comprehensive but is not publicly available without contacting the data provider, or is available only from non-peer-reviewed literature. There is a peer-reviewed methodological journal paper published. | "comprehensive scientific description", "description published as paper"
4 | Score 3 + comprehensive scientific description available from data provider | The scientific description is comprehensive and publicly available in the form of a [scientific report/ATBD]. There is also a peer-reviewed methodological journal paper published. | "comprehensive scientific description in a [scientific report/ATBD]", "description published as paper"
5 | Score 4 + comprehensive scientific description maintained by data provider | The scientific description is comprehensive and publicly available in the form of a [scientific report/ATBD]. The description is kept up to date with the updated dataset. There is also a peer-reviewed methodological journal paper published. | "comprehensive scientific description in a [scientific report/ATBD]", "description is updated", "description published as paper"
6 | Score 5 + journal papers on product updates published | The scientific description is comprehensive and publicly available in the form of a [scientific report/ATBD]. The description is kept up to date with the updated dataset. There are also a number of peer-reviewed methodological journal papers published in parallel with the dataset updates. | "comprehensive scientific description in a [scientific report/ATBD]", "description is updated", "description published as paper", "paper[s] on updates"


Note: Please provide references with the defensible traces texts.
Additional information:
EXAMPLE: Satellite retrieval algorithm:
Score 1: Draft of ATBD for the retrieval algorithm is available, e.g. on the Internet. To assess it one would search the web pages on the data record for an ATBD;
Score 2: Complete version of ATBD(s) is available which includes all the steps which were used to produce the data set from basic measurements to the final product. The method is also summarized and submitted to a relevant journal for publication. The latter can be hard to assess from outside but often submitted papers appear on web pages of existing data records;
Score 3: In addition to Score 2 a journal paper is available which can be checked using tools such as Web of Science;
Score 4: ATBD is available from the data provider. E.g., if a data record is transferred from a research group to an operational/research agency that takes responsibility for production and/or distribution of the data record (which is part of sustaining the data record, as measured by the maturity), the documentation of the methodology shall appear on the data provider's web site. It is assumed that the documents have passed each agency's internal review processes before they appear. To assess this, one needs to browse the data provider's web site.
Score 5: This score is related to updates of the documentation following updates of the data record (see Public Access, Feedback and Update). A sign of maintenance is that the ATBD has proper document version numbering and refers to a specific version of the data record;
Score 6: The ultimate score in this example is that each update in the retrieval algorithm is also published in peer reviewed literature, i.e., accepted by the community through the anonymous review process developed by the community.
Note: In case of in situ data sets or reanalysis ATBD may not be the name of the document. In that case measurement manual, post-processing manual, model descriptions or other technical reports can have the same functionality as the ATBD. It is however required that a description of the method is available to the public.
EXAMPLE: Reanalysis:
Score 1: A draft description of the data assimilated and the physical models used is available, e.g., on the Internet. To assess it one would search the web pages on the data record;
Score 2: Complete descriptions of the data assimilation techniques and the physical model(s) used are available, covering all the steps which were used to produce the data. The method is also summarized and submitted to a relevant journal for publication. The latter can be hard to assess from outside, but submitted papers often appear on the web pages of existing data records;
Score 3: In addition to Score 2 a journal paper is available which can be checked using tools such as Web of Science;
Score 4: Descriptions of the physical model and the data assimilation techniques used are available from the data provider. It is assumed that the documents have passed internal review processes before they appear. To assess this, one needs to browse the data provider's web site.
Score 5: This score is related to updates of the documentation following updates of the data record (see Public Access, Feedback and Update). A sign of maintenance is that the reanalysis version has proper document version numbering (e.g. IFS Documentation CY41R2), and that data updates are documented;
Score 6: The ultimate score in this example is that each update in the production chain is fully documented in peer reviewed literature and/or through institutional websites, as necessary.

2.2.2 Formal Validation Report

A formal validation report contains details on the validation activities that have been undertaken to assess the fidelity of the data record. It describes the uncertainty characteristics of the data record found through the application of uncertainty analysis (see the section on Uncertainty Characterization), and provides all relevant references.

Table 5: The 6 maturity scores in sub-category Formal Validation Report.

Score | Description | Suggested formulation | Mandatory terms
1 | None | There is no validation report available. | "no validation report"
2 | Report on limited validation available from PI | Validation information is limited and is not publicly available without contacting the data provider, or is available only from non-peer-reviewed literature. | "limited validation information", "no published information"
3 | Report on comprehensive validation available from PI; paper on product validation submitted | Validation information is comprehensive but is not publicly available without contacting the data provider, or is available only from non-peer-reviewed literature. There is a journal paper on product validation submitted but not yet published. | "comprehensive validation information", "validation submitted as paper"
4 | Report on inter-comparison to other CDRs, etc. available from PI and data provider; journal paper on product validation published | The inter-comparison report is comprehensive and publicly available. There is a peer-reviewed journal paper on product validation published. | "inter-comparison report publicly available", "validation published as paper"
5 | Score 4 + report on data assessment results exists | The inter-comparison and data assessment report is comprehensive and publicly available. There is a peer-reviewed journal paper on product validation published. | "inter-comparison and data assessment report publicly available", "validation published as paper"
6 | Score 5 + journal papers on more comprehensive validation (e.g., error covariance, validation of qualitative uncertainty estimates) published | The inter-comparison and data assessment report is comprehensive and publicly available. There is a peer-reviewed journal paper on extended product validation published. | "inter-comparison and data assessment report publicly available", "extended validation published as paper"


Note: Please provide references with the defensible traces texts.
Additional information:
EXAMPLE: Satellite retrieval of temperature profiles 
Score 1: No validation is done and hence no report;
Score 2: Report on limited validation done using sounding data from a few stations is available by directly asking the PI or from PI's web pages;
Score 3: Detailed report on validation using radiosonde profiles with global representativeness in space and time. Quality controlled radiosonde data such as IGRA or data from reference upper air stations such as GRUAN has been used for validation. PI has also submitted an article on the product and its validation to publish in a peer-review journal. In most cases the report and the submitted article can be found on PI's web pages or it can be obtained by asking the PI;
Score 4: Reports on inter-comparisons to other satellite derived temperature profile data sets are available at this stage both from PI and the data provider. Article submitted on validation is now published and is available from PI/data provider's web page and listed in e.g., Web of Science;
Score 5: The data record has appeared in assessment reports such as from GEWEX;
Score 6: More papers on uncertainty characterization are published, and the data set developer/provider maintains up-to-date information on uncertainty in their data records. Two examples:
Remote Sensing Systems maintains a web page describing uncertainty in their upper air temperature data set:
http://www.remss.com/measurements/upper-air-temperature#Uncertainty
Met Office Hadley Centre maintains a web page for describing uncertainties in their sea surface temperature data set:
http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html
Both pages contain a comprehensive list of peer-reviewed publications documenting uncertainties in these data sets.

2.2.3 Formal Product User Guide (PUG)

This subcategory evaluates the Product User Guide, which should provide: a definition of the data set; the requirements considered while developing the data set; an overview of input data and methods; general quality remarks; validation methods and the estimated uncertainty in the data; strengths and weaknesses of the data; a format and content description; references; and contact details.

Table 6: The 6 maturity scores in sub-category Formal Product User Guide.

Score | Description | Suggested formulation | Mandatory terms
1 | Not assigned | - | -
2 | None | There is no formal Product User Guide (PUG) for the dataset. | "no formal Product User Guide (PUG)"
3 | Limited product user guide available from PI | There is only a limited formal Product User Guide (PUG) for the dataset, available only by contacting the data provider. | "limited formal Product User Guide (PUG)"
4 | Comprehensive user guide available from PI | There is a comprehensive formal Product User Guide (PUG) for the dataset, available only by contacting the data provider. | "comprehensive formal Product User Guide (PUG)"
5 | Score 4 + available from data provider | There is a comprehensive formal Product User Guide (PUG) for the dataset publicly available. | "comprehensive formal Product User Guide (PUG) publicly available"
6 | Score 5 + regularly updated by data provider with product updates and/or new validation results | There is a regularly updated comprehensive formal Product User Guide (PUG) for the dataset publicly available. | "comprehensive formal Product User Guide (PUG) publicly available", "regularly updated"


Note: Please provide references with the defensible traces texts.
Additional information: 
The assessment can be made as follows:
Score 2: PI has not written a user guide yet;
Score 3: A draft user guide may be available from PI directly or from PI's web pages;
Score 4: A complete and reviewed (for example by the data provider) user guide is available from PI's webpage. At this stage the user guide shall contain all details given in the above paragraph;
Score 5: Score 4 + user guide is available from data provider's web page as well;
Score 6: An updated user guide is available from the data provider's web page. A sign of updating is increasing version numbering; this is related to updates in the data record itself.

2.3 Uncertainty Characterization

This category assesses the practices used to characterize and represent uncertainty in a data record. Four minor categories are considered, encompassing the standards used, the validation process, how uncertainty is quantified, and whether automated quality monitoring, which increases the efficiency of production and validation, is implemented.
For the categories "standards" and "validation", publications, reports, or metadata may be sufficient. For "uncertainty quantification", it is important that all given uncertainty data are also directly connected to the actual data within the same file (or at least in supplemental files).

2.3.1 Standards

There are no international standards as such available for uncertainty characterization. However, there is a compelling need for this. Organizational and program standards are sometimes available (e.g., NOAA CDRP). There are basically two areas where standards play an important role:

  • Uncertainty nomenclature, which should follow established definitions such as those provided by metrological institutions. For instance, provide a summary of the validation activities performed for the product, followed by a summary of the systematic and random uncertainty of the product and how these vary with space, time and state. In particular, information on the temporal stability of the data is useful, as it indicates whether the data can be used for longer-term variability and trend analysis.

Further information on aspects of uncertainty in measurement can be found in the GAIA-CLIM project deliverable (2017): http://www.gaia-clim.eu/sites/www.gaia-clim.eu/files/document/d2_6_final.pdf

  • SI traceability is the property of the result of a measurement or the value of a standard whereby it can be related to stated references usually relating to national or international standards, through an unbroken chain of comparisons, all of which have stated uncertainties.

The first bullet point above indicates that emphasis should be put on the use of existing and correct definitions of uncertainty measures, so as to make results from validation studies concerning the same ECV more comparable.
To support a claim of traceability, the provider of a measurement result or value of a standard must document the measurement process or system used to establish the claim, and provide a description of the chain of comparisons used to establish a connection to a particular stated reference. For satellite data records, the second bullet point above provides a practical indication that uncertainty arising from systematic and random effects in the measurements should be provided for each step of the product generation, including, e.g., pre-launch and post-launch calibrations as well as inter-calibration of instruments, retrieval, sampling, and aggregation steps. Ultimately, traceability should be related to reference data: laboratory measurements, reference measurements such as those from the GCOS Reference Upper-Air Network (GRUAN), or data from high-spectral-resolution and stable space-based instruments such as AIRS/IASI may be used to characterize uncertainties. As absolute references are not readily available, measurements may be taken as reference if their accuracy is about one order of magnitude better than that of the measurement being assessed.
For in situ data, traceability can be established by calibrating networks of measurement devices against a laboratory reference instrument, or through measurement device inter-comparison activities.
It is acknowledged that for reanalysis systems SI traceability is difficult to establish. However, it can be assessed whether the quality of the input data to the assimilation system is characterized in a traceable manner, and whether the provided uncertainty estimates are used in the data assimilation process or in other usage of the data, e.g., as boundary conditions in ensemble model runs.
The 'SI' element of traceability means that any unit used shall be traceable back to the seven well-defined base units of the SI system: the meter, the kilogram, the second, the ampere, the kelvin, the mole, and the candela. A minimal numerical sketch of combining stated uncertainties along a comparison chain is given below.
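The following sketch illustrates the GUM-style combination of stated standard uncertainties along a hypothetical comparison chain: assuming independent effects, the combined standard uncertainty is the root-sum-square of the contributions from each link. All link names and values are placeholders, not taken from any real traceability budget.

```python
# Illustrative root-sum-square combination of stated standard
# uncertainties along an (assumed) unbroken comparison chain.
import math

chain = {
    "laboratory reference": 0.05,    # K, placeholder values
    "pre-launch calibration": 0.10,
    "inter-calibration": 0.15,
    "retrieval": 0.30,
}

# Combined standard uncertainty, assuming independent effects.
u_combined = math.sqrt(sum(u ** 2 for u in chain.values()))
print(f"Combined standard uncertainty: {u_combined:.2f} K")
```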

Table 7: The 6 maturity scores in sub-category Standards.

Score | Description | Suggested formulation | Mandatory terms
1 | None | No standards were recognized for the uncertainties of the dataset. | "No standards"
2 | Standard uncertainty nomenclature is identified or defined | Uncertainty information has a standard nomenclature definition. | "standard nomenclature definition"
3 | Score 2 + standard uncertainty nomenclature is applied | Uncertainty information follows standard nomenclature. | "follows standard nomenclature"
4 | Score 3 + procedures to establish SI traceability are defined | Uncertainty information follows standard nomenclature. The reference data for the uncertainty calculations can be identified. | "follows standard nomenclature", "reference data identified"
5 | Score 4 + SI traceability partly established | Uncertainty information follows standard nomenclature. The reference data for the uncertainty calculations can be identified and the approach is of limited traceability. | "follows standard nomenclature", "reference data identified", "limited traceability"
6 | Score 5 + SI traceability established | Uncertainty information follows standard nomenclature. The reference data for the uncertainty calculations can be identified and the approach is fully traceable. | "follows standard nomenclature", "reference data identified", "fully traceable"


Additional information:
The assessment can be made as follows:
Score 1: Nothing has been done in early stages of development;
Score 2: The data provider states in the documentation or on web pages which nomenclature is used but no consistent application of it can be verified;
Score 3: Score 2 + the application of the nomenclature is evident from documents such as validation reports and user guides;
Score 4: Score 3 + a document exists that describes how traceable comparison chains to a specified reference will be established;
Score 5: Score 4 + the steps in the aforementioned document are implemented as far as possible. It is known that, in particular for satellite measurements, no true references exist in space, but if an unbroken chain of comparisons to the best available instrument is established, Score 5 can be assigned;
Score 6: Score 5 + the traceability is fully established. Perhaps no existing data record will reach Score 6 until true reference measurements in space are provided, but by its not being achieved the need remains documented.
Note: The maturity levels start with the nomenclature and finish with the SI traceability because this presents the logical order in which a system to quantify and present uncertainty would be built.

2.3.2 Validation

This minor category evaluates the extent to which the product has been validated to provide uncertainty estimates.

Table 8: The 6 maturity scores in sub-category Validation.

Score | Description | Suggested formulation | Mandatory terms
1 | None | No validation was recognized for the dataset. | "No validation"
2 | Validation using external reference data done for limited locations and times | A limited validation based on a small set of reference data is reported. | "limited validation"
3 | Validation using external reference data done for globally and temporally representative locations and times | A validation based on a representative set of reference data is reported. | "representative set of validation data"
4 | Score 3 + (inter)comparison against corresponding CDRs (other methods, models, etc.) | A validation based on a representative set of reference data is reported. There is also a comparison with corresponding climate data records available. | "representative set of validation data", "comparison with corresponding climate data records"
5 | Score 4 + data provider participated in one international data assessment | A validation based on a representative set of reference data is reported. There is also a comparison with corresponding climate data records available, and participation in an international data assessment is documented. | "representative set of validation data", "comparison with corresponding climate data records", "participation in an international data assessment"
6 | Score 4 + data provider participated in multiple international data assessments and incorporates feedback into the product development cycle | A validation based on a representative set of reference data is reported. There is also a comparison with corresponding climate data records available, and participation in multiple international data assessments is documented. Feedback is incorporated in the development cycle. | "representative set of validation data", "comparison with corresponding climate data records", "participation in multiple international data assessments", "feedback incorporated in the development cycle"


Additional information:
The assessment can be made as follows:
Score 1: New product and no validation activity has been performed;
Score 2: Unlikely to be applicable to reanalysis products.
Score 3: Unlikely to be applicable to reanalysis products.
Score 4: Score 3 + comparisons are made with other satellite-derived temperature products using different retrieval techniques and/or with reanalysis data sets;
Score 5: The data provider participated in an international data quality assessment. For example, GEWEX conducted assessments for cloud properties and radiation fluxes in which a team produced multi-data-record comparison results that were reviewed by an independent panel;
Score 6: The data provider participated regularly in more than one data quality assessment, with the results leading to improvement of the data record.

2.3.3 Uncertainty Quantification

This minor category evaluates the extent to which uncertainties have been quantified.

Table 9: The 6 maturity scores in sub-category Uncertainty quantification.

Score | Description | Suggested formulation | Mandatory terms
1 | None | No uncertainty quantification was recognized for the dataset. | "No uncertainty quantification"
2 | Limited information on uncertainty arising from systematic and random effects in the measurement | A limited uncertainty quantification of systematic and random effects is available. | "limited uncertainty quantification"
3 | Comprehensive information on uncertainty arising from systematic and random effects in the measurement | A comprehensive uncertainty quantification of systematic and random effects is available. | "comprehensive uncertainty quantification"
4 | Score 3 + quantitative estimates of uncertainty provided within the product, characterizing more or less uncertain data points | A comprehensive uncertainty quantification of systematic and random effects is available. There is quality information/a quality flag available for the dataset. | "comprehensive uncertainty quantification", "quality information/a quality flag available"
5 | Score 4 + temporal and spatial error covariance quantified | A comprehensive uncertainty quantification of systematic and random effects is available, including spatio-temporal error covariance. There is quality information/a quality flag available for the dataset. | "comprehensive uncertainty quantification", "quality information/a quality flag available", "spatio-temporal error covariance"
6 | Score 5 + comprehensive validation of the quantitative uncertainty estimates and error covariance | A comprehensive uncertainty quantification of systematic and random effects, including its validation and spatio-temporal error covariance, is available. There is quality information/a quality flag available for the dataset. | "comprehensive uncertainty quantification with validation", "quality information/a quality flag available", "spatio-temporal error covariance"


Additional information:
The assessment can be made as follows:
Score 1: No validation and therefore no uncertainty quantification.
Score 2: Only limited information on uncertainty because of limited validation;
Score 3: Comprehensive information is available and also included in the dataset, so that the nature of the uncertainty is well understood, e.g. whether uncertainty varies depending upon geographic region, state, and instrument geometry. Uncertainties are estimated for each step of the production, e.g. the uncertainty contributions in a temperature profile data set from radiometric noise in the input satellite measurements, from radiative transfer modelling, and from retrieval errors. Uncertainty can also arise from sampling errors, e.g. due to the non-availability of data in the presence of clouds or precipitation, smoothing errors due to insufficient horizontal and vertical resolution of the instruments, or comparison of basic parameters for different ensemble members in the case of reanalysis.
Score 4: Score 3+ quantitative comprehensive information as described in Score 3 is available for each data point;
Score 5: Score 4+ the spatial and temporal error covariance are quantified;
Score 6: Score 5+ the uncertainty estimates are validated using superior quality data sets (e.g., data set assessment activities).
Note: A very detailed description of uncertainty in an SST data set is provided at http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html, while for ERA5 see the 'ERA5: uncertainty estimation' documentation. A minimal sketch of deriving quality flags and an empirical error covariance, as required for Scores 4-6, is given below.
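As an illustration of Scores 4-6, the sketch below derives per-datum quality flags and an empirical error covariance between stations from validation residuals (product minus reference). The residuals are synthetic placeholders; in practice they would come from comparisons against reference data such as those discussed above.

```python
# Illustrative quality flags and empirical error covariance from
# validation residuals. Residuals here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 0.3, size=(100, 5))  # (time, station)

# Per-datum quality flag: mark gross outliers (assumed 1.0 threshold).
quality_flag = np.abs(residuals) < 1.0

# Error covariance between stations, estimated over the time axis.
cov = np.cov(residuals, rowvar=False)
print("Per-station error variances:", np.round(np.diag(cov), 3))
print("Fraction of good-quality data:", quality_flag.mean())
```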

2.3.4 Automated Quality Monitoring

Automated quality monitoring is the monitoring of data quality while processing the data. It helps to detect major issues that may occur in a newly processed data record during the actual processing, and it may lead to a stop and restart of processing activities if errors are detected. In that sense it can save significant resources in very large processing endeavors and is a clear sign of a mature processing system. Automated quality monitoring can include steps such as defining a metric, deploying procedures, selecting the data used in comparisons, setting thresholds for deviations, and checking the data against them to identify anomalies in the data record. A minimal sketch of such a threshold check is given below.
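The sketch below shows one possible form of such a check, under assumed data and thresholds: a metric (the spatial mean of each time step) is compared against a reference value, and processing stops when the deviation exceeds a tolerance. The reference value and tolerance are placeholders for illustration.

```python
# Illustrative automated quality-monitoring step: metric + threshold
# check against a reference value, aborting processing on failure.
import numpy as np

rng = np.random.default_rng(1)
field = rng.normal(288.0, 2.0, size=(10, 180, 360))  # synthetic (t, lat, lon)
reference_mean = 288.0  # K, assumed climatological reference
threshold = 1.0         # K, assumed tolerance for the deviation

for t, step in enumerate(field):
    metric = step.mean()  # defined metric: spatial mean per time step
    if abs(metric - reference_mean) > threshold:
        raise RuntimeError(f"QC failure at step {t}: mean={metric:.2f} K")
print("All steps passed automated quality monitoring")
```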

Table 10: The 6 maturity scores in sub-category Automated Quality Monitoring.

Score | Description | Suggested formulation | Mandatory terms
1 | Not assigned | - | -
2 | None | There is no automated quality monitoring documented for the dataset. | "no automated quality monitoring documented"
3 | Methods for automated quality monitoring defined | There is a document defining the automated quality monitoring for the dataset. | "automated quality monitoring defined"
4 | Score 3 + automated monitoring partially implemented | There is a document defining the automated quality monitoring for the dataset and it is stated that routines are partially implemented. | "automated quality monitoring defined", "routines partially implemented"
5 | Score 3 + monitoring fully implemented (all production levels) | There is a document defining the automated quality monitoring for the dataset and it is stated that routines are fully implemented. | "automated quality monitoring defined", "routines fully implemented"
6 | Score 5 + automated monitoring in place with results fed back to other accessible information, e.g. metadata or documentation | There is a document defining the automated quality monitoring for the dataset and it is stated that routines are fully implemented. The results are reported in publicly available documents or in the metadata. | "automated quality monitoring defined", "routines fully implemented", "results are publicly available"

2.4 Public Access, Feedback and Update

This category contains four minor categories related to the archiving and accessibility of the data record, how feedback from user communities is gathered, and whether this feedback is used to update the data record.

2.4.1 Access and Archive

Access and archive evaluates the ease of distributing the data, documentation, and source code to users. It also checks the characteristics of the archive, so that longer-term preservation is guaranteed. According to Long Term Data Preservation guidelines (https://earth.esa.int/eogateway/activities/gscb-and-ltdp/ltdp-introduction-and-objectives), an archive should keep more than one copy, using different media/technologies and different locations. Public access means that the data are available without restrictions, but access may be subject to a fee. Data provider here means organizations such as space agencies, national meteorological centres or research institutes. Institutionalized data provision is considered more mature than provision by an individual investigator.

Table 11: The 6 maturity scores in sub-category Access and Archive.

Score | Description | Suggested formulation | Mandatory terms
1 | Data may be available through request to PI | The dataset is not publicly available without contacting the data provider, and only under certain circumstances. | "dataset is not publicly available", "specific request needed"
2 | Data available through PI | The dataset is not publicly available without contacting the data provider. | "dataset is not publicly available"
3 | Data and documentation archived and available to the public from PI | The dataset is not publicly available without contacting the data provider. The different versions of the data, including documentation, are archived by the data provider. | "dataset is not publicly available", "versions of data with documentation archived"
4 | Data and documentation archived and available to the public from data provider | The dataset is publicly available. The different versions of the data, including documentation, are archived by the data provider. | "dataset is publicly available", "versions of data with documentation archived"
5 | Score 4 + source code archived by data provider | The dataset is publicly available. The different versions of the data, including documentation and source code, are archived by the data provider. | "dataset is publicly available", "versions of data with documentation and source code archived"
6 | Score 5 + source code available to the public from data provider | The dataset and the respective source code are publicly available. The different versions of the data, including documentation and source code, are archived by the data provider. | "dataset and source code are publicly available", "versions of data with documentation and source code archived"


Additional information:
The assessment can be made as follows:
Score 1: The data record is not ready yet to be given to users and is not archived; it may be available to beta-users for testing. The PI is still conducting initial validation of the data product;
Score 2: The data record is now ready to be given to users, but is not archived yet. Documentation is in draft form. Users can get the data by requesting it from the PI;
Score 3: The data record and documentation are readily available from the PI, e.g., on web pages;
Score 4: The data record and documentation are transferred from the PI to an institutionally maintained archive from which the data are accessible to users;
Score 5: The source code is also archived by the data provider, but is not publicly available;
Score 6: The ultimate maturity is reached when the data record, the documentation and the source code used to produce the data record are archived, maintained and available to the public. See for example [RD.6] for the need to make codes public and peer-reviewed. It is not necessary to provide source code in a ready-to-use form: making the code available in a PDF document to ensure transparency, or providing the source in original form or in PDF form on demand, should achieve Score 6.


2.4.2 Version Control

Version control is a measure taken to trace back the different versions of the algorithms, software, formats, input and ancillary data, and documentation used to generate the data record under consideration. It allows clear statements about when and why changes were introduced. In this category we assess the versioning of documentation and data/metadata, as the source code storage system is usually not available to the assessor.

Table 12: The 6 maturity scores in sub-category Version control.

Score | Description | Suggested formulation | Mandatory terms
1 | Not assigned | - | -
2 | None | There is no information on version control available for the dataset. | "no information on version control"
3 | Preliminary versioning of documentation, data and/or metadata | There is preliminary information on version control of documentation, data and/or metadata available for the dataset. | "preliminary information on version control in some aspects"
4 | Versioning of documentation, data and/or metadata | There is full information on version control of documentation, data and/or metadata available for the dataset. | "full information on version control in some aspects"
5 | Version control of documentation, data and/or metadata institutionalized | There is full information on version control of documentation, data and/or metadata available for the dataset. The documented version control information is fully traceable from the files. | "full information on version control in some aspects", "version control information is fully traceable"
6 | Fully established version control of documentation, data and metadata considering all aspects | There is full information on version control of all documentation, data and metadata available for the dataset. The documented version control information is fully traceable from the files. | "full information on version control", "version control information is fully traceable"


2.4.3 User Feedback

User feedback is important for developers and providers of data records in improving the quality, accessibility, etc. of a data record. This category evaluates whether mechanisms are established to receive, analyze and use user feedback. Feedback can reach a data provider in many ways, but it needs to be organized if it is to be used systematically to improve a data record and/or the service around it. In the scientific environment, for example, data records are presented and discussed at workshops and conferences; a scientist may take messages back to the lab and start to devise and implement improvements if resources are available. A higher maturity for gathering feedback is reached when a data record has been institutionalized and the responsible institute has established regular feedback processes, which may start with a help desk and extend to periodic workshops where feedback is gathered. Note: this category requires the assessor to search for feedback mechanisms outside the CDS as well!

Table 13: The 6 maturity scores in sub-category User Feedback.

Score | Description | Suggested formulation | Mandatory terms
1 | None | There is no information on the handling of feedback available for the dataset. | "no information on handling of feedback"
2 | PI collects and evaluates feedback from the scientific community | There is personalized reach-out by the data provider for collecting feedback for the dataset. | "personalized reach-out by the data provider for feedback"
3 | PI and data provider collect and evaluate feedback from the scientific community | There is a public reach-out/feedback form/contact point for collecting feedback for the dataset. | "public reach-out/form/contact point for feedback"
4 | Data provider establishes feedback mechanisms such as regular workshops, advisory groups, user help desk, etc. and utilizes feedback jointly with PI | There is a public reach-out/feedback form/contact point for collecting feedback for the dataset. There are regular events, groups, 2-way feedback mechanisms, etc. organized by the data provider. | "public reach-out/form/contact point for feedback", "regular events, groups, 2-way feedback mechanisms, etc. organized"
5 | Established feedback mechanism and international data quality assessment results are considered in periodic data record updates | There is a public reach-out/feedback form/contact point for collecting feedback for the dataset. There are regular events, groups, 2-way feedback mechanisms, etc. organized by the data provider. The feedback fed back into data production is documented, including third-party international data quality assessment results. | "public reach-out/form/contact point for feedback", "regular events, groups, 2-way feedback mechanisms, etc. organized", "feedback fed back into data production", "third party international data quality assessment results"
6 | Score 5 + established feedback mechanism and international data quality assessment results are considered in continuous data provision (Interim Climate Data Records) | There is a public reach-out/feedback form/contact point for collecting feedback for the dataset. There are regular events, groups, 2-way feedback mechanisms, etc. organized by the data provider. The feedback fed back into data production is documented, including third-party international data quality assessment results. There is immediate reaction to feedback with production of interim data products. | "public reach-out/form/contact point for feedback", "regular events, groups, 2-way feedback mechanisms, etc. organized", "feedback fed back into data production", "third party international data quality assessment results", "production of interim data"


Additional information:
The assessment can be made as follows:
Score 1: The data record is not used by users yet, hence no feedback;
Score 2: Users are directly contacting the PI to provide feedback, or vice versa. This can only be known by asking the PI directly or by looking for conference contributions about the data record;
Score 3: An institutionalized data provider is supporting the Principal Investigator in collecting user feedback, e.g., the data record was produced as part of a larger program, and the agency organizing the program is also presenting the data record and multiplying the feedback;
Score 4: The data provider has established feedback mechanisms. One can look for help desk support, announcement of annual workshops on a set of data records from one institution, etc.
Score 5: This will be reflected in user manual and other documentation on web pages, etc.;
Score 6: A sign of this is to check whether interim data records are provided (operational continuation of a climate data record employing the same procedures) and if feedback is also considered for this.

2.4.4 Updates to Record

Updates to record evaluates whether data records are systematically updated or whether this is done in a more ad hoc fashion. The latter indicates that updates depend on irregular funding and are not carried out by a larger institution providing them as part of a wider service.

Table 14: The 6 maturity scores in sub-category Updates to Record.

Score | Description | Suggested formulation | Mandatory terms
1 | Not assigned | - | -
2 | None | There are no updates available for the dataset. | "no updates"
3 | Irregularly by PI following scientific exchange and progress | There is no information on the updating schedule available for the dataset. There is at least one update to the data. | "no information on updating schedule", "update to the data"
4 | Regularly by PI utilizing input from established feedback mechanism | There are regular updates available for the dataset, including improved methodology. | "regular updates", "including improved methodology"
5 | Regularly operationally by data provider as dictated by availability of new input data or new methodology following user feedback | There are regular operational updates available for the dataset, depending on the availability of input data and including improved methodology. | "regular operational updates", "availability of input data", "including improved methodology"
6 | Score 5 + capability for fast improvements in continuous data provisions established (Interim Climate Data Records) | There are regular operational updates available for the dataset, depending on the availability of input data and including improved methodology. There is an immediate production of interim data products. | "regular operational updates", "availability of input data", "including improved methodology", "immediate production of interim data products"


2.5 Usage

This category contains two minor categories related to the usage of products in research applications and in decision support systems. By usage in decision support systems we mean use in applications that directly support decisions, e.g., an NDVI product might be used as a background map for clarifying insurance claims by cattle drovers in Africa, or a solar irradiance map might be used directly for infrastructure planning. In addition, all citations in reports that support decisions and policy making on mitigation and adaptation, such as the Intergovernmental Panel on Climate Change (IPCC) reports, are creditable to the decision support sub-category.
The two minor categories allow for a separate assessment of the usage of data records, i.e., the assessment result can state a high maturity for usage in research and a lower or no maturity for decision support systems. For the overall score it is important to know for which application the data record was created. This information shall come from Section 1 of the CORE-CLIMAX Data Record Description Form (see Appendix A of the CORE-CLIMAX manual). If the description points to use in research only, that category alone shall be used to display the overall maturity for this category, as sketched below.
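
The selection rule can be written down as a minimal sketch, assuming a hypothetical `intended_use` field derived from Section 1 of the Data Record Description Form:

```python
def displayed_usage_maturity(research_score: int, dss_score: int,
                             intended_use: str) -> int:
    """Select which Usage sub-category score is displayed overall.

    `intended_use` is a hypothetical field derived from Section 1 of the
    CORE-CLIMAX Data Record Description Form, e.g. "research" or "both".
    """
    if intended_use == "research":
        # The record was created for research only: the Research
        # sub-category alone displays the overall maturity.
        return research_score
    # How the two sub-scores combine when both applications are intended
    # is not prescribed here; taking the maximum is only a placeholder.
    return max(research_score, dss_score)
```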

2.5.1 Research

Research applications of a data product can be evaluated by its appearance in publications and by citations of those publications.

Table 15: The 6 maturity scores in sub-category Research.

Score | Description | Suggested formulation | Mandatory terms
1 | None | There is no research available concerning the dataset. | "no research available"
2 | Benefits for research applications identified. Note: there are usually some mentions of potential usage by the data provider! | There is no published research available concerning the dataset; however, the potential benefits of its usage are highlighted. | "no published research available", "potential benefits"
3 | Benefits for research applications demonstrated by publication | There [is at least one/are] peer reviewed research paper[s] available concerning the usability of the dataset. | "peer reviewed research paper available on usability"
4 | Score 3 + citations on product usage occurring | There [is at least one/are] peer reviewed research paper[s] available concerning the usage of the dataset. | "peer reviewed research paper available on usage"
5 | Score 4 + product becomes reference for certain applications | There [is at least one/are] peer reviewed research paper[s] available concerning the usage of the dataset as a reference in specific applications. | "peer reviewed research paper available on usage as reference in specific applications"
6 | Score 5 + product and its applications become references in multiple research fields | There [is at least one/are] peer reviewed research paper[s] available concerning the usage of the dataset as a reference in the research field[s] on [...]. | "peer reviewed research paper available on usage as reference in research field"

Note: Please provide references with the defensible traces texts.
Additional information:
The assessment can be made as follows:
Score 1: The product is not used yet;
Score 2: An available research plan or similar document outlines usage in research applications;
Score 3: A peer reviewed publication exists that describes the usage of the product in a research application;
Score 4: The peer reviewed publication under Score 3 is cited by peer reviewed publications of other applications;
Score 5: The product is used as a reference in almost all peer reviewed publications for a specific application;
Score 6: The product is used as a reference in almost all peer reviewed publications for applications in different research fields, e.g., climate modelling and climate system analysis.
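
Following the same pattern as the User Feedback sketch above, this rubric too can be expressed as a top-down check over publication evidence. The parameter names below are hypothetical and only illustrate the ordering of the criteria.

```python
def research_usage_score(has_research_plan: bool, n_usability_papers: int,
                         n_usage_citations: int, is_reference: bool,
                         multi_field_reference: bool) -> int:
    """Illustrative mapping of publication evidence to a Research score."""
    if multi_field_reference:   # Score 6: reference across research fields
        return 6
    if is_reference:            # Score 5: reference for a specific application
        return 5
    if n_usage_citations > 0:   # Score 4: the usage paper is being cited
        return 4
    if n_usability_papers > 0:  # Score 3: peer reviewed paper on usability exists
        return 3
    if has_research_plan:       # Score 2: a plan outlines research usage
        return 2
    return 1                    # Score 1: product not used yet
```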

2.5.2 Decision Support System

As described above, under usage in a Decision Support System (DSS) any direct use in infrastructure planning or other business areas such as insurance, as well as indirect support to decision and policy making in a political context (e.g., through citations in IPCC reports or in the Europe 2020 growth strategy), is creditable to this minor category.

Table 16: The 6 maturity scores in sub-category Decision Support System.

Score | Description | Suggested formulation | Mandatory terms
1 | None | There is no described decision support system using the dataset. | "no described decision support system using dataset"
2 | Potential benefits identified. Note: there are potentially some mentions by the data provider! | There is no described decision support system using the dataset. The potential usage in such a system is highlighted. | "no described decision support system using dataset", "potential usage"
3 | Use occurring and benefits emerging | The dataset is used in decision support system[s] and benefits are emerging. | "used in decision support system[s] and benefits are emerging"
4 | Score 3 + societal and economic benefits discussed | The dataset is used in decision support system[s] and benefits are emerging. Furthermore, the societal and/or economic benefits are discussed. | "used in decision support system[s] and benefits are emerging", "societal and/or economic benefits discussed"
5 | Score 4 + societal and economic benefits demonstrated | The dataset is used in decision support system[s] and benefits are emerging. Furthermore, the societal and/or economic benefits are demonstrated. | "used in decision support system[s] and benefits are emerging", "societal and/or economic benefits demonstrated"
6 | Score 5 + influence on decision (including policy) making demonstrated | The dataset is used in decision support system[s] and benefits are emerging. Furthermore, the societal and/or economic benefits are demonstrated and the dataset is influencing further societal and/or economic developments. | "used in decision support system[s] and benefits are emerging", "societal and/or economic benefits demonstrated", "influencing societal and/or economic developments"

Note: Please provide references with the defensible traces texts.
Additional information:
The assessment can be made, for example, as below in the case of climate change mitigation and adaptation:
Score 1: The product is not used yet for this application;
Score 2: An available report suggests that the product can be used for certain decision making applications;
Score 3: The product has been used in decision making applications, for example in studies for impact assessments, and a report is available (please provide evidence of this). This should be available from the data provider, with some evidence on the user side;
Score 4: The results of the studies under Score 3 are used for mitigation or adaptation planning. For example, a state or national government report on the planning is available which cites the study using the dataset;
Score 5: The results of the studies under Score 3 are used in mitigation or adaptation and have resulted in societal and economic benefits;
Score 6: The product is used in national and international climate policy making, for example, the Kyoto Protocol.
Note: One can also point to the use of a data record in other applications which have economic benefits, such as use by an insurance company for decision making or use in a climate service, e.g., the major application areas mentioned in the WMO Global Framework for Climate Services (agriculture and food security, disaster risk reduction, health and water).
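
Both usage sub-categories ask assessors to supply references alongside the defensible-traces texts. A small record structure, sketched below with invented field names, can keep the score, the trace text, and its supporting evidence together:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DefensibleTrace:
    """Hypothetical container pairing a sub-category score with its
    defensible-traces text and the references that support it."""
    category: str    # e.g. "Usage / Decision Support System"
    score: int       # maturity score, 1-6
    trace_text: str  # filled-in suggested formulation
    references: List[str] = field(default_factory=list)

dss_trace = DefensibleTrace(
    category="Usage / Decision Support System",
    score=4,
    trace_text=("The dataset is used in decision support systems and "
                "benefits are emerging. Furthermore, the societal and/or "
                "economic benefits are discussed."),
    references=["<citation of the report discussing the benefits>"],
)
```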

References

European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) (2014). CORE-CLIMAX System Maturity Matrix Instruction Manual. CC/EUM/MAN/13/002 (v4). Darmstadt, 34pp.

National Research Council (2007). Environmental Data Management at NOAA: Archiving, Stewardship, and Access. The National Academies Press, Washington, D.C., 130pp. DOI: https://doi.org/10.17226/12017 [Available online at: https://www.nap.edu/catalog/12017.html]

Peng G, Milan A, Ritchey NA, Partee RP and others (2019). Practice Paper: Practical Application of a Data Stewardship Maturity Matrix for the NOAA OneStop Project. Data Science Journal, 18: 41, pp. 1-18. DOI: https://doi.org/10.5334/dsj-2019-041

Schulz J, John V, Kaiser-Weiss A, Merchant C, Tan D, Swinnen E and Roebeling R (2017). European climate data record capacity assessment. Geoscience Data Journal, in preparation.

Su Z, Timmermans WJ, Zeng Y, Schulz J, John VO and others (2018). An overview of European efforts in generating climate data records. Bulletin of the American Meteorological Society, 99(2), 349-359. DOI: https://doi.org/10.1175/BAMS-D-16-0074.1

WMO SMM-CD Working Group (2019). Guidance Booklet: WMO Stewardship Maturity Matrix for Climate Data (SMM-CD). URL: https://figshare.com/articles/The_manual_for_the_WMO-Wide_Stewardship_Maturity_Matrix_for_Climate_Data/7002482 (last accessed 20th February 2020).


Addendum

Specific notes and further information on these various categories can be found in the CORE-CLIMAX SMM Instruction Manual. Guidance on locating that additional information (page details) for the various categories and sub-categories in the CORE-CLIMAX manual is supplied as a tabulated summary (Addendum Table A1).

Table A1.  Summary guide: Accessing the CORE-CLIMAX SMM support guidance notes.  (Numbers according to CORE-CLIMAX sections. Sections that are not assessed are shown in red.)

IM section and notes | Sub-section and topic | Pages
4.1 Software Readiness | 4.1.1 Coding Standards | 10
 | 4.1.2 Software Documentation | 11
 | 4.1.3 Portability and Numerical Reproducibility | 12
 | 4.1.4 Security | 13
4.2 Metadata | 4.2.1 Standards | 14
 | 4.2.2 Collection Level | 15
 | 4.2.3 File Level | 16
4.3 User documentation | 4.3.1 Formal Description of Scientific Methodology | 17
 | 4.3.2 Formal Validation Report | 18
 | 4.3.3 Formal Product User Guide (PUG) | 19
 | 4.3.4 Formal Description of Operations Concept | 20
4.4 Uncertainty Characterization | 4.4.1 Standards | 21
 | 4.4.2 Validation | 23
 | 4.4.3 Uncertainty Quantification | 24
 | 4.4.4 Automated Quality Monitoring | 25
4.5 Public Access, Feedback and Update | 4.5.1 Access and Archive | 26
 | 4.5.2 Version Control | 27
 | 4.5.3 User Feedback | 28
 | 4.5.4 Updates to Record | 29
4.6 Usage | 4.6.1 Research | 30-31
 | 4.6.2 Decision Support System | 31


We thank the CORE-CLIMAX consortium for approving the use and adaptation of their SMM framework.

Appendix


Figure B1: Maturity Matrix example from the ESMValTool output (for potential filling of colors by hand).
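
Instead of filling in the colours by hand, the same matrix can be coloured programmatically. The following matplotlib sketch is not the ESMValTool implementation; the category scores shown are placeholders for illustration.

```python
# Minimal sketch: colour the six major SMM categories by score (1 = red,
# 6 = green), echoing the hand-coloured template in Figure B1.
import matplotlib.pyplot as plt
import numpy as np

categories = ["Software Readiness", "Metadata", "User Documentation",
              "Uncertainty Characterization",
              "Public Access, Feedback and Update", "Usage"]
scores = [4, 5, 3, 4, 5, 2]  # hypothetical assessment results

fig, ax = plt.subplots(figsize=(8, 2))
ax.imshow(np.array([scores]), cmap="RdYlGn", vmin=1, vmax=6, aspect="auto")
ax.set_xticks(range(len(categories)))
ax.set_xticklabels(categories, rotation=30, ha="right", fontsize=7)
ax.set_yticks([])
for i, score in enumerate(scores):
    ax.text(i, 0, str(score), ha="center", va="center")
plt.tight_layout()
plt.savefig("maturity_matrix.png", dpi=150)
```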


This document has been produced in the context of the Copernicus Climate Change Service (C3S).

The activities leading to these results have been contracted by the European Centre for Medium-Range Weather Forecasts, operator of C3S on behalf of the European Union (Delegation Agreement signed on 11/11/2014 and Contribution Agreement signed on 22/07/2021). All information in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular purpose.

The users thereof use the information at their sole risk and liability. For the avoidance of all doubt, the European Commission and the European Centre for Medium-Range Weather Forecasts have no liability in respect of this document, which merely represents the author's view.