
Info

At ECMWF the supported file format for distribution of most meteorological data is GRIB. Some ECMWF data are also available in NetCDF format, but NetCDF is not formally supported by ECMWF.

For this reason, before downloading data, please check whether your processing software supports the GRIB format. If it does, use the GRIB format. If your software supports only NetCDF, use the NetCDF format.

Basics

NetCDF (Network Common Data Form) is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. NetCDF is commonly used to store and distribute scientific data. The NetCDF software was developed at the Unidata Program Center in Boulder, Colorado, USA (Unidata NetCDF Factsheet; Also see Wikipedia article). NetCDF files usually have the extension .nc.

An agreement about how to name and describe the data in a NetCDF file is a NetCDF Convention. The most widely used is the CF (Climate and Forecast Metadata) Convention (see CF full definition).

NetCDF files which are CF compliant can be interpreted by the widest range of software tools to read, process and visualise the data (e.g. Metview, NCView, Xconv).

To read NetCDF files there are tools with a graphical interface like Matlab, IDL, ArcGIS, NCView, Xconv and developer (programming) tools like the Unidata NetCDF4 module for Python and Xarray. Please see your preferred tools' documentation for further information regarding NetCDF support.

For climate and forecast data stored in NetCDF format there are several (non-mandatory) metadata conventions which can be used (such as the CF Convention). CF compliant metadata in NetCDF files can be accessed by several tools, including Metview, NCView and Xconv.

The latest version of the NetCDF format is NetCDF 4 (aka NetCDF enhanced, introduced in 2008), but NetCDF 3 (NetCDF classic) is also still used.

NetCDF files can be converted to ASCII or text; see the following link for more details: How to convert NetCDF to CSV. Please be aware that this can lead to large amounts of output.

Writing your own NetCDF decoder or encoder

Whenever possible, it is advised that users make use of an existing software tool to read and write NetCDF files.

To decode NetCDF files, there is an official NetCDF Application Programming Interface (API) with interfaces in Fortran, C, C++, and Java available from Unidata. The API also comes with some useful command-line tools (e.g. ncdump -h file.nc, which gives a nice summary of file contents - see the ncdump guide).

There are also ways to convert a NetCDF file to ASCII or text (e.g. netcdf4excel).

For writing NetCDF files, please check through Unidata 6 Best Practices (6.8 Packed Data Values and 6.9 Missing Data Values are of particular interest).


Scale_factor and Add_offset


In order to reduce the storage space needed for NetCDF files, the data for a given variable can be 'packed', with the variable attributes "scale_factor" and "add_offset" being used to describe this packing.

Often for ECMWF NetCDF files, the data values have been packed into short integers (16 bits or NC_SHORT) to save space. Each NetCDF variable that has been packed in this way has 'scale_factor' and 'add_offset' attributes associated with it.
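The packing parameters are typically derived from the range of the data: the range is spread across the available 16-bit values. The sketch below shows one common way of doing this; the sample values are invented, and one short value is reserved for missing data, as is common practice.

```python
# Sketch: how scale_factor/add_offset can be derived when packing floats
# into 16-bit shorts (sample values are made up).
import numpy as np

data = np.array([250.0, 273.15, 310.0])   # e.g. temperatures in K

n_bits = 16
n_values = 2**n_bits - 2                  # reserve one value for missing data
scale_factor = (data.max() - data.min()) / n_values
add_offset = data.min() + (data.max() - data.min()) / 2

# Pack (round to nearest short) and unpack again.
packed = np.round((data - add_offset) / scale_factor).astype(np.int16)
unpacked = packed * scale_factor + add_offset
```

The round trip loses at most half a scale_factor per value, which is the precision cost of this kind of packing.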

We strongly advise that if you have any doubts about using packed NetCDF files, you unpack them first, as described below.

An existing software package such as 'ncpdq' from the NCO toolkit can be used to create an unpacked version of the file:

http://nco.sourceforge.net/nco.html

in particular:

http://nco.sourceforge.net/nco.html#ncpdq

Code Block
titleUnpack all variables in file in.nc and store the results in out.nc:
ncpdq -U in.nc out.nc


When reading and writing NetCDF files, software applications compliant with Unidata specifications should deal with "scale_factor" and "add_offset" automatically, making unpacking (read) and packing (write) completely transparent to the user. This means that the user always sees the unpacked data values and doesn't have to deal with "scale_factor" and "add_offset" directly.

However, please be aware that these attributes can be a source of some confusion, as the software application might display the values of "scale_factor" and "add_offset" for reference, similar to a ZIP compression tool displaying the compression factor.


For example:

  • Matlab (ncread, ncwrite) applies "scale_factor" and "add_offset" automatically.
  • R (ncvar_get) applies "scale_factor" and "add_offset" automatically.
  • Panoply applies "scale_factor" and "add_offset" automatically. It also displays the values of "scale_factor" and "add_offset", causing many users to believe they have to calculate something - no, you don't.
  • Metview from version 5 onwards applies "scale_factor" and "add_offset" automatically; Metview 4.x does not.
  • The Unidata NetCDF4 module for Python (an interface to the NetCDF C library) applies "scale_factor" and "add_offset" automatically.

The above is how application software should behave, i.e. it should show the unpacked data values.

Some software applications might be implemented differently and display the packed data values. In this case the user has to calculate the unpacked values from "scale_factor" and "add_offset", using these formulae:

  • unpacked_data_value = (packed_data_value * scale_factor) + add_offset
  • packed_data_value = nint((unpacked_data_value - add_offset) / scale_factor)
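The two formulae above can be checked with a few lines of Python; the scale, offset and sample value below are invented, and chosen so the round trip is exact.

```python
# The packing formulae from the text, applied to an invented example.
scale_factor = 0.25
add_offset = 272.0
unpacked_value = 280.5

# packed_data_value = nint((unpacked_data_value - add_offset) / scale_factor)
packed_value = round((unpacked_value - add_offset) / scale_factor)

# unpacked_data_value = (packed_data_value * scale_factor) + add_offset
recovered_value = packed_value * scale_factor + add_offset
```

In general the recovered value differs from the original by at most half a scale_factor because of the rounding to the nearest integer.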

In any case we strongly recommend that you check your processing software's documentation on how it deals with "scale_factor" and "add_offset", and if at all possible, unpack the data as described above before using the files.

Info

This document has been produced in the context of the Copernicus Atmosphere Monitoring Service (CAMS) and Copernicus Climate Change Service (C3S).

The activities leading to these results have been contracted by the European Centre for Medium-Range Weather Forecasts, operator of CAMS and C3S on behalf of the European Union (Delegation Agreement signed on 11/11/2014 and Contribution Agreement signed on 22/07/2021). All information in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular purpose.

The users thereof use the information at their sole risk and liability. For the avoidance of all doubt, the European Commission and the European Centre for Medium-Range Weather Forecasts have no liability in respect of this document, which merely represents the author's view.
