At ECMWF the main supported file format for most meteorological data is GRIB. Some ECMWF data are also available in NetCDF, but NetCDF is not formally supported by ECMWF.

For this reason, before downloading data, please check whether your processing software supports the GRIB format. If it does, use GRIB; if your software supports only NetCDF, use NetCDF.

Basics

NetCDF (Network Common Data Form) is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. NetCDF is commonly used to store and distribute scientific data. The NetCDF software was developed at the Unidata Program Center in Boulder, Colorado, USA (see the Unidata NetCDF Factsheet and the Wikipedia article). NetCDF files usually have the extension .nc.

To read NetCDF files there are tools with a graphical interface, such as Matlab, IDL, ArcGIS, NCView and Xconv, and developer (programming) tools such as the Unidata netCDF4 module for Python and Xarray. Please see your preferred tool's documentation for further information regarding NetCDF support.

For climate and forecast data stored in NetCDF format there are several (non-mandatory) metadata conventions which can be used, such as the CF Conventions. CF-compliant metadata in NetCDF files can be accessed by several tools, including Metview, NCView and Xconv.

The latest version of the NetCDF format is NetCDF 4 (also known as 'NetCDF enhanced', introduced in 2008), but NetCDF 3 (NetCDF classic) is also still in use.

NetCDF files can also be converted to ASCII or text (see How to convert NetCDF to CSV for more details), although please be aware that this can produce large amounts of output.

Writing your own NetCDF decoder or encoder

Whenever possible, we advise users to use an existing software tool to read and write NetCDF files.

To decode NetCDF files there is an official NetCDF Application Programming Interface (API), with interfaces in Fortran, C, C++ and Java, available from Unidata. The API also comes with some useful command-line tools (e.g. ncdump -h file.nc, which gives a concise summary of a file's contents - see the ncdump guide).

For writing NetCDF files, please check the Unidata Best Practices (sections 6.8 Packed Data Values and 6.9 Missing Data Values are of particular interest).
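
The packing recipe from section 6.8 of the Best Practices can be sketched in a few lines of Python. This is an illustrative sketch only (the function and variable names are our own, not part of any Unidata or ECMWF tool): for n-bit signed integers, scale_factor is chosen as (dmax - dmin) / (2^n - 2) and add_offset as (dmax + dmin) / 2, which leaves the most negative integer free for use as a fill value.

```python
# Sketch of Unidata's packing recipe (Best Practices, section 6.8).
# All names here are illustrative, not part of any existing tool.

def packing_params(dmin, dmax, nbits=16):
    """scale_factor/add_offset for packing values in [dmin, dmax]
    into nbits-bit signed integers; the most negative integer
    (e.g. -32768 for 16 bits) is reserved for the fill value."""
    scale_factor = (dmax - dmin) / (2**nbits - 2)
    add_offset = (dmax + dmin) / 2.0
    return scale_factor, add_offset

# Example: temperatures in [183.15 K, 330.0 K] packed into NC_SHORT.
sf, ao = packing_params(183.15, 330.0)
lo = round((183.15 - ao) / sf)   # most negative packed value
hi = round((330.0 - ao) / sf)    # most positive packed value
# The packed range fits in a short, leaving -32768 for _FillValue.
assert lo == -32767 and hi == 32767
```

Note that packing is lossy: the unpacked values are only recoverable to within about scale_factor / 2, so the chosen data range directly controls the precision of the stored values.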

"scale_factor" and "add_offset"

In order to reduce the storage space needed for NetCDF files, the data for a given variable can be 'packed', with the variable attributes "scale_factor" and "add_offset" being used to describe this packing.

Often, for ECMWF NetCDF files, the data values have been packed into short integers (16 bits, or NC_SHORT) to save space. Each NetCDF variable that has been packed in this way has 'scale_factor' and 'add_offset' attributes associated with it.

We strongly advise that, if you have any doubts about using packed NetCDF files, you unpack them first, as described below.

An existing software package such as 'ncpdq' from the NCO toolkit can be used to create an unpacked version of the file:

http://nco.sourceforge.net/nco.html

in particular:

http://nco.sourceforge.net/nco.html#ncpdq

Unpack all variables in file in.nc and store the results in out.nc:
ncpdq -U in.nc out.nc


Please note that software applications compliant with the Unidata specifications should deal with "scale_factor" and "add_offset" automatically when reading and writing NetCDF files, making unpacking (read) and packing (write) completely transparent to the user. This means that the user always sees the unpacked data values and does not have to deal with "scale_factor" and "add_offset" directly.

However, please be aware that these attributes can be a source of confusion, as a software application might display the values of "scale_factor" and "add_offset" for reference, much as ZIP compression software displays its compression factor.


For example:

  • Matlab (ncread, ncwrite) applies "scale_factor" and "add_offset" automatically
  • R (ncvar_get) applies "scale_factor" and "add_offset" automatically
  • Panoply applies "scale_factor" and "add_offset" automatically. It also displays the values of "scale_factor" and "add_offset", which leads many users to believe they have to calculate something themselves - they do not.
  • Metview from version 5 onwards applies "scale_factor" and "add_offset" automatically; Metview 4.x does not.
  • The Unidata netCDF4 module for Python (which is an interface to the NetCDF C library) applies "scale_factor" and "add_offset" automatically.

The above is how application software should be implemented, i.e. it should show unpacked data values.

Some software applications might be implemented differently and display the packed data values. In this case the user has to calculate the unpacked values from scale_factor and add_offset, using these formulae (where nint denotes rounding to the nearest integer):

  • unpacked_data_value = (packed_data_value * scale_factor) + add_offset
  • packed_data_value = nint((unpacked_data_value - add_offset) / scale_factor)
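
The two formulae above can be applied directly in a few lines of Python. This is a minimal sketch; the scale_factor and add_offset values below are invented for illustration, not taken from any real ECMWF file.

```python
# Invented packing parameters, for illustration only.
scale_factor = 0.0018
add_offset = 250.0

def unpack(packed_data_value):
    """unpacked = (packed * scale_factor) + add_offset"""
    return packed_data_value * scale_factor + add_offset

def pack(unpacked_data_value):
    """packed = nint((unpacked - add_offset) / scale_factor)"""
    return round((unpacked_data_value - add_offset) / scale_factor)

packed = pack(273.15)        # 12861
restored = unpack(packed)    # close to, but not exactly, 273.15
# Packing is lossy: the round-trip error is at most scale_factor / 2.
assert abs(restored - 273.15) <= scale_factor / 2
```

This also shows why unpacked values read back from a packed file can differ slightly from the originally stored values: the precision is limited to about half of scale_factor.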

In any case we strongly recommend you check your processing software's documentation on how it deals with "scale_factor" and "add_offset", and if at all possible, unpack the data as described above before using the files.

This document has been produced in the context of the Copernicus Atmosphere Monitoring Service (CAMS) and the Copernicus Climate Change Service (C3S).

The activities leading to these results have been contracted by the European Centre for Medium-Range Weather Forecasts, operator of CAMS and C3S on behalf of the European Union (Delegation Agreement signed on 11/11/2014). All information in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular purpose.

The users thereof use the information at their sole risk and liability. For the avoidance of all doubt, the European Commission and the European Centre for Medium-Range Weather Forecasts have no liability in respect of this document, which is merely representing the author's view.