
Question:

Why does the CAMS global AOD evaluation against AERONET observations differ slightly between the operational validation charts and the quarterly evaluation and quarterly control report?


Answer

Currently, two evaluations of CAMS AOD against AERONET are performed and made available to users: an operational validation by ECMWF (https://atmosphere.copernicus.eu/charts/) and a three-monthly delayed NRT evaluation within CAMS84 by MetNo, presented on the Aerocom web interface.

The comparison of AERONET AOD with the CAMS o-suite is done with different methodologies at MetNo and ECMWF; both are valid, but they lead to slightly different results. AERONET version 3 should also be implemented at ECMWF for the public validation server. As an illustration, statistics visualisations for August 2018 from the CAMS verification websites are shown below. Other months show similar results and do not change the basic conclusions.


The visualisation differs: ECMWF shows a time series of the global average bias per day, while MetNo shows monthly aggregated statistics, accompanied by scatter plots of all daily data.

For August 2018, the graphs show a negligible o-suite bias in MetNo's workup (NMB -0.1%) and, on average, a slightly negative bias at ECMWF (not quantified as an average on the graph).
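The normalized mean bias (NMB) quoted above is, in the usual Aerocom convention, the sum of model-minus-observation differences divided by the sum of the observations, in percent. A minimal sketch with invented AOD values (not the actual August 2018 data):

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB in percent: 100 * sum(model - obs) / sum(obs)."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 100.0 * (model - obs).sum() / obs.sum()

# Illustrative daily-mean AOD pairs (model vs. observation)
model_aod = [0.12, 0.30, 0.08, 0.21]
obs_aod = [0.13, 0.29, 0.08, 0.22]
print(f"NMB = {normalized_mean_bias(model_aod, obs_aod):.1f}%")
```

Because the differences are summed before normalising, small positive and negative daily errors can cancel, which is why a near-zero NMB is compatible with visible day-to-day scatter.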

MetNo uses AERONET version 3, level 1.5 data, while ECMWF uses version 2, level 1.5; both download the data regularly from NASA's AERONET website, ECMWF daily and MetNo roughly every three months. The latter schedule may slightly increase the data volume, because of delays in the daily submissions by AERONET station operators. For August 2018, MetNo used 379 stations, while ECMWF used 365 sites.

Mountain sites, like Mauna Loa, normally create a positive bias for the model, since the observations miss high aerosol loads in the boundary layer. MetNo excludes such sites, so the number of sites it actually uses is only 321. A comparison by MetNo indicates that including mountain sites would give the model a positive bias of 3%. ECMWF includes all mountain sites, but still has a more negative across-network bias.
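The effect of excluding high-elevation sites can be sketched as a simple elevation filter before computing a network-mean bias. The 1000 m cutoff and all site values below are invented for illustration; the source does not state MetNo's actual selection criterion:

```python
# Each record: (site name, elevation in m, model AOD, observed AOD)
sites = [
    ("lowland_a", 50, 0.20, 0.21),
    ("lowland_b", 120, 0.15, 0.16),
    ("mauna_loa", 3397, 0.03, 0.02),  # obs misses boundary-layer aerosol
]

def network_bias(records, max_elevation_m=None):
    """Mean model-minus-obs bias, optionally excluding high sites."""
    kept = [r for r in records
            if max_elevation_m is None or r[1] <= max_elevation_m]
    return sum(m - o for _, _, m, o in kept) / len(kept)

print("all sites:         ", network_bias(sites))
print("mountains excluded:", network_bias(sites, max_elevation_m=1000))
```

With the mountain site included, the model-minus-obs difference at that site is positive, pulling the network bias upward, which mirrors the 3% effect MetNo reports.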

ECMWF applies Voronoi weighting to give more weight to sites in areas where few measurements are made. By giving more weight to remote sites, the network average goes down. Our earlier evaluation in 2014 showed that global averages across the AERONET network can differ by 8% depending on the weighting. It is not clear whether the weighting influences the bias and could thus explain the (small) differences between the ECMWF and MetNo workups.
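The qualitative effect of such weighting can be illustrated with a crude density-based proxy: weight each site by the inverse count of nearby stations, so isolated sites count more. This is not ECMWF's actual Voronoi implementation, and all coordinates and AOD values are invented:

```python
import numpy as np

def density_weights(lats, lons, radius_deg=10.0):
    """Crude proxy for Voronoi-cell-area weighting: each site is weighted
    by the inverse number of stations within a lat/lon box, so isolated
    sites receive larger normalized weights."""
    lats, lons = np.asarray(lats), np.asarray(lons)
    counts = np.array([
        np.sum((np.abs(lats - la) < radius_deg) & (np.abs(lons - lo) < radius_deg))
        for la, lo in zip(lats, lons)
    ])
    w = 1.0 / counts
    return w / w.sum()

# A dense European cluster plus one remote low-AOD site
lats = [48.0, 48.5, 49.0, 50.0, -20.0]
lons = [2.0, 2.5, 3.0, 4.0, -150.0]
aod = [0.25, 0.24, 0.26, 0.25, 0.05]

w = density_weights(lats, lons)
print("unweighted mean:", np.mean(aod))
print("weighted mean:  ", np.dot(w, aod))
```

Because the clean remote site gets a large weight, the weighted network mean drops below the unweighted one, which is the direction of the effect described above.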

Earlier work at MetNo on NRT processing of AERONET level 2 data, which avoids cloud contamination, explains the bias differences between the two methods. AERONET version 3, used by MetNo, has an efficient automatic NRT cloud screening implemented. AERONET version 2, used by ECMWF, is still prone to cloud contamination, making the AERONET values 20% higher on average and the model bias negative, as seen for August 2018.

ECMWF evaluates AOD at 500 nm, while MetNo evaluates at 550 nm. Any impact on the AOD bias differences is probably small, but it has not been quantified.
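AOD at the two wavelengths is related through the standard Ångström power law, AOD(λ) = AOD(λ₀) · (λ/λ₀)^(−α). A short sketch, with an assumed illustrative Ångström exponent of 1.3 (a typical fine-mode-dominated value, not a number from the source):

```python
def aod_at_wavelength(aod_ref, lam_ref_nm, lam_nm, angstrom_exp):
    """Angstrom power law: AOD(lam) = AOD(lam_ref) * (lam/lam_ref)**(-alpha)."""
    return aod_ref * (lam_nm / lam_ref_nm) ** (-angstrom_exp)

# AOD of 0.20 at 500 nm, extrapolated to 550 nm with alpha = 1.3
print(round(aod_at_wavelength(0.20, 500.0, 550.0, 1.3), 4))
```

The 500-to-550 nm step changes AOD by roughly 10% only when α is large, and both model and observation shift in the same direction, so the residual effect on the bias is indeed expected to be small.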

Daily averaging of the AERONET data is also done differently at ECMWF and MetNo, creating network-average differences (based on work in 2014): MetNo's averaging yields a 4% smaller observed network-average AOD. This translates into a correspondingly more positive bias for the model in MetNo's evaluation, because daily means are used from the model.
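How the averaging convention alone can shift the observed mean can be sketched with two plausible (hypothetical, not the documented ECMWF or MetNo) conventions: a direct mean over all retrievals versus hourly means averaged into a daily mean, which damps densely sampled hours:

```python
import numpy as np

# One day of (hour, AOD) retrievals at a single site; invented values with
# denser sampling around midday, when AOD here happens to be higher.
retrievals = [(8, 0.10), (11, 0.30), (11, 0.32), (12, 0.31), (12, 0.33), (16, 0.12)]

# Convention A: daily mean over all individual retrievals
mean_all = np.mean([a for _, a in retrievals])

# Convention B: hourly means first, then the daily mean of those,
# so each hour contributes equally regardless of sampling density
hours = sorted({h for h, _ in retrievals})
hourly = [np.mean([a for h2, a in retrievals if h2 == h]) for h in hours]
mean_hourly = np.mean(hourly)

print(f"mean of all retrievals: {mean_all:.4f}")
print(f"mean of hourly means:   {mean_hourly:.4f}")
```

Here convention B gives a smaller daily mean than convention A for identical input data, illustrating how a methodological choice of this kind can produce a few-percent offset in the network-average AOD.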