Background on CEMS-Flood medium/extended-range forecasting

The forecast quality of CEMS-Flood medium/extended-range forecasts (out to a 10-day lead time for EFAS, and 30 days for GloFAS) has been evaluated in order to provide additional information to users to aid decision-making. The forecast evaluation method is described in Harrigan et al. (2023) and summarised below. The results are provided in each new model version description (for EFAS: EFAS operational system; for GloFAS: GloFAS operational system) and are also summarised as a new headline score included as a "forecast skill" layer on the CEMS-Flood web map viewers: EFAS medium-range forecast skill product and GloFAS forecast skill product.

Headline medium/extended-range forecast skill score 

The headline medium/extended-range ensemble forecast skill score is the maximum lead time (in days), up to 10 days ahead, at which the Continuous Ranked Probability Skill Score (CRPSS) is greater than 0.5 when compared to a simple persistence benchmark forecast, using the CEMS-Flood historical Forced simulation (sfo) as proxy observations. Forecast skill is calculated using river discharge reforecasts for a set of past dates, based on a configuration as close as possible to the operational setting. ECMWF-ENS medium and extended-range reforecasts are used; these are run twice per week for the past 20 years with 11 ensemble members.

Scores are shown on the CEMS-Flood information systems (for EFAS: EFAS map viewer; for GloFAS: GloFAS map viewer) in the 'Medium-range / forecast skill' layer under the 'Evaluation' menu (an example of the layer is shown here for EFAS: EFAS medium-range forecast skill product). For each fixed reporting point, the maximum lead time at which the CRPSS is greater than 0.5 is given; stations with darker purple circles have skill at longer lead times. The category "0-1", marked with light pink circles, represents stations whose forecast skill is below the 0.5 threshold at all lead times, or above it for less than one day. Note: this does not mean that a station has no skill. Only when the CRPSS ≤ 0 does the forecast have no skill when compared to a persistence benchmark forecast.
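As a sketch of how the headline score could be derived from per-lead-time CRPSS values, the hypothetical function below returns the maximum lead time (in days) at which the CRPSS exceeds the 0.5 threshold, or 0 when no lead time does (the "0-1" category). The function name and sample values are illustrations, not the operational implementation:

```python
import numpy as np

def headline_skill_score(crpss_by_lead, threshold=0.5):
    """Maximum lead time (in days) at which CRPSS exceeds the threshold.

    crpss_by_lead: sequence of CRPSS values, index 0 = day-1 lead time.
    Returns 0 when no lead time exceeds the threshold ("0-1" category);
    this does not imply no skill, which requires CRPSS <= 0.
    """
    crpss = np.asarray(crpss_by_lead, dtype=float)
    skilful = np.where(crpss > threshold)[0]
    if skilful.size == 0:
        return 0
    return int(skilful.max() + 1)  # convert 0-based index to lead time in days

# Hypothetical CRPSS values for lead times 1..10 days:
print(headline_skill_score([0.8, 0.75, 0.7, 0.6, 0.55, 0.45, 0.4, 0.3, 0.2, 0.1]))  # → 5
```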

Method

Reforecasts 

CEMS-Flood hydrological reforecasts for the medium to extended range are generated using the CEMS-Flood hydrological modelling chain forced by the ECMWF-ENS medium and extended-range reforecasts, which are generated every Monday and Thursday for the same date in the past 20 years, with 11 ensemble members, out to a lead time of 46 days. From EFAS v5 and GloFAS v4, they are generated every day out to lead times of 10 days (9-km resolution from EFAS v5 and GloFAS v4; 18-km resolution for earlier versions) and 46 days (36-km resolution).

Benchmark forecast

EFAS

A widely used benchmark forecast for short to medium-range forecast skill evaluation is a hydrological persistence forecast (Alfieri et al., 2014; Pappenberger et al., 2015). Here, the 6-hourly river discharge value of the EFAS historical Forced simulation (sfo) from the time step preceding reforecast initialisation is used for all lead times out to 10 days ahead for EFAS. For example, for the reforecast initialised at 00 UTC on 3 January 1999, the mean 6-hourly river discharge value from 18 UTC 2 January 1999 to 00 UTC 3 January 1999 is extracted from the EFAS historical Forced simulation (sfo) and persisted for all lead times.
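A minimal sketch of the persistence benchmark construction, assuming the sfo discharge is available as a simple array of 6-hourly values (the function name and discharge numbers are illustrative, not the operational code):

```python
import numpy as np

def persistence_benchmark(sfo_series, init_index, n_leads):
    """Persist the sfo river discharge value from the time step preceding
    reforecast initialisation across all lead times.

    sfo_series: 1-D array of 6-hourly sfo river discharge values (m3/s)
    init_index: index of the reforecast initialisation time step
    n_leads:    number of 6-hourly lead-time steps (e.g. 40 for 10 days)
    """
    persisted_value = sfo_series[init_index - 1]  # time step before initialisation
    return np.full(n_leads, persisted_value)

# Hypothetical sfo discharge series; reforecast initialised at index 8:
sfo = np.array([102.0, 104.5, 103.2, 101.8, 100.9, 99.7, 98.4, 97.1, 96.5])
bench = persistence_benchmark(sfo, init_index=8, n_leads=40)
print(bench[0], bench[-1])  # the same value at every lead time
```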

GloFAS

Following Pappenberger et al. (2015), and because GloFAS produces seamless forecasts across short, medium and extended lead times (day 1 to 30), two benchmarks are considered here, each calculated for all GloFAS diagnostic river points: persistence, typically used for short lead times where the forecast signal is dominated by the serial correlation of river discharge, and climatology, typically used for longer lead times where the forecast signal is dominated by the seasonality of river discharge. They are defined as follows:

  • persistence benchmark forecast, defined as the single GloFAS-ERA5 daily river discharge value of the day preceding the reforecast start date. The same river discharge value is used for all lead times.
  • climatology benchmark forecast, based on a 40-year climatological sample (1979-2018) of moving 31-day windows of GloFAS-ERA5 river discharge reanalysis values, centred on the date being evaluated (±15 days). From each 1240-value climatological sample (i.e. 40 years × a 31-day window), 11 fixed quantiles (Qn) at 10% intervals were extracted (Q0, Q10, Q20, …, Q80, Q90, Q100). The fixed-quantile climate distribution used therefore varies by lead time, capturing the temporal variability in local river discharge climatology.
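A minimal sketch of the fixed-quantile climatology benchmark described above, assuming daily reanalysis values arranged as a (years × days-of-year) array; the function name and synthetic data are illustrative, not the operational implementation:

```python
import numpy as np

def climatology_quantiles(reanalysis_by_doy, doy, window=15):
    """Fixed-quantile climatology benchmark for one day of year.

    reanalysis_by_doy: 2-D array (n_years, 365) of daily river discharge
    doy:               0-based day of year being evaluated
    window:            half-width of the window (±15 days → 31-day window)

    Returns 11 quantiles Q0, Q10, ..., Q100 of the pooled sample
    (n_years × 31 values, e.g. 40 × 31 = 1240 for 1979-2018).
    """
    n_days = reanalysis_by_doy.shape[1]
    days = np.arange(doy - window, doy + window + 1) % n_days  # wrap year ends
    sample = reanalysis_by_doy[:, days].ravel()
    return np.quantile(sample, np.linspace(0.0, 1.0, 11))

# Synthetic 40-year daily discharge "reanalysis" for illustration:
rng = np.random.default_rng(0)
clim = rng.gamma(2.0, 50.0, size=(40, 365))
q = climatology_quantiles(clim, doy=10)
print(q.shape)  # → (11,)
```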

Proxy observations

Forecast skill is evaluated against the CEMS-Flood historical Forced simulation (sfo)/reanalysis, as a proxy for river discharge observations, for all fixed reporting point stations across the CEMS-Flood domain. The advantage of using the sfo/reanalysis instead of in situ river discharge observations is that forecast skill can be determined independently of the hydrological model error, and the complete spatial and temporal coverage means forecast skill can be determined across the full spatial domain. Users must be aware that the key assumption of the proxy observation approach is that the CEMS-Flood hydrological model performance, on which the sfo/reanalysis is based, is reasonably good for the station of interest. If the hydrological model performance is poor, then particular care must be taken in interpreting forecast skill scores.

Skill score

The ensemble forecast performance is evaluated using the Continuous Ranked Probability Score (CRPS) (Hersbach, 2000), one of the most widely used headline scores for probabilistic forecasts. The CRPS compares the continuous cumulative distribution of an ensemble forecast with the distribution of the observations. It has an optimum value of 0 and measures the error in the same units as the variable of interest (here river discharge in m³ s⁻¹). It collapses to the mean absolute error for deterministic forecasts (as is the case here for the single-valued persistence benchmark forecast). To calculate forecast skill, the CRPS is expressed as a skill score, the CRPSS, which measures the improvement over a benchmark forecast and is given by:

\[ \mathrm{CRPSS} = 1 - \frac{\mathrm{CRPS}_{fc}}{\mathrm{CRPS}_{bench}} \]

A CRPSS value of 1 indicates a perfect forecast, CRPSS > 0 shows forecasts are more skilful than the benchmark, CRPSS = 0 shows forecasts are only as accurate as the benchmark, and CRPSS < 0 warns that forecasts are less skilful than the benchmark forecast. The headline EFAS medium-range forecast skill score uses a CRPSS threshold of 0.5 in the summary layer in the EFAS web map viewer; this can be interpreted as the EFAS forecast having 50% less error than the benchmark forecast.
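To illustrate the scores, the sketch below uses a standard sample-based CRPS estimator for a finite ensemble, which reduces to the absolute error for a single-valued forecast such as the persistence benchmark. The member values and observation are hypothetical, and this is not the operational verification code:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS for one ensemble forecast and one observation:
    CRPS = E|X - y| - 0.5 * E|X - X'|.
    For a single-valued forecast the second term vanishes and the score
    reduces to the absolute error |x - y| (the MAE for one case)."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

def crpss(crps_fc, crps_bench):
    """Skill score relative to a benchmark: 1 - CRPS_fc / CRPS_bench."""
    return 1.0 - crps_fc / crps_bench

# Hypothetical 11-member ensemble forecast and proxy observation (m3/s):
ens = np.array([95.0, 98.0, 100.0, 101.0, 103.0, 104.0,
                106.0, 107.0, 109.0, 111.0, 114.0])
obs = 105.0
crps_fc = crps_ensemble(ens, obs)
crps_bench = crps_ensemble(np.array([96.5]), obs)  # single-valued persistence
print(f"CRPSS = {crpss(crps_fc, crps_bench):.2f}")
```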

For EFAS, the CRPSS is calculated with EFAS medium-range reforecasts against a single-valued persistence benchmark forecast, verified against EFAS sfo river discharge simulations as proxy observations. CRPSS headline scores are then mapped on the EFAS map viewer, and CRPSS and CRPS time-series plots are produced for each fixed reporting point station. For GloFAS, the CRPSS is calculated with GloFAS reforecasts against both persistence and climatology benchmark forecasts, verified against the GloFAS-ERA5 river discharge reanalysis as proxy observations. CRPSS headline scores are then mapped on the GloFAS map viewer, and CRPSS and CRPS time-series plots are produced for each fixed reporting point station.

References 

Alfieri, L., Pappenberger, F., Wetterhall, F., Haiden, T., Richardson, D., Salamon, P., 2014. Evaluation of ensemble streamflow predictions in Europe. Journal of Hydrology 517, 913–922.

Harrigan, S., Zsoter, E., Cloke, H., Salamon, P., and Prudhomme, C.: Daily ensemble river discharge reforecasts and real-time forecasts from the operational Global Flood Awareness System, Hydrol. Earth Syst. Sci., 27, 1–19, https://doi.org/10.5194/hess-27-1-2023, 2023.

Hersbach, H., 2000. Decomposition of the Continuous Ranked Probability Score for Ensemble Prediction Systems. Wea. Forecasting 15, 559–570.

Pappenberger, F., Ramos, M.H., Cloke, H.L., Wetterhall, F., Alfieri, L., Bogner, K., Mueller, A., Salamon, P., 2015. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction. Journal of Hydrology 522, 697–713.