Background on EFAS medium-range forecasting

The forecast quality of EFAS medium-range forecasts (out to a 10-day lead time) has been evaluated in order to provide additional information to users to aid decision-making. The forecast evaluation method is described in Harrigan et al. (2020) and the results are summarised in the headline score "medium-range forecast skill" layer on the EFAS web map viewer: EFAS medium-range forecast skill product.

Headline medium-range forecast skill score 

The headline medium-range ensemble forecast skill score is the maximum lead time (in days), up to 10 days ahead, at which the Continuous Ranked Probability Skill Score (CRPSS) is greater than 0.5, when compared with a simple persistence benchmark forecast using the EFAS historical Forced simulation (sfo) as proxy observations. Forecast skill is calculated using river discharge reforecasts for a set of past dates, based on a configuration as close as possible to the operational setting. ECMWF-ENS medium and extended range reforecasts are used; these are run twice per week for the past 20 years with 11 ensemble members.
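
As an illustration only, the minimal Python sketch below shows how such a headline score could be reduced from a set of per-lead-time skill values. The function name `headline_skill` and the CRPSS array are hypothetical, not part of the operational EFAS code.

```python
import numpy as np

def headline_skill(crpss_by_leadtime, threshold=0.5):
    """Maximum lead time (in days) at which the CRPSS exceeds `threshold`.

    crpss_by_leadtime : CRPSS values for lead times of 1 to 10 days.
    A return value of 0 or 1 corresponds to the "0-1" category shown
    on the web map viewer.
    """
    crpss = np.asarray(crpss_by_leadtime)
    above = np.nonzero(crpss > threshold)[0]
    return 0 if above.size == 0 else int(above.max()) + 1  # index 0 is day 1

# Hypothetical CRPSS values for lead times of 1 to 10 days at one station
crpss = [0.82, 0.74, 0.66, 0.58, 0.52, 0.47, 0.41, 0.35, 0.28, 0.22]
print(headline_skill(crpss))  # -> 5 (skilful out to 5 days at the 0.5 level)
```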

Scores are shown on the EFAS map viewer in the 'Medium-range forecast skill' layer under the 'Evaluation' menu (an example of the layer is shown here: EFAS medium-range forecast skill product). For each fixed reporting point, the maximum lead time at which the CRPSS is greater than 0.5 is given; darker purple circles indicate stations that retain skill at longer lead times. The category "0-1", marked as light pink circles, represents stations for which the CRPSS does not exceed the 0.5 threshold beyond a lead time of one day (or at any lead time). Note: this does not mean that a station has no skill. The forecast only has no skill, compared with the persistence benchmark forecast, when the CRPSS ≤ 0.

Method

Reforecasts 

ECMWF-ENS medium and extended range reforecasts are generated every Monday and Thursday, for the same dates in the past 20 years, with 11 ensemble members out to a lead time of 10 days. For the EFAS v5 forecast skill evaluation, reforecasts are from cycle 48r1 of the ECMWF Integrated Forecasting System (IFS), implemented on 27 June 2023, with a horizontal resolution of ~9 km for the ensemble meteorological outputs. At the time of the release of EFAS v5 on 20 September 2023, reforecasts generated from ECMWF IFS 48r1 for start dates from April to September 2023 were available. These were used to force the EFAS hydrological modelling chain to produce 20 years of twice-weekly river discharge reforecasts with 11 ensemble members for start dates from April to September. Reforecasts from ECMWF IFS 48r1 will continue to be generated operationally each Monday and Thursday for the corresponding dates in the last 20 years (i.e. 2003-2022). EFAS v5 reforecasts therefore only include start dates from part of the calendar year (i.e. April to September), and so results will not reflect forecast skill in other parts of the year. The forecast skill layer will be updated on a quarterly basis as newer reforecast data arrive, e.g. in November 2023 reforecasts will be available from April to October, and so forth.
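
For illustration, the short sketch below lists the reforecast start dates implied by this configuration: for each Monday or Thursday issue date, the same calendar date in each of the previous 20 years. The function name and example date are assumptions for the sketch, not the operational schedule generator, and leap days are ignored for simplicity.

```python
from datetime import date

def reforecast_start_dates(issue_date, n_years=20):
    """Reforecast start dates: the same calendar date in each of the
    previous `n_years` years (2003-2022 for a 2023 issue date).
    Issue dates are Mondays (weekday 0) and Thursdays (weekday 3)."""
    assert issue_date.weekday() in (0, 3), "reforecasts are issued Mon/Thu"
    return [issue_date.replace(year=issue_date.year - k)
            for k in range(1, n_years + 1)]

# Hypothetical example: a Thursday issue date in September 2023
for d in reforecast_start_dates(date(2023, 9, 21))[:3]:
    print(d)  # 2022-09-21, 2021-09-21, 2020-09-21, ...
```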

Benchmark forecast

A widely used benchmark forecast for short- to medium-range forecast skill evaluation is a hydrological persistence forecast (Alfieri et al., 2014; Pappenberger et al., 2015). Here, the 6-hourly river discharge value of the EFAS historical Forced simulation (sfo) from the time step preceding reforecast initialisation is used for all lead times out to 10 days ahead. For example, for the reforecast initialised at 00 UTC 3 January 1999, the mean 6-hourly river discharge value from 18 UTC 2 January 1999 to 00 UTC 3 January 1999 is extracted from the EFAS historical Forced simulation (sfo) and persisted for all lead times.
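
A minimal sketch of how such a persistence benchmark could be built from a 6-hourly sfo series is given below. The pandas series, function name and discharge values are invented for the example; the sketch assumes the series is indexed by the end time of each 6-hour interval.

```python
import numpy as np
import pandas as pd

def persistence_benchmark(sfo_6h, init_time, n_leadtimes=40):
    """Persist the last 6-hourly sfo discharge value before `init_time`
    across all lead times (40 x 6 h = 10 days).

    sfo_6h : pandas Series of 6-hourly river discharge (m3 s-1),
             indexed by the end time of each 6-hour interval.
    """
    last_value = sfo_6h[sfo_6h.index <= init_time].iloc[-1]
    lead_times = [init_time + pd.Timedelta(hours=6 * (k + 1))
                  for k in range(n_leadtimes)]
    return pd.Series(np.full(n_leadtimes, last_value), index=lead_times)

# Hypothetical example for a reforecast initialised at 00 UTC 3 January 1999
times = pd.date_range("1999-01-01 06:00", "1999-01-03 00:00", freq="6h")
sfo = pd.Series(np.linspace(100.0, 130.0, len(times)), index=times)
bench = persistence_benchmark(sfo, pd.Timestamp("1999-01-03 00:00"))
print(bench.iloc[0], bench.iloc[-1])  # the same value at every lead time
```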

Proxy observations

Forecast skill is evaluated against the EFAS historical Forced simulation (sfo), as a proxy for river discharge observations, for fixed reporting point stations across the EFAS domain. The advantage of using sfo instead of in situ river discharge observations is that forecast skill can be determined independently of hydrological model error, and the complete spatial and temporal coverage means forecast skill can be determined across the full EFAS domain. Users must be aware that the key assumption of the proxy observation approach is that the EFAS hydrological model performance, on which sfo is based, is reasonably good for the station of interest. If the hydrological model performance is poor, particular care must be taken in interpreting forecast skill scores.
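
In practice, using sfo as proxy observations amounts to sampling the simulated discharge at each forecast valid time, as in the small sketch below (hypothetical 6-hourly pandas series and variable names, assumed to be indexed by valid time).

```python
import numpy as np
import pandas as pd

# Hypothetical 6-hourly sfo series at one fixed reporting point
times = pd.date_range("1999-01-03 06:00", periods=40, freq="6h")
sfo = pd.Series(np.linspace(120.0, 90.0, 40), index=times)

# Proxy observations for a reforecast initialised at 00 UTC 3 January 1999:
# sfo sampled at every 6-hourly valid time out to 10 days ahead
init = pd.Timestamp("1999-01-03 00:00")
valid_times = [init + pd.Timedelta(hours=6 * (k + 1)) for k in range(40)]
proxy_obs = sfo.reindex(valid_times)
print(proxy_obs.head(2))
```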

Skill score

The ensemble forecast performance is evaluated using the Continuous Ranked Probability Score (CRPS) (Hersbach, 2000), one of the most widely used headline scores for probabilistic forecasts. The CRPS compares the continuous cumulative distribution of an ensemble forecast with the distribution of the observations. It has an optimum value of 0 and measures the error in the same units as the variable of interest (here river discharge in m³ s⁻¹). It collapses to the mean absolute error for deterministic forecasts (as is the case here for the single-valued persistence benchmark forecast). To calculate forecast skill, the CRPS is expressed as a skill score, the CRPSS, which measures the improvement over a benchmark forecast and is given by:

\[ \mathrm{CRPSS} = 1 - \frac{\mathrm{CRPS}_{\mathrm{fc}}}{\mathrm{CRPS}_{\mathrm{bench}}} \]

A CRPSS value of 1 indicates a perfect forecast, CRPSS > 0 shows forecasts more skilful than the benchmark, CRPSS = 0 shows forecasts only as accurate as the benchmark, and CRPSS < 0 warns that forecasts are less skilful than the benchmark forecast. The headline EFAS medium-range forecast skill score uses a CRPSS threshold of 0.5 in the summary layer on the EFAS web map viewer; this can be interpreted as the EFAS forecast having 50% less error than the benchmark forecast.
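
As an illustration of how the CRPS and CRPSS could be computed for one station and lead time, the sketch below uses the standard empirical (energy form) estimator of the CRPS for an ensemble and the mean absolute error for the single-valued persistence benchmark. The arrays are synthetic and the function name is an assumption; it is a sketch of the scoring rule, not the operational evaluation code.

```python
import numpy as np

def crps_ensemble(obs, ens):
    """Empirical CRPS averaged over forecast cases.

    obs : (n_cases,) proxy observations (sfo river discharge, m3 s-1)
    ens : (n_cases, n_members) ensemble forecast values
    Uses the energy-form estimator: E|X - y| - 0.5 * E|X - X'|,
    averaged over all ensemble member pairs for each case.
    """
    obs = np.asarray(obs)[:, None]
    ens = np.asarray(ens)
    term1 = np.abs(ens - obs).mean(axis=1)
    term2 = np.abs(ens[:, :, None] - ens[:, None, :]).mean(axis=(1, 2))
    return (term1 - 0.5 * term2).mean()

rng = np.random.default_rng(42)
n_cases, n_members = 200, 11                      # e.g. 200 cases, 11 members
obs = rng.gamma(2.0, 50.0, n_cases)               # synthetic proxy observations
fc = obs[:, None] + rng.normal(0, 15, (n_cases, n_members))   # skilful ensemble
bench = obs + rng.normal(0, 60, n_cases)          # poorer persistence-like values

crps_fc = crps_ensemble(obs, fc)
crps_bench = np.abs(bench - obs).mean()  # CRPS collapses to MAE for a single value
crpss = 1.0 - crps_fc / crps_bench
print(round(crpss, 2))  # well above 0.5 for this synthetic example
```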

The CRPSS is calculated for EFAS medium-range reforecasts against a single-valued persistence benchmark forecast, verified against EFAS sfo river discharge simulations as proxy observations. CRPSS headline scores are then mapped on the EFAS map viewer, and CRPSS and CRPS time-series plots are produced for each fixed reporting point station.

References 

Alfieri, L., Pappenberger, F., Wetterhall, F., Haiden, T., Richardson, D., Salamon, P., 2014. Evaluation of ensemble streamflow predictions in Europe. Journal of Hydrology 517, 913–922.

Hersbach, H., 2000. Decomposition of the Continuous Ranked Probability Score for Ensemble Prediction Systems. Weather and Forecasting 15, 559–570.

Pappenberger, F., Ramos, M.H., Cloke, H.L., Wetterhall, F., Alfieri, L., Bogner, K., Mueller, A., Salamon, P., 2015. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction. Journal of Hydrology 522, 697–713.