Selection of stations

GloFAS seasonal forecast skill assessment is conducted at all GloFAS diagnostic points.

Background on GloFAS Seasonal forecasting

The forecast quality of GloFAS Seasonal weekly river discharge forecasts (out to a 16-week lead time) has been evaluated to provide additional information to users to aid decision-making. The forecast evaluation method is consistent with that of the GloFAS 30-day method (GloFAS forecast skill) and is summarised below. Results are provided on the following page: GloFAS Seasonal v3.1 forecast skill. They are also summarised as a new headline score included as a "Forecast skill" layer on the GloFAS web map viewer: GloFAS Seasonal forecast skill product.

Headline seasonal forecast skill score 

The headline seasonal forecast skill score is the maximum lead time (in weeks), up to 16 weeks ahead, at which the Continuous Ranked Probability Skill Score (CRPSS) is greater than 0.5, when compared to a climatology benchmark forecast using the GloFAS-ERA5 historical river discharge reanalysis (also known as the Forced simulation (sfo) within CEMS) as proxy observations (Harrigan et al., 2020). Forecast skill is calculated using river discharge reforecasts for a set of past dates, based on a configuration as close as possible to the operational setting. ECMWF SEAS5 reforecasts are used, run once per month for the 36-year period 1981-2016 with 25 ensemble members.
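The headline score defined above can be sketched in a few lines. This is an illustrative reading of the definition, not GloFAS code: the function name `headline_skill_score` and the interpretation "largest lead time whose CRPSS exceeds the threshold" are assumptions.

```python
def headline_skill_score(crpss_by_week, threshold=0.5):
    """Headline seasonal skill score: the maximum lead time (in weeks)
    at which CRPSS exceeds `threshold`, or 0 ("category 0") if no lead
    time does.

    crpss_by_week: iterable of CRPSS values for lead times 1, 2, ... weeks
                   (up to 16 for GloFAS Seasonal).
    """
    weeks_above = [week for week, score in enumerate(crpss_by_week, start=1)
                   if score > threshold]
    return max(weeks_above, default=0)
```

For example, a station with CRPSS values of 0.8, 0.6, 0.4, 0.55 over the first four weeks would receive a headline score of 4, while a station never exceeding 0.5 falls into category "0".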

Scores are shown on the GloFAS map viewer in the 'Seasonal forecast skill' layer under the 'Evaluation' menu (an example of the layer is shown here: GloFAS Seasonal forecast skill product). For each GloFAS web reporting point, the maximum lead time at which the CRPSS is greater than 0.5 is given; stations shown as purple squares retain skill out to the longest lead times. The category "0", marked as light pink squares, represents stations whose forecast skill is below the 0.5 threshold at all lead times. Note: this does not mean that a station has no skill. Only when the CRPSS ≤ 0 does the forecast have no skill when compared to a climatology benchmark forecast.



ECMWF SEAS5 reforecasts are generated on the first of each month for the 36-year period 1981-2016 for 25 ensemble members and used out to a lead time of 16 weeks. These are then forced through the GloFAS Seasonal hydrological modelling chain to produce 36 years of river discharge reforecasts, once per month, for 25 ensemble members at a weekly time-step.
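The weekly time-step mentioned above implies aggregating the daily output of the hydrological chain into lead-time weeks. The exact GloFAS aggregation is not specified here, so the sketch below shows one plausible approach (weekly means over an array of shape `(lead_days, members)`); the function name `weekly_means` is an assumption.

```python
import numpy as np

def weekly_means(daily, days_per_week=7):
    """Aggregate daily river discharge to a weekly time-step by averaging.

    daily: array of shape (lead_days, members), e.g. (112, 25) for a
           16-week, 25-member reforecast.
    Returns an array of shape (n_weeks, members).
    """
    n_weeks = daily.shape[0] // days_per_week
    trimmed = daily[: n_weeks * days_per_week]          # drop any partial week
    return trimmed.reshape(n_weeks, days_per_week, -1).mean(axis=1)
```

Applied to a (112, 25) daily reforecast this yields a (16, 25) weekly ensemble, one row per lead-time week.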

Benchmark forecasts

Following Pappenberger et al. (2015), a climatology benchmark forecast is used; this benchmark is typical for seasonal lead times, where the forecast signal is dominated by the seasonality of river discharge. It is defined as follows:

  • climatology benchmark forecast based on a 40-year climatological sample (1979-2018) of moving 4-weekly windows of GloFAS-ERA5 river discharge reanalysis values, centred on the date being evaluated (± 15 days). From each climatological sample, 11 fixed quantiles (Qn) at 10 % intervals were extracted (Q0, Q10, Q20, …, Q80, Q90, Q100). The fixed quantile climate distribution used therefore varies by time of year, capturing the temporal variability in local river discharge climatology.
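The climatology benchmark above can be illustrated as follows. This is a minimal sketch under stated assumptions, not the operational implementation: the function name `climatology_benchmark`, the day-of-year encoding, and the circular window handling are choices made for illustration.

```python
import numpy as np

def climatology_benchmark(reanalysis, day_of_year, target_doy, window_days=15):
    """Build an 11-quantile climatology benchmark forecast for one date.

    reanalysis:  1-D array of daily river discharge values (proxy climate sample)
    day_of_year: matching array of day-of-year integers (1..366)
    target_doy:  day of year being evaluated
    window_days: half-width of the moving window (± 15 days, i.e. ~4 weeks)
    """
    doy = np.asarray(day_of_year)
    # circular distance in days, so windows wrap around the year boundary
    dist = np.minimum(np.abs(doy - target_doy), 366 - np.abs(doy - target_doy))
    sample = np.asarray(reanalysis, dtype=float)[dist <= window_days]
    # 11 fixed quantiles at 10 % intervals: Q0, Q10, ..., Q100
    return np.quantile(sample, np.linspace(0.0, 1.0, 11))
```

The 11 quantiles then act as an 11-member "ensemble" benchmark whose distribution varies with the time of year, as described above.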

Proxy observations

Seasonal forecast skill is evaluated against the GloFAS-ERA5 river discharge reanalysis, as a proxy for river discharge observations, at diagnostic points across the GloFAS domain. The advantages of using a reanalysis instead of in situ river discharge observations are that forecast skill can be determined independently of hydrological model error, and that coverage is spatially and temporally complete, so forecast skill can be determined across the full GloFAS domain. Users must be aware that the key assumption of the proxy-observation approach is that GloFAS-ERA5 hydrological performance is reasonably good at the station of interest. If the hydrological model performance is poor, then particular care must be taken in interpreting forecast skill scores. A full assessment of the hydrological performance of GloFAS-ERA5 against a global network of observations can be found in Harrigan et al. (2020).

Skill score

The ensemble forecast performance is evaluated using the Continuous Ranked Probability Score (CRPS) (Hersbach, 2000), one of the most widely used headline scores for probabilistic forecasts. The CRPS compares the continuous cumulative distribution of an ensemble forecast with that of the observations. It has an optimum value of 0 and measures the error in the same units as the variable of interest (here river discharge, in m³ s⁻¹), and it collapses to the mean absolute error for deterministic (single-valued) forecasts. To calculate forecast skill, the CRPS is expressed as a skill score, the CRPSS, which measures the improvement over a benchmark forecast and is given by:

\[ \mathrm{CRPSS} = 1 - \frac{\mathrm{CRPS}_{fc}}{\mathrm{CRPS}_{bench}} \]

A CRPSS value of 1 indicates a perfect forecast, CRPSS > 0 shows forecasts more skilful than the benchmark, CRPSS = 0 shows forecasts only as accurate as the benchmark, and CRPSS < 0 warns that forecasts are less skilful than the benchmark forecast. The headline GloFAS Seasonal forecast skill score uses a CRPSS threshold of 0.5 in the summary layer of the GloFAS web map viewer; this can be interpreted as the GloFAS forecast being 50% more accurate than the benchmark forecast.
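The CRPS and CRPSS definitions above can be sketched numerically. The CRPS implementation below uses the standard "energy" form of the empirical ensemble CRPS (mean absolute error against the observation minus half the mean absolute spread between members), which is consistent with the Hersbach (2000) score; the function names are illustrative, not GloFAS code.

```python
import numpy as np

def crps_ensemble(ensemble, obs):
    """Empirical CRPS of an ensemble forecast against one observation,
    in the energy form:
        CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j|.
    For a single-member (deterministic) forecast the second term vanishes
    and the score collapses to the absolute error."""
    x = np.asarray(ensemble, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

def crpss(crps_fc, crps_bench):
    """Skill score relative to a benchmark: 1 - CRPS_fc / CRPS_bench."""
    return 1.0 - crps_fc / crps_bench
```

For instance, a forecast CRPS of 1.0 m³ s⁻¹ against a benchmark CRPS of 2.0 m³ s⁻¹ gives CRPSS = 0.5, the headline threshold used in the map layer.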

The CRPSS is calculated for GloFAS Seasonal reforecasts against a climatology benchmark forecast and verified against GloFAS-ERA5 river discharge reanalysis as proxy observations. CRPSS headline scores are then mapped on the GloFAS map viewer, and CRPSS and CRPS time-series plots are produced for each fixed reporting point station.


Harrigan, S., Zsoter, E., Alfieri, L., Prudhomme, C., Salamon, P., Wetterhall, F., Barnard, C., Cloke, H., and Pappenberger, F., 2020. GloFAS-ERA5 operational global river discharge reanalysis 1979–present, Earth Syst. Sci. Data, 12, 2043–2060.

Hersbach, H., 2000. Decomposition of the Continuous Ranked Probability Score for Ensemble Prediction Systems. Wea. Forecasting 15, 559–570.

Pappenberger, F., Ramos, M.H., Cloke, H.L., Wetterhall, F., Alfieri, L., Bogner, K., Mueller, A., Salamon, P., 2015. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction. Journal of Hydrology 522, 697–713.