This section describes how the anomaly and uncertainty of the ensemble forecast are determined, using the climatology as reference, and more generally how the probabilities of the 7 anomaly categories and the uncertainty categories of the forecasts are computed. The procedure is generic: it is the same for EFAS and GloFAS, as it is executed the same way for each river pixel regardless of the resolution, and it is also the same for the sub-seasonal and seasonal products, as it operates on the weekly (sub-seasonal) or monthly (seasonal) mean discharge values in the same way.
Climatological bins and anomaly categories
Currently, the sub-seasonal climate sample uses 660 reforecast values, while the seasonal one uses 500 values. From the climate sample, 99 climate percentiles are determined, which delimit equally likely (1% chance) segments of the river discharge value range that occurred in the 20-year climatological sample (both the sub-seasonal and the seasonal climatology are currently based on 20 years). Figure 1 shows an example generic climate distribution, based either on weekly or on monthly means, with the percentiles represented along the y-axis. Only the deciles (every 10%), the quartiles (25%, 50% and 75%, of which the middle one, 50%, is also called the median) and a few of the extreme percentiles near the minimum and maximum of the climatological range are indicated by black crosses. Each of these percentiles has an equivalent river discharge value along the x-axis. From one percentile to the next, the river discharge value range is divided into 100 equally likely bins, some of which are indicated in Figure 1, such as bin1 of values below the 1st percentile, bin2 of values between the 1st and 2nd percentiles, or bin100 of river discharge values above the 99th percentile.
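As an illustration of how the climate percentiles and bins could be derived, the following minimal Python sketch computes the 99 percentiles from a climatological sample (the variable names, the input file and the use of numpy are assumptions for illustration, not the operational implementation):

```python
import numpy as np

# Climatological sample of weekly (sub-seasonal) or monthly (seasonal) mean
# discharge values for one river pixel, e.g. 660 (sub-seasonal) or 500
# (seasonal) reforecast values; the file name is purely hypothetical.
climate_sample = np.loadtxt("climate_sample_weekly_mean.txt")

# The 99 climate percentiles (1%, 2%, ..., 99%) delimit the 100 equally
# likely bins of the climatological river discharge range.
climate_percentiles = np.percentile(climate_sample, np.arange(1, 100))
```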
Figure 1. Schematic of the forecast anomaly categories, defined by the climatological distribution.
Based on the percentiles and the related 100 bins, seven anomaly categories are defined (Table 1). These are also indicated in Figure 1 by shading. The two most extreme categories are the bottom and top 10% of the climatological distribution (below 10% in red and above 90% in blue). The moderately low and high river discharge categories cover 10-25% (orange) and 75-90% (middle-dark blue). The smallest negative and positive anomalies are defined by 25-40% and 60-75%, displayed in yellow and light blue in Figure 1. Finally, the normal condition category is defined as 40-60%, i.e. the middle 1/5th of the distribution, coloured grey in Figure 1.
Category | Name | Ranks | Description |
Cat-1 | Extreme low | 1-10 | bottom 10% of the climatological distribution |
Cat-2 | Low | 10-25 | 15%, from the 1st decile to the 1st quartile |
Cat-3 | Bit low | 25-40 | 15%, from the 1st quartile to the 2nd quintile |
Cat-4 | Near normal | 40-60 | 20%, from the 2nd to the 3rd quintile |
Cat-5 | Bit high | 60-75 | 15%, from the 3rd quintile to the 3rd quartile |
Cat-6 | High | 75-90 | 15%, from the 3rd quartile to the 9th decile |
Cat-7 | Extreme high | 90-100 | top 10% of the climatological distribution |
Table 1: Definition and description of the 7 anomaly categories. The possible value ranges in the 'Ranks' column are inclusive at the start and exclusive at the end, so for example for Cat-1 the possible ranks are 1, 2, 3, ... and 10. Depending on the product, sometimes the middle three categories (Cat-3, Cat-4 and Cat-5) are combined into one extended 'Near normal' category.
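To make the category boundaries of Table 1 explicit, the sketch below maps an integer extremity rank (1-100) to one of the 7 categories; it assumes that a boundary rank belongs to the lower category, consistent with Cat-1 covering ranks 1 to 10 as stated in the table caption:

```python
# Upper rank bound of each anomaly category (see Table 1).
ANOMALY_CATEGORIES = [
    (10, "Cat-1 Extreme low"),
    (25, "Cat-2 Low"),
    (40, "Cat-3 Bit low"),
    (60, "Cat-4 Near normal"),
    (75, "Cat-5 Bit high"),
    (90, "Cat-6 High"),
    (100, "Cat-7 Extreme high"),
]

def anomaly_category(rank):
    """Map an extremity rank (1-100) to one of the 7 anomaly categories.

    Assumes the boundary rank belongs to the lower category, so e.g.
    rank 10 is still 'Extreme low' (cf. the Table 1 caption).
    """
    for upper_bound, name in ANOMALY_CATEGORIES:
        if rank <= upper_bound:
            return name
    raise ValueError("rank must be between 1 and 100")
```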
Forecast extremity rank computation
The forecast has 51 ensemble members, for both EFAS and GloFAS and for both the sub-seasonal and seasonal products. Each member is placed in one of the 100 climate bins. This gives the anomaly or extremity level of the ensemble member, hereafter called its rank, as one of the values from 1 to 100. For example, a rank of 1 means the forecast value is below the 1st climate percentile (i.e. extremely anomalously low), a rank of 2 means the value is between the 1st and 2nd climate percentiles (i.e. slightly less extremely low), and so on, while a rank of 100 means the forecast value is above the 99th climate percentile (i.e. extremely high, as it is higher than 99% of all the considered reforecasts representing the model climate conditions for this time of year, location and lead time).
Figure 2 shows the process of determining the ranks for each ensemble member. In this example, the lowest member gets the rank of 54 (red r54 on the graph in Figure 2) by moving vertically until crossing the climatological distribution and then moving horizontally to the y-axis to determine the two bounding percentiles and thus the right percentile bin. In this case, the lowest ensemble member value is between the 53rd and 54th percentiles, which is bin54. Then all ensemble members similarly get a bin number, the 2nd lowest value bin60 and so on, until the largest ensemble member value gets bin97, as its river discharge value is between the 96th and 97th percentiles.
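A minimal sketch of the rank computation for the whole ensemble, assuming the 99 climate percentile values are available in ascending order (the function name and the handling of values exactly equal to a percentile are assumptions):

```python
import numpy as np

def extremity_ranks(ensemble_values, climate_percentiles):
    """Place each ensemble member into one of the 100 climate bins.

    climate_percentiles: the 99 climate percentile values in ascending order.
    Returns integer ranks from 1 (below the 1st percentile) to 100
    (above the 99th percentile).
    """
    # searchsorted returns 0 for values below the 1st percentile and 99 for
    # values above the 99th percentile; adding 1 gives ranks 1 to 100.
    return np.searchsorted(climate_percentiles, np.asarray(ensemble_values)) + 1
```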
Figure 2. Schematic of the forecast extremity ranking of the 51 ensemble members and the 7 anomaly categories in the context of the climatological distribution.
The probability of the 7 anomaly categories is calculated by counting the ensemble members in each category and dividing by 51, the total number of members. In the example of Figure 2, there is no member in the 3 low flow anomaly categories, while the normal category has 2 members (3.9% probability), the bit high category 13 (27.5%), the high category 17 (33.3%) and finally the extreme high category 18 ensemble members (35.3% probability). The inset table in Figure 2 shows the member counts and probabilities, but also the size (in terms of climatological probability) of the 7 categories. This highlights, e.g., that the normal flow category's 3.9% probability is much lower than the climatologically expected probability of 20%, while the 3 high flow categories have much higher probabilities than the climatological reference, especially the extreme high category, where the forecast probability (35.3%) is more than three times the corresponding climatological probability (10%).
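The probability computation itself is a simple counting exercise; a sketch, reusing the rank boundaries of Table 1 (the 18/51 = 35.3% value of the extreme high category in Figure 2 is reproduced this way):

```python
import numpy as np

# Rank boundaries of the 7 anomaly categories (see Table 1).
CATEGORY_BOUNDS = [0, 10, 25, 40, 60, 75, 90, 100]

def category_probabilities(ranks):
    """Probability of each of the 7 anomaly categories from the member ranks."""
    ranks = np.asarray(ranks)
    counts = [np.sum((ranks > lo) & (ranks <= hi))
              for lo, hi in zip(CATEGORY_BOUNDS[:-1], CATEGORY_BOUNDS[1:])]
    # Divide by the ensemble size (51) to get probabilities, e.g. 18/51 = 0.353.
    return np.array(counts) / ranks.size
```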
Rank computation for 0 values
The forecast extremity rank computation can be done for any value above 0 m3/s. However, it becomes undefined when the values drop to 0, as there is no way to differentiate the ranks of identical values. The simulations are also less reliable as they approach 0, so everything below 0.1 m3/s is considered as 0 for the sub-seasonal and seasonal products. The same problem can in principle occur for non-zero values, but normally the simulation should not produce many identical non-zero values, unless some specific process, such as a reservoir operation rule, generates such a signal. There is no indication that identical non-zero values are an issue at all, but the 0 values clearly are a major problem, as large parts of the world have areas dry enough, often combined with small enough catchments, to produce near-zero or exactly 0 river discharge values.
For the forecast rank computation in this 0-value singularity case, a special solution was developed. All the 0 ensemble member values (i.e. all values below 0.1 m3/s) are assigned ranks that evenly represent the percentiles that have 0 values (i.e. below 0.1 m3/s) in the model climatology. In practice, this means that the 'rank-undefined' section of the ensemble forecast is spread evenly across the 'rank-undefined' section of the climatology during the rank computation.
Figure 3 demonstrates the process on an example, where the lowest 77 percentiles are 0 in the climatology and 23 out of the 51 ensemble members are also 0 (see Figure 3a). The 23 zero-valued ensemble members are then spread across the 0-value range of the climatology from 1 to 77 (see Figure 3b). This way the ranks of the 23 members are assigned from 1 to 77 with as equal spacing as possible (see Figure 3c). Finally, the remaining non-zero ensemble members also get their ranks in the usual way, as described above. The ranks of all 51 members are shown in Figure 3d.
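A sketch of this zero-value rank assignment, assuming the same 0.1 m3/s threshold as above (the function name, the use of numpy.linspace and the rounding to integer ranks are illustrative assumptions):

```python
import numpy as np

ZERO_THRESHOLD = 0.1  # m3/s; discharge below this is treated as 0

def zero_value_ranks(n_zero_members, climate_percentiles):
    """Spread the zero-valued members evenly over the zero part of the climatology.

    If the lowest k climate percentiles are 0 (below the threshold), the
    n_zero_members zero-valued members get ranks spread as evenly as possible
    between 1 and k; if the whole climatology is 0, the spread covers 1 to 100.
    """
    k = int(np.sum(np.asarray(climate_percentiles) < ZERO_THRESHOLD))
    if k == len(climate_percentiles):  # all 99 percentiles are 0
        k = 100
    return np.linspace(1, k, n_zero_members).round().astype(int)
```

For the Figure 3 example, with the lowest 77 percentiles equal to 0, zero_value_ranks(23, climate_percentiles) returns 23 ranks spread as evenly as possible between 1 and 77; the non-zero members are then ranked in the usual way.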
Figure 3. Schematic of the forecast extremity ranking for areas with 0 river discharge values.
In the extreme case of all climate percentiles being 0, which happens over river pixels in the driest places of the world, such as the Sahara, the ensemble forecast member ranks can either be 100 for any non-zero value, regardless of the magnitude of the river discharge, or the evenly spread ranks from 1 to 100 as a representation of the all-0 climatology. In the most extreme case of all 99 climate percentiles being 0 and all 51 forecast members being 0, the ranks of the forecast will be spread from 1 to 100 in equal representation. This means that such a forecast is a perfect representation of the climatological distribution, in other words a perfectly 'normal' condition.
Dominant anomaly category computation for the ensemble forecast
The ensemble forecasts have 51 members, each of which is assigned an extremity rank. Using these 51 ranks, the forecast needs to be assigned one of the 7 anomaly categories. This is done with the arithmetic mean of the 51 ensemble member rank values (rank-mean). This rank-mean is also a number between 1 and 100, but this time a real (not integer) number. A rank-mean of 50.5 is exactly the normal (median) condition, i.e. no anomaly whatsoever. If the rank-mean is below 50.5, drier than normal conditions are forecast; if above 50.5, wetter than normal. The lower/higher the rank-mean is below/above 50.5, the drier/wetter the predicted conditions. The lowest/highest possible value is 1/100, when all ensemble members have rank 1 or 100 (the most extremely dry/wet). Then, based on this rank-mean, we define the anomaly category (one of the 7 categories in Table 1) for the ensemble forecast by placing the rank-mean into the right category, as defined in Table 1 above. For example, all rank-mean values from 40.0 to 60.0, interpreted as 40.0 <= rank-mean < 60.0, are assigned to 'Near normal', or category 4.
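A sketch of the rank-mean and the dominant category assignment, following the 40.0 <= rank-mean < 60.0 example above for the treatment of the category boundaries (function and variable names are illustrative):

```python
import numpy as np

CATEGORY_UPPER_BOUNDS = [10, 25, 40, 60, 75, 90]  # Table 1 boundaries
CATEGORY_NAMES = ["Extreme low", "Low", "Bit low", "Near normal",
                  "Bit high", "High", "Extreme high"]

def dominant_anomaly_category(ranks):
    """Dominant anomaly category of the ensemble from the mean of the 51 ranks."""
    rank_mean = float(np.mean(ranks))  # a real number between 1 and 100
    # Boundaries are applied as lower-inclusive, upper-exclusive intervals,
    # e.g. 40.0 <= rank_mean < 60.0 gives 'Near normal'.
    index = int(np.searchsorted(CATEGORY_UPPER_BOUNDS, rank_mean, side="right"))
    return rank_mean, CATEGORY_NAMES[index]
```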
The ensemble forecast anomaly is not based on the most probable of the 7 anomaly categories, as that would make it prone to jumpiness. For example, in the highly uncertain case of 6, 8, 7, 7, 7, 9, 7 members falling in the 7 anomaly categories, the forecast category (the dominant one) would be the 'High' category (Cat-6), as that has the most members (9). However, nearby river pixels could easily change from this distribution to an only very slightly different one with 7, 9, 7, 7, 7, 7, 7 members in each category, in which case the dominant anomaly category would be the 'Low' category (Cat-2), as now that has the most (again 9) members. These two forecasts are only slightly different in terms of distribution, but the ensemble forecast anomaly categories would be almost the complete opposite of each other, making the signal potentially look very jumpy geographically. With the rank-mean definition we avoid this and simply assign the 'Near normal' category (Cat-4) to both these forecasts, as the means of the ranks are certainly very close to each other (although not checked here) and both quite near the median.
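To make the jumpiness argument concrete, the sketch below compares the 'most populated category' choice with the rank-mean for the two hypothetical member distributions above; since only category counts are given, it places every member at the midpoint rank of its category, which is purely an assumption for illustration:

```python
import numpy as np

# Midpoint rank of each of the 7 anomaly categories (illustrative assumption).
CATEGORY_MIDPOINT_RANKS = np.array([5.5, 17.5, 32.5, 50.5, 67.5, 82.5, 95.5])

for counts in ([6, 8, 7, 7, 7, 9, 7], [7, 9, 7, 7, 7, 7, 7]):
    counts = np.array(counts)
    most_populated = int(np.argmax(counts)) + 1  # Cat-6 vs Cat-2
    approx_rank_mean = float(np.sum(counts * CATEGORY_MIDPOINT_RANKS) / counts.sum())
    print(f"counts={counts.tolist()}  most populated: Cat-{most_populated}  "
          f"approximate rank-mean: {approx_rank_mean:.1f}")
```

Under this midpoint assumption the two rank-means come out at roughly 51.7 and 48.9, i.e. both within the 40-60 'Near normal' range, while the most populated category flips from Cat-6 to Cat-2.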
There is a consequence of the 0-value problem over dry or very dry areas (described above): some or all of the low anomaly signals become impossible. If only the lowest 10% of the climatological distribution is 0, then the ensemble forecast anomaly (defined by the rank-mean) simply cannot fall into the extreme dry category, and the lowest possible one is the 'Low' category with 10-25%. Similarly, if the lowest 25% is zero in the climatology, then the lowest possible anomaly signal is 'Bit low', i.e. the 25-40% category. Then, if the lowest 40% is zero, the ensemble forecast anomaly cannot be lower than 'Near normal'. All this makes sense, as it does not mean anything for those dry places to be below, say, the 40th percentile when all of those values are 0, since river discharge cannot go below zero. For all these mixed or very dry areas, the number and distribution of the positively anomalous ensemble members determine whether the anomaly stays 'Near normal' or moves into the high categories. If enough members are above the non-zero climate percentiles, and thus a high enough fraction of the 51-member ensemble forecast gets high enough ranks, then the distribution of the 51 ensemble member ranks shows a pronounced enough shift from the neutral/normal situation and their rank-mean is high enough to fall into one of the high anomaly categories.
Forecast uncertainty category computation for the ensemble forecast
In addition to the forecast anomaly and the related 7 anomaly categories, the forecast uncertainty will also be represented on the new river network and basin summary map products. The uncertainty will be represented by 2 or 3 categories, such as low/high or low/medium/high uncertainty (Table 4; which version is still to be decided).
The uncertainty is defined by the standard deviation of the ensemble member ranks. The standard deviation (std) of a uniform distribution of ranks spanning 1 to 100 is (100-1)/sqrt(12) = 28.58, while the largest possible std occurs when half of the members (say 25) have rank 1 and the other half (say 26) rank 100, in which case std = sqrt( (51 * 49.5^2) / 51 ) = 49.5. Based on these two extreme cases, the 2-value uncertainty categories (low/high uncertainty) would be defined as std values of <15 and >=15, while the 3-value version (low/medium/high uncertainty) as std values of <10, 10 to <20 and >=20 (potentially to be further adjusted), as listed below in Table 4.
2-category version (low/high uncertainty):
Category | Name | Std of the member ranks |
Cat-1 | Low uncertainty | std < 15 |
Cat-2 | High uncertainty | std >= 15 |

3-category version (low/medium/high uncertainty):
Category | Name | Std of the member ranks |
Cat-1 | Low uncertainty | std < 10 |
Cat-2 | Medium uncertainty | 10 <= std < 20 |
Cat-3 | High uncertainty | std >= 20 |
Table 4: Uncertainty category definitions.
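A sketch of the uncertainty category computation, assuming the thresholds of Table 4 as currently written (they may still be adjusted); note that numpy's default standard deviation divides by the number of members (51), consistent with the formula above:

```python
import numpy as np

def uncertainty_category(ranks, three_categories=True):
    """Uncertainty category from the standard deviation of the 51 member ranks."""
    std = float(np.std(ranks))  # from 0 (all members identical) up to about 49.5
    if three_categories:        # 3-category version of Table 4
        return "Low" if std < 10 else ("Medium" if std < 20 else "High")
    return "Low" if std < 15 else "High"  # 2-category version of Table 4
```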