Quantile-Based Weekly Guidance Maps (EXPERIMENTAL)
Why this name?
→ Charts are mainly based on forecast and re-forecast quantiles, for week-long periods. "Guidance" indicates that the content targets users who have to convey key forecast aspects to the general public and other customers. Such customers, if suitably informed, can also interpret the charts directly. Charts are in map format.
What lead times are these forecasts for?
→ The output is for calendar weeks (00Z Monday to 24Z Sunday), in common with most other extended range products.
What variables are catered for?
→ In common with pre-existing extended range weekly mean anomaly charts, we provide these maps for (i) average 2m temperature, (ii) average surface temperature, (iii) total (7-day) precipitation, and (iv) average mean sea level pressure.
What is the information content?
→ There are two key components: a measure of the average (shown with colour fill), and a measure of the spread of the forecast (shown by contours and transparent grey shading). In both cases these are defined relative to characteristics of the extended range model climate.
How do we represent these components?
→ The underlying philosophy for this product is that we work with quantiles.
- So for the "average" component we plot, using colour fill, the median value of the forecast anomalies delivered by the members of Extended Range ensemble . These anomalies are expressed relative to the model climate mean (for the sites in question, for the time of year in question and for the lead time range in question). "Mean" here implies, for example, average 2m temperature over the week in question, or average accumulated rainfall over the week in question. As we use the ensemble "median" the key point for users is that (according to the ensemble) it is equally likely that the observed anomaly will lie above or below the plotted value.
- For the "spread" component we plot an "interdecile range ratio", or interdecile metric for short. This is the difference between the 10th and 90th percentiles of the forecasts of the members of the Extended Range ensemble, divided by the difference between the 10th and 90th percentiles of the Extended Range model climate (for the sites in question, for the time of year in question and for the lead time range in question). For rainfall a minor adjustment is made to avoid division by zero errors. A value of 1 for this metric means that, in regard to the spread of possible outcomes, there is little information content in the ensemble beyond what one would get from using climatology-based spread. Values <1 mean higher confidence / less uncertainty. Values >1 are rare, and indicate an unusual and curious form of information content, where the spread of possible outcomes is larger than one would get from climatology.
How should users interpret the average and spread fields together, in words?
→ The simple answer is shown below in Table 1. The visual examples there are actual product snapshots, for forecast weeks 3-5, for various locations around the world; focus on the middle of each graphic panel. Examples are for 2m temperature, but the concepts apply equally to the other variables.
Colour-fill | Spread Metric <1 (green contours; lower values mean higher confidence) | Spread Metric ~1 (black contours / transparent grey shading) | Spread Metric >1 (purple contours; higher values mean lower confidence) |
---|---|---|---|
Anomaly > 0 | Probably above average, and moderate/high confidence in the forecast anomaly | Probably above average, but low confidence in the actual value | Probably above average, but confidence in the actual value is very low |
Anomaly ~ 0 | Moderate/high confidence in climatologically average conditions | No signal of anything. Provide a forecast based on climatology. | Very uncertain indeed! |
Anomaly < 0 | Probably below average, and moderate/high confidence in the forecast anomaly | Probably below average, but low confidence in the actual value | Probably below average, but confidence in the actual value is very low |
Table 1: How to interpret different signal combinations on the new charts (with a 2m temperature example for each class)
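For readers who prefer explicit logic to prose, the short function below maps a median anomaly and an interdecile metric onto the nine wording classes of Table 1. The thresholds used (a 0.5°C near-normal white band and the 0.9/1.1 grey-shading band for temperature) are taken from the tables on this page; the function itself is only an interpretive sketch, not part of the product.

```python
def interpret(median_anomaly, interdecile_metric, white_threshold=0.5):
    """Return the Table 1 wording class for one grid point (2m temperature
    thresholds assumed: 0.5 degC white band, 0.9/1.1 grey-shading band)."""
    if abs(median_anomaly) < white_threshold:
        sign = "Anomaly ~ 0"
    elif median_anomaly > 0:
        sign = "Anomaly > 0"
    else:
        sign = "Anomaly < 0"

    if interdecile_metric < 0.9:
        spread = "spread metric < 1: moderate/high confidence"
    elif interdecile_metric <= 1.1:
        spread = "spread metric ~ 1: low confidence (spread similar to climatology)"
    else:
        spread = "spread metric > 1: very low confidence (spread exceeds climatology)"
    return f"{sign}; {spread}"

print(interpret(+1.8, 0.7))   # -> "Anomaly > 0; spread metric < 1: ..."
```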
If the forecast anomaly is relatively large, does that by itself mean confidence is higher, even if the spread metric does not suggest high confidence?
→ No. In that scenario confidence that the anomaly will be non-zero may well be higher, but using the spread metric in the ways described above remains fully valid.
What would you expect a pure climate change signal (i.e. warming) to look like on these charts?
→ The central panel on the top row in Table 1 represents that.
I would expect the spread metric to look rather noisy. How did you overcome that?
→ Whilst the colour-fill anomaly fields are plotted at full resolution (O320, ~36 km), the resolution used for the spread metric is different: we use conservative interpolation to upscale it to O80 (~144 km) before plotting (a simplified illustration of this coarsening step is sketched below). Although this upscaling can hide some local detail, tests suggest the impact on user interpretation is positive, and at extended ranges relying on local detail is generally questionable anyway.
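The snippet below illustrates the coarsening idea with a simple area-weighted block average on a regular latitude-longitude stand-in grid. The operational step uses conservative interpolation between ECMWF's octahedral grids (O320 to O80); this regular-grid version, with made-up grid sizes, only approximates that idea.

```python
import numpy as np

def conservative_coarsen(field, lats, factor=4):
    """Area-weighted block average of a regular lat-lon field, as a simple
    stand-in for conservative remapping between nested grids. The cell-area
    weights reduce to cos(latitude) on a regular grid."""
    ny, nx = field.shape
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones((ny, nx))
    fw = (field * w).reshape(ny // factor, factor, nx // factor, factor)
    ww = w.reshape(ny // factor, factor, nx // factor, factor)
    return fw.sum(axis=(1, 3)) / ww.sum(axis=(1, 3))

# Illustrative use on a synthetic ~0.35 degree grid, coarsened by a factor of 4
lats = np.linspace(89.825, -89.825, 512)
spread_metric = np.random.default_rng(1).normal(1.0, 0.2, size=(512, 1024))
coarse = conservative_coarsen(spread_metric, lats, factor=4)
print(spread_metric.shape, "->", coarse.shape)
```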
Explain what is done differently for precipitation.
→ Following experimentation, and in order to avoid division-by-zero errors, and indeed excessive, meaningless noise in arid regions, we elected to add 1 mm to the 90th percentiles of both the forecast and the model climate prior to computing the interdecile metric (see the sketch below). This means that if the forecasts are uniformly dry in a region which is almost always dry at the specified time of year (in the model climate), then the interdecile metric will be approximately (1-0)/(1-0), that is very close to 1, and the output will look like the central cell of the nine categories shown in Table 1. The guideline there, to "provide a forecast based on climatology", remains correct. Any detrimental impact of this adjustment in wetter times of year or wetter locations should generally be small, especially given that anomaly shading for precipitation does not start until +/-4 mm.
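A minimal sketch of the adjusted calculation, assuming the same quantile definitions as above; the function name and sample sizes are illustrative, not the operational code.

```python
import numpy as np

def precip_interdecile_metric(ens_fc_mm, model_clim_mm, offset_mm=1.0):
    """Interdecile metric for weekly precipitation totals, with 1 mm added to
    both 90th percentiles (as described above) to avoid division by zero and
    spurious noise in arid regions."""
    fc_p10, fc_p90 = np.percentile(ens_fc_mm, [10, 90])
    cl_p10, cl_p90 = np.percentile(model_clim_mm, [10, 90])
    return ((fc_p90 + offset_mm) - fc_p10) / ((cl_p90 + offset_mm) - cl_p10)

# Uniformly dry forecast in a climatologically dry region: metric is ~1,
# i.e. the central cell of Table 1 ("provide a forecast based on climatology").
print(precip_interdecile_metric(np.zeros(101), np.zeros(660)))   # -> 1.0
```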
What anomalies are plotted for each variable, and what contour settings are used for the spread metric?
→ These vary according to which of the four variables is plotted, and are shown in Table 2.
- For the average component the scalings are similar to those used on pre-existing weekly mean anomaly charts, though not identical. The key difference is that we use fixed cut-off values for the near-normal white zone in the centre of the range (on pre-existing charts a statistical test is used instead).
- For the spread component we use different contour thicknesses and styles, as well as different colours (as in Table 1), with labels to help. Three of the four variables use the same settings. Precipitation uses different values to avoid the clutter that can arise because its distribution shape is ordinarily very different from Gaussian.
Variable | Average (colour fill) | Spread <1 (contours) | Spread ~1 (contours) | Spread >1 (contours) |
---|---|---|---|---|
2m temperature | (°C) | 0.2 (solid), 0.3 (dot), 0.5 (dash), 0.7 (solid) | 0.9 (solid), 1.1 (solid, thick); transparent grey shading in between | 1.3 (solid), 1.5 (dash), 1.7 (dot), 2 (solid); all thick |
Surface temperature | (°C) | 0.2 (solid), 0.3 (dot), 0.5 (dash), 0.7 (solid) | 0.9 (solid), 1.1 (solid, thick); transparent grey shading in between | 1.3 (solid), 1.5 (dash), 1.7 (dot), 2 (solid); all thick |
Mean sea level pressure | (hPa) | 0.2 (solid), 0.3 (dot), 0.5 (dash), 0.7 (solid) | 0.9 (solid), 1.1 (solid, thick); transparent grey shading in between | 1.3 (solid), 1.5 (dash), 1.7 (dot), 2 (solid); all thick |
Accumulated precipitation | (mm) | 0.1 (dot), 0.4 (dash), 0.7 (solid) | 0.9 (solid), 1.2 (solid, thick); transparent grey shading in between | 1.5 (solid), 2 (dash), 4 (dot); all thick |
Meaning | Outcomes above and below this value are equally likely (the value plotted is the median anomaly) | The lower the value, the smaller the forecast spread and therefore the higher the confidence | Spread similar to climatology, so low confidence | Spread greater than climatology, so very low confidence overall, accentuated for higher values |
Table 2: Divisions used for colour filling (average) and contouring (spread).
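For anyone scripting against these settings, the spread-metric contour levels and line styles of Table 2 could be encoded roughly as below. The dictionary keys and structure are assumptions; the operational plotting configuration may differ.

```python
# Illustrative encoding of the Table 2 spread-metric contours: (level, style) pairs.
DEFAULT_SPEC = {
    "below_1": [(0.2, "solid"), (0.3, "dot"), (0.5, "dash"), (0.7, "solid")],
    "near_1":  [(0.9, "solid"), (1.1, "solid, thick")],   # grey shading between these
    "above_1": [(1.3, "solid"), (1.5, "dash"), (1.7, "dot"), (2.0, "solid")],  # all thick
}
PRECIP_SPEC = {
    "below_1": [(0.1, "dot"), (0.4, "dash"), (0.7, "solid")],
    "near_1":  [(0.9, "solid"), (1.2, "solid, thick")],   # grey shading between these
    "above_1": [(1.5, "solid"), (2.0, "dash"), (4.0, "dot")],  # all thick
}
SPREAD_CONTOURS = {
    "2m temperature": DEFAULT_SPEC,
    "surface temperature": DEFAULT_SPEC,
    "mean sea level pressure": DEFAULT_SPEC,
    "accumulated precipitation": PRECIP_SPEC,
}
```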
How were the definitions for the white ranges decided upon?
→ The approach was pragmatic. On the one hand we do not want anomalies that have little or no practical significance for users to show up; on the other we do not want plots that are almost perpetually white at longer leads. For the temperature variables, anomalies of magnitude <0.5°C will not be important for most users. For mean sea level pressure, anomalies of 1 hPa are admittedly very small in the extratropics, but using this threshold does allow signals to show up in the tropics. For rainfall, small adjustments to the white range - e.g. between 4 mm and 5 mm - had a large impact. Using 4 mm was a compromise that provided some colour on plots for longer leads, and in temperate climates such a value is not without meaning, equating to about 17 mm of rainfall per month (4 mm per week × ~4.35 weeks per month).
For the grey shading why use 1.2 as the upper limit for precipitation, but 1.1 for other variables, when a lower limit of 0.9 is used for all?
→ This was partly pragmatic: when the value was set to 1.1 for precipitation, undesirable noise appeared on plots that seemed hard to justify on any physical grounds, and this noise is much reduced with a value of 1.2. Probably we are seeing sampling sensitivity to the exact value of the 10th wettest member, within the wet tail of the distribution (see the illustration below). If that member is drier than the re-forecast 90th percentile, then we are in a portion of the climatological distribution that is more populated (i.e. where the cumulative distribution function has a steeper slope), which seemingly reduces the noise enough that we can reasonably use 0.9, as for the other variables, rather than a lower value such as 0.8.
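The toy Monte Carlo below, using a synthetic skewed (gamma-distributed) rainfall climate and a "no-signal" ensemble drawn from the same distribution, illustrates the kind of sampling behaviour described: by chance alone the metric exceeds 1.1 noticeably more often than it exceeds 1.2, consistent with using the higher upper limit for precipitation. The distribution parameters and sample sizes are illustrative assumptions, not a formal justification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic, skewed weekly rainfall totals for the model climate (illustrative sizes).
clim = rng.gamma(shape=1.2, scale=8.0, size=660)
cl_p10, cl_p90 = np.percentile(clim, [10, 90])

# Repeatedly draw a "no-signal" 101-member ensemble from the same distribution
# and compute the precipitation interdecile metric (with the 1 mm offset).
metrics = []
for _ in range(2000):
    ens = rng.gamma(shape=1.2, scale=8.0, size=101)
    fc_p10, fc_p90 = np.percentile(ens, [10, 90])
    metrics.append(((fc_p90 + 1.0) - fc_p10) / ((cl_p90 + 1.0) - cl_p10))
metrics = np.array(metrics)

print("no-signal cases with metric > 1.1:", (metrics > 1.1).mean())
print("no-signal cases with metric > 1.2:", (metrics > 1.2).mean())
```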