...

 

Section


Column

Introduction

The ECMWF operational ensemble forecasts for the western Mediterranean region exhibited high uncertainty while Hurricane Nadine was slowly moving over the eastern North Atlantic in September 2012. Interaction with an Atlantic cut-off low produced a bifurcation in the ensemble and significant spread, affecting both the track of Hurricane Nadine and the synoptic conditions downstream.

The HyMEX (Hydrological cycle in Mediterranean eXperiment) field campaign was also underway and forecast uncertainty was a major issue for planning observations during the first special observations period of the campaign.

This case study examines the forecasts of the interaction between Nadine and the Atlantic cut-off low in the context of ensemble forecasting. It explores the scientific rationale for using ensemble forecasts, why they are necessary and how they can be interpreted, particularly in the "real world" situation of forecasting for an observational field campaign.

 

Panel
titleThis case study is based on the following paper which is recommended reading

Pantillon, F., Chaboureau, J.-P. and Richard, E. (2015), 'Vortex-vortex interaction between Hurricane Nadine and an Atlantic cutoff dropping the predictability over the Mediterranean', Q. J. R. Meteorol. Soc., http://onlinelibrary.wiley.com/doi/10.1002/qj.2635/abstract

In this case study

In the exercises for this case study we will:

  • Study the development of Hurricane Nadine and the interaction with the Atlantic cut-off low using the ECMWF analyses.
  • Study the performance of the ECMWF high resolution (HRES) deterministic forecast from that time.
  • Use the operational ensemble forecast to look at the forecast spread and understand the uncertainty downstream of the interaction.
  • Compare a reforecast using the current (May 2016) ECMWF operational ensemble with the 2012 ensemble forecasts.
  • Use principal component analysis (PCA) with clustering techniques (see Pantillon et al) to characterize the behaviour of the ensembles.
  • See how forecast products were used during the HyMEX field campaign.


Column
width27%


Panel

Table of contents

Table of Contents
maxLevel1


Note

If the plotting produces thick contour lines and large labels, ensure that the environment variable LC_NUMERIC="C" is set before starting metview.




Note
titleCaveat on use of ensembles for case studies

In practice, many cases are aggregated in order to evaluate the forecast behaviour of the ensemble. However, it is always useful to complement such assessments with case studies of individual events, like the one in this exercise, to get a more complete picture of IFS performance and to identify weaker aspects that need further exploration.

Obtaining the exercises

The exercises described below are available as a set of Metview macros with the accompanying data. This is available as a downloadable tarfile for use with Metview (if installed). It is also available as part of the OpenIFS/Metview virtual machine, which can be run on different operating systems.

For more details of the OpenIFS virtual machine and how to get the workshop files, please contact: openifs-support@ecmwf.int.

ECMWF operational forecasts

At the time of this case study in 2012, ECMWF operational forecasts consisted of:

...

At the time of this workshop (in 2016), the ECMWF operational forecasts had been upgraded compared to 2012 and consisted of:

...

Please follow this link to see more details on changes to the ECMWF IFS forecast system (http://www.ecmwf.int/en/forecasts/documentation-and-support/changes-ecmwf-model)

Virtual machine

If using the OpenIFS/Metview virtual machine with these exercises, the recommended memory is at least 6 GB; the minimum is 4 GB. If using 4 GB, do not use more than 2 parameters per plot.

These exercises use a relatively large domain with high resolution data. Some of the plotting options can therefore require significant amounts of memory. If the virtual machine freezes when running Metview, please restart the VM and increase the memory assigned to it.

Starting up metview

To begin:

Code Block
titleType the following command in a terminal window
metview &


Info

Please enter the folder 'openifs_2016' to begin working.

Saving images and printing

 To save images during these exercises for discussion later, you can either use:

...

or use the following command to take a 'snapshot' of the screen:

Code Block
languagebash
titleCommand for screen snapshot
ksnapshot

Exercise 1. The ECMWF analysis

...

 

...

titleLearning objectives

...

Hurricane Nadine and the cut-off low

Panel
bgColorwhite
titleBGColorwhite
titleMetview icons

For these tasks, use the metview icons in the row labelled 'Analysis'

Image

an_1x1.mv : this plots horizontal maps of parameters from the ECMWF analyses overlaid on one plot.

an_2x2.mv : this plots horizontal maps of parameters from the ECMWF analyses four plots to a page (two by two).

an_xs.mv : this plots vertical cross-sections of parameters from the ECMWF analyses.

Task 1: Mean-sea-level pressure and track

Right-click the mouse button on the 'an_1x1.mv' icon and select the 'Visualise' menu item (see figure right)

After a pause, this will generate a map showing mean-sea-level pressure (MSLP).

Image

Plot the track of Nadine: drag the mv_track.mv icon onto the map.

This will add the track of Hurricane Nadine. Although the full track of the tropical storm is shown from 10-09-2012 to 04-10-2012, the ECMWF analyses (for the purpose of this study) only cover 15-09-2012 to 25-09-2012.

In the plot window, use the play button in the animation controls  to animate the map and follow the development and track of Hurricane Nadine.

...

Warning
titleClose unused plot windows!

Please close any unused plot windows if using a virtual machine. This case study uses high resolution data over a relatively large domain. Multiple plot windows can therefore require significant amounts of computer memory which can be a problem for virtual machines with restricted memory.

Task 2: MSLP and 500hPa geopotential height

This task creates Figure 2. from Pantillon et al.

...

Panel
titlePlot mslp and z500

With the edit window open, find the line that defines 'plot1':

Code Block
languagebash
#Define plot list (min 1- max 4)
plot1=["mslp"]                      # use square brackets when overlaying multiple fields per plot

Change this line to:

Code Block
plot1=["z500.s","mslp"]

The suffix '.s' means plot the 500hPa geopotential as a shaded plot instead of using contours (this style is not available for all fields).

...

Click the play button and then animate the map that appears.

 


Panel
borderColorred
titleQuestions

Compare an animation of the z500 and mslp fields with Figure 1. from Pantillon et al.

1. When does the cut-off low form (see z500)?

2. How close do Nadine and the cut-off low get in the analyses?

3. Instead of z500, plot the PV at 320K (similar to Fig.13 in Pantillon). What is different about the upper level structure of Nadine and the cut-off low?

Plot PV at 320K

Change the value of "plot1" again to animate the PV at 320K, plot1=["pv320K"]   (similar to Figure 13 in Pantillon et al).

You might add the mslp or z500 fields to this plot, e.g. plot1=["pv320K","z500"] (don't use z500.s, as that would plot shading rather than contours).

Or produce two plots, one with PV at 320K, the other with z500 or MSLP and put them side by side on the screen.


Panel
borderColorred
titleQuestions

From the animation of the z500 and mslp fields: (as in Figure 1. from Pantillon et al.)

Q. When does the cut-off low form (see z500)?
Q. From the PV at 320K (and z500), what is different about the upper level structures of Nadine and the cut-off low? (also look at the cross-section plot)

Task 3: Changing geographical area

Right-click on Right-click the mouse button on the 'an_1x1.mv' icon and select the select  'Edit' menu item.

In the edit window that appears

Panel
bgColorwhite
titleBGColorlightlightgrey
titleThree map types are available, each covering a different area


Code Block
languagebash
titleChanging geographical area
#Map type: 0=Atl-an, 1: Atl-fc, 2: France 
mapType=0

With mapType=0, the map covers a large area centred on the Atlantic suitable for plotting the analyses and track of the storm (this area is only available for the analyses).

With mapType=1, the map also covers the Atlantic but a smaller area than for the analyses. This is because the forecast data in the following exercises does not cover as large a geographical area as the analyses.

With mapType=2, the map covers a much smaller region centred over France.

...

Animate the storm on this smaller geographical map.

Task 4: Wind fields, sea-surface temperature (SST)

The 'an_2x2.mv' icon allows for plotting up to 4 separate figures on a single frame. This task uses this icon to plot multiple fields.

Right-click on the 'an_2x2.mv' icon and select the 'Edit' menu item.

Code Block
languagebash
titleMultiple plots per page
#Define plot list (min 1- max 4)
plot1=["mslp"]
plot2=["wind10"]
plot3=["speed500","z500"]
plot4=["sst"]

Click the play button at the top of the window to run this macro with the existing plots as shown above.

Each plot can be a single field or an overlay of different fields, as in the an_1x1.mv macro.

Wind parameters can be shown either as arrows or as wind flags ('barbs') by adding '.flag' to the end of the variable name, e.g. "wind10.flag".

Info

Animating. If only one field on the 2x2 plot animates, make sure the menu item 'Animation -> Animate all scenes' is selected.

Plotting may be slow depending on the computer used. This reads a lot of data files.


Panel
borderColorred
titleQuestions

Q. What do you notice about the SST field?

Task 5: Satellite images

Open the folder 'satellite' by double-clicking it (scroll the window if it is not visible).

...

Use the an_1x1.mv and/or the an_2x2.mv macros to compare the ECMWF analyses with the satellite images.

 

Task 6: Cross-sections

The last task in this exercise is to look at cross-sections through Hurricane Nadine and the cut-off low.

Right click on the icon 'an_xs.mv', select 'Edit' and push the play button.

This generates a plot with a map of MSLP, a red line, and underneath it a cross-section plot along that red line.

The default plot shows potential vorticity (PV), wind vectors and potential temperature roughly through the centre of the Hurricane and the cut-off low. The red line on the map of MSLP shows the location of the cross-section.

 

Panel
borderColorred

Q. Looking at the PV field, how do the vertical structures of Nadine and the cut-off low differ?

Changing forecast time

Cross-section data is only available every 24hrs.

This means the 'steps' value in the macros is only valid for the times:  [2012-09-20 00:00], [2012-09-21 00:00], [2012-09-22 00:00], [2012-09-23 00:00], [2012-09-24 00:00], [2012-09-25 00:00]

To change the date/time of the plot, edit the macro and change the line:

Code Block
languagebash
steps=[2012-09-22 00:00]
Changing fields

A smaller set of fields is available for cross-sections: temperature (t), potential temperature (pt), relative humidity (r), potential vorticity (pv), vertical velocity (w), wind speed (speed; sqrt(u*u+v*v)) and wind vectors (wind3).
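For reference, the derived wind-speed field is just sqrt(u*u+v*v) applied point by point. Below is a minimal standalone sketch with made-up u and v values (it is not part of the exercise macros):

Code Block
languagepy
titleWind speed from u and v components (illustrative sketch, synthetic data)
import numpy as np

# Made-up u and v wind components (m/s) on a tiny grid, for illustration only
u = np.array([[5.0, -3.0], [10.0, 0.0]])
v = np.array([[0.0,  4.0], [-2.0, 7.5]])

# speed = sqrt(u*u + v*v) at every grid point
speed = np.sqrt(u**2 + v**2)
print(speed)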

Changing cross-section location
Code Block
languagebash
#Cross section line [ South, West, North, East ]
line = [30,-29,45,-15]

The cross-section location (red line) can be changed in this macro by defining the end points of the line as shown above.

Remember that if the forecast time is changed, the storm centres will move and the cross-section line will need to be repositioned to follow specific features. This is not computed automatically, but must be changed by altering the coordinates above.

Panel
borderColorred
titleQuestions

Looking at the PV field, how do the vertical structures of Nadine and the cut-off low differ?

 

 

Info
iconfalse

 (tick)  This completes the first exercise.

You have learnt how to use the macros, alter fields for plotting and animate fields.  The next exercises use similar macros.

 

Exercise 2: The operational HRES forecast

Recap

The ECMWF operational deterministic forecast is called HRES. At the time of this case study, the model ran with a spectral resolution of T1279, equivalent to a grid spacing of about 16 km.

Only a single forecast is run at this resolution as the computational resources required are demanding. The ensemble forecasts are run at a lower resolution.

Before looking at the ensemble forecasts, first understand the performance of the operational HRES forecast from that time.

Available forecast

Data is provided for a single 5 day forecast starting from 20th Sept 2012, as used in the paper by Pantillon et al.

HRES data is provided at the same resolution as the operational model, in order to give the best representation of the hurricane and cut-off low interaction. This may mean that some plotting will be slow.

...

Available parameters

A new parameter is total precipitation: tp.

The fields (parameters) available in the analyses are also available in the forecast data.

Questions to consider

Panel
bgColorlightblue
  1. How does the HRES forecast compare to analysis and satellite images?
  2. Was it a good or bad forecast? Why?

...

Available plot types

Panel

Image

For this exercise, you will use the metview icons in the row labelled 'HRES forecast' as shown above.

hres_rmse.mv             : this plots the root-mean-square-error growth curves for the operational HRES forecast compared to the ECMWF analyses.

hres_1x1.mv & hres_2x2.mv    : these work in a similar way to the same icons used in the previous task where parameters from a single lead time can be plotted either in a single frame or 4 frame per page.

hres_to_an_diff.mv     : this plots a single parameter as a difference map between the operational HRES forecast and the ECMWF analysis. Use this to understand the forecast errors.

Forecast performance

Task 1: Forecast error

In this task, we'll look at the difference between the forecast and the analysis by using "root-mean-square error" (RMSE) curves as a way of summarising the performance of the forecast.
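As background, the sketch below (synthetic arrays, independent of hres_rmse.mv) shows how a single RMSE value is formed from a forecast field and the verifying analysis; a simple cosine-of-latitude area weighting is assumed here for illustration.

Code Block
languagepy
titleRoot-mean-square error of a forecast field (illustrative sketch, synthetic data)
import numpy as np

# Synthetic forecast and analysis fields on a small lat-lon grid (values are made up)
lats = np.deg2rad(np.array([30.0, 40.0, 50.0]))
forecast = np.array([[500., 502., 498.], [505., 501., 499.], [497., 503., 500.]])
analysis = np.array([[501., 500., 499.], [503., 502., 498.], [498., 502., 501.]])

# Area weights proportional to cos(latitude), one weight per grid row
w = np.cos(lats)[:, None] * np.ones_like(forecast)

# RMSE: square root of the weighted mean of squared forecast-minus-analysis differences
rmse = np.sqrt(np.sum(w * (forecast - analysis) ** 2) / np.sum(w))
print(f"RMSE = {rmse:.2f}")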

...

Repeat for both geographical regions: mapType=1 (Atlantic) and mapType=2 (France).

Panel
borderColorred
titleQuestions

Q. What do the RMSE curves show?
Q. Why are the curves different between the two regions?

 

Task 2: Compare forecast to analysis

Use the hres_to_an_diff.mv icon and plot the difference map between the HRES forecast and the analysis first for z500 and then mslp (change plot1 from z500 to mslp).

 

Panel
borderColorred
titleQuestions

Q. What differences can be seen?
Q. How well did the forecast position the hurricane and the cut-off N.Atlantic low?

If time: look at other fields to study the behaviour of the forecast, for example jet position, total precipitation (tp) or PV at 320K.

Task 3: Precipitation over France

This task produces a plot similar to Figure 2 in Pantillon et al.

...

As for the analyses, the macros hres_1x1.mv, hres_2x2.mv and hres_xs.mv can be used to plot and animate fields or overlays of fields from the HRES forecast.

Panel
borderColorred
titleQuestions

How does the timing and distribution of the precipitation from the forecast compare to the observations shown in the paper by Pantillon?

Q. Was it a good or bad forecast? Why?

Suggested plots for discussion

The following is a list of parameters and plots that might be useful to produce for later group discussion. Choose a few plots and use both the HRES forecast and the analyses.

...

Panel
borderColorlightblue
bgColorlightlightblue
titleBGColorlightblue
titleIsobaric maps
  • Geopotential at 500hPa + MSLP : primary circulation, Figure 1 from Pantillon et al.
  • MSLP + 10m winds : interesting for Nadine's tracking and primary circulation
  • MSLP + relative humidity at 700hPa + vorticity at 850hPa : low level signature of Nadine and of the disturbance associated with the cut-off low, with mid-level humidity of the systems.
  • Geopotential + temperature at 500hPa : large scale patterns, mid-troposphere position of warm Nadine and the cold Atlantic cut-off
  • Geopotential + temperature at 850hPa : lower level conditions, detection of fronts
  • 320K potential vorticity (PV) + MSLP, and 500hPa relative vorticity (see Fig. 14 in Pantillon) : upper level conditions, upper level jet and the cut-off signature in PV, interaction between Nadine and the cut-off low.
  • Winds at 850hPa + vertical velocity at 700hPa (+MSLP) : focus on moist and warm air in the lower levels and the associated vertical motion. There should not be a strong horizontal temperature gradient around Nadine, and the winds should be stronger for Nadine than for the cut-off.
  • 10m winds + total precipitation (+MSLP) : compare with Pantillon Fig. 2, impact on rainfall over France.

...

Panel
bgColorlightlightblue
titleBGColorlightblue
titleVertical cross-sections
  • Potential temperature + potential vorticity : to characterize the cold core and warm core structures of Hurricane Nadine and the cut-off low.
  • Humidity + vertical motion : another view of the cold core and warm core structures of Hurricane Nadine and the cut-off low.
  • PV + vertical velocity (+ relative humidity) : a classical cross-section to see whether a PV anomaly is accompanied by vertical motion or not.

 

Info
iconfalse

 (tick)  This completes the second exercise.

You have seen how the ECMWF operational HRES forecast of 2012-09-20 00Z performed compared to the analysis. The next exercises look at the ECMWF ensemble.

...

Exercise 3 : The operational ensemble forecasts

Recap

  • ECMWF operational ensemble forecasts treat uncertainty in both the initial data and the model.
  • Initial analysis uncertainty: sampled by use of Singular Vectors (SV) and Ensemble Data Assimilation (EDA) methods. Singular Vectors are a way of representing the fastest growing modes in the initial state.
  • Model uncertainty: sampled by use of stochastic parametrizations. In the IFS this means the 'stochastically perturbed physical tendencies' (SPPT) scheme and the 'stochastic kinetic energy backscatter' (SKEB) scheme.
  • Ensemble mean : the average of all the ensemble members. Where the spread is high, small scale features can be smoothed out in the ensemble mean.
  • Ensemble spread : the standard deviation of the ensemble members; it represents how different the members are from the ensemble mean (a minimal numerical sketch follows this list).
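To make these definitions concrete, here is a minimal standalone sketch (synthetic numbers, not IFS data) of how the ensemble mean and spread are computed from a set of member values at a single grid point:

Code Block
languagepy
titleEnsemble mean and spread (illustrative sketch, synthetic data)
import numpy as np

# Synthetic ensemble: a control plus 50 perturbed members, MSLP (hPa) at one grid point
rng = np.random.default_rng(0)
members = 1005.0 + 3.0 * rng.standard_normal(51)

ens_mean = members.mean()    # ensemble mean: average over all members
ens_spread = members.std()   # ensemble spread: standard deviation about the mean

print(f"mean = {ens_mean:.1f} hPa, spread = {ens_spread:.1f} hPa")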

The ensemble forecasts

In this case study, there are two operational ensemble datasets and additional datasets created with the OpenIFS model, running at lower resolution, where the initial and model uncertainty are switched off in turn. The OpenIFS ensembles are discussed in more detail in later exercises. Please see below for more details.

An ensemble forecast consists of:

  • Control forecast (unperturbed)
  • Perturbed ensemble members. Each member uses slightly different initial conditions and includes model uncertainty perturbations.

2012 Operational ensemble

ens_oper: This dataset is the operational ensemble from 2012 and was used in the Pantillon et al. publication. A key feature of this ensemble is the use of a climatological SST field (you will have seen this in the earlier tasks!).

2016 Operational ensemble

ens_2016: This dataset is a reforecast of the 2012 event using the ECMWF operational ensemble of March 2016. Two key differences between the 2016 and 2012 operational ensembles are: higher horizontal resolution, and coupling of the NEMO ocean model to provide SST from the start of the forecast.

Note that the analysis was not rerun for 20-Sept-2012, so the reforecast using the 2016 ensemble starts from the original 2012 analyses. Also, only 10 ensemble data assimilation (EDA) members were in use in 2012, whereas 25 are in use for the 2016 operational ensemble, so each EDA member is used multiple times for this reforecast. This will have an impact on the spread and clustering seen in the tasks in this exercise.

Ensemble exercise tasks

Visualising ensemble forecasts can be done in various ways. During this exercise, we will use a number of visualisation techniques in order to understand the errors and uncertainties in the forecast.

Key parameters: MSLP, z500 and total precipitation (tp) over France. We suggest concentrating on viewing these fields. If time allows, visualize other parameters (e.g. pv320K).

Image

Available plot types

Panel

Image

For these exercises please use the Metview icons in the row labelled 'ENS'.

ens_rmse.mv : this is similar to the hres_rmse.mv in the previous exercise. It will plot the root-mean-square-error growth for the ensemble forecasts.

ens_to_an.mv : this will plot (a) the mean of the ensemble forecast, (b) the ensemble spread, (c) the HRES deterministic forecast and (d) the analysis for the same date.

ens_to_an_runs_spag.mv : this plots a 'spaghetti map' for a given parameter for the ensemble forecasts compared to the analysis. Another way of visualizing ensemble spread.

stamp.mv : this plots all of the ensemble forecasts for a particular field and lead time. Each forecast is shown in a stamp sized map. Very useful for a quick visual inspection of each ensemble forecast.

stamp_diff.mv : similar to stamp.mv except that for each forecast it plots a difference map from the analysis. Very useful for quick visual inspection of the forecast differences of each ensemble forecast.

 

Additional plots for further analysis:

Image

pf_to_cf_diff.mv : this useful macro allows two individual ensemble forecasts to be compared to the control forecast. As well as plotting the forecasts from the members, it also shows a difference map for each.

ens_to_an_diff.mv : this will plot the difference between the ensemble control, ensemble mean or an individual ensemble forecast member and the analysis for a given parameter.

Group working

If working in groups, each group could follow the tasks below with a different ensemble forecast. e.g. one group uses the 'ens_oper', another group uses 'ens_2016' and so on.

Choose your ensemble dataset by setting the value of 'expId': either 'ens_oper' or 'ens_2016' for this exercise.

One of the OpenIFS ensembles could also be used but it's recommended one of the operational ensembles is studied first.

Code Block
languagebash
titleEnsemble forecast datasets available in the macros
#The experiment. Possible values are:
# ens_oper = operational ENS
# ens_2016 = 2016 operational ENS
# ens_both = OpenIFS (EDA+SV+SPPT+SKEB)
# ens_initial = OpenIFS (EDA+SV)
# ens_model = OpenIFS (SPPT+SKEB) 
expId="ens_oper"

Ensemble forecast performance

In these tasks, the performance of the ensemble forecast is studied.

Panel
borderColorred
titleOverall questions to consider

Q. How does the ensemble mean MSLP and Z500 fields compare to the HRES forecast and analysis?
Q. Examine the initial diversity in the ensemble and how the ensemble spread and error growth develop. What do the extreme forecasts look like?
Q. Are there any members that consistently provide a better forecast?
Q. Comparing the two ensembles, ens_oper and ens_2016, which is the better ensemble for this case study?

Task 1: RMSE "plumes"

This is similar to task 1 in exercise 2, except now the RMSE curves for all the ensemble members from a particular forecast will be plotted.

Right-click the ens_rmse.mv icon, select 'Edit' and plot the curves first for 'mslp' and then for 'z500' (change the param field to mslp, run the macro, then change it to z500 and run again).

Change 'expId' to your choice of ensemble.

Code Block
languagebash
titleMake sure 'clustering' is off for this task!
clustersId="off"

Clustering will be used in later tasks.

Panel
borderColorred
titleQuestions

Q. How do the HRES, ensemble control forecast and ensemble mean compare?
Q. How do the ensemble members behave, do they give better or worse forecasts?
Q. Compare with ens_2016 ensemble, or with a different group. How does the performance of the HRES and control forecast change?

There might be some evidence of clustering in the ensemble plumes.

...

  • Explore the plumes from other variables.
  • Do you see the same amount of spread in RMSE from other pressure levels in the atmosphere?

Task 2: Ensemble spread

In the previous task, we have seen that introducing uncertainty into the forecast, by starting from different initial conditions and enabling the stochastic parametrizations in the IFS, can result in significant differences in the RMSE (for this particular case and geographical region).

This task will explore these differences in more detail, in particular by looking at the 'ensemble spread'.

Use the ens_to_an.mv icon and first plot the MSLP and then z500 (set param to mslp, run the macro, then change param to z500 and run again).

This will produce plots showing: the mean of all the ensemble forecasts, the spread of the ensemble forecasts, the operational HRES deterministic forecast and the analysis.

Change 'expId' if required.

Animate this plot to see how the spread grows.

This macro can also be used to look at clusters of ensemble members; it will be used later in the clustering tasks. For this task, make sure all the members of the ensemble are used.

Code Block
languagebash
titleUse all ensemble members in this task:
#ENS members (use ["all"] or a list of members like [1,2,3]
members=["all"]        # e.g. [1,2,3,4,5] or ["all"] or ["cl.example.1"]


Panel
borderColorred
titleQuestions

Q. How does the mean of the ensemble forecasts compare to the HRES & analysis?
Q. Does the ensemble spread capture the error in the forecast?
Q. What other comments can you make about the ensemble spread?

If time:

  • set the 'members=' option to change the number of members in the spread plots.
    e.g. try a "reduced" ensemble by only using the first 5 ensemble members: "members=[1,2,3,4,5]".

Task 3: Spaghetti plots - another way to visualise spread

A "spaghetti" A "spaghetti" plot is where a single contour of a parameter is plotted for all ensemble members. It is another way of visualizing the differences between the ensemble members and focussing on features.

Use the ens_to_an_runs_spag.mv icon. Plot and animate either the MSLP field, using the default contour value, or the z500 field, using a suitable choice of contour level. Find a value that highlights the low pressure centres. Note that not all members may reach the low pressure value set by the contour.

Note that this macro may animate slowly because of the computations required.

Info

If the contour value is not set correctly, no lines will appear. Use the 'cursor data' icon at the top of the plot to inspect the data values.

The red contour line shows the control forecast of the ensemble.

Experiment with changing the contour value and (if time) plotting other fields.

Task 4: Visualise ensemble members and differences

So far we have been looking at reducing the information in some way to visualise the ensemble.

Stamp maps are used to visualise all the ensemble members as normal maps. These are small, stamp-sized contour maps plotted for each ensemble member using a small set of contours.

There are two icons to use: stamp.mv and stamp_diff.mv.

Info

Use 'mapType=1' to see the larger geographical area (please note that due to data volume restrictions, this mapType only works for the MSLP parameter).


Use stamp.mv to first plot the MSLP and then z500 fields in the ensemble (set param='mslp', run the macro, then change to 'z500' and run again).

The stamp map is slow to plot as it reads a lot of data. Rather than animate each forecast step, a particular date can be set by changing the 'steps' variable.

Code Block
languagebash
titleSet date/time to 24-09-2012 00Z
#Define forecast steps
steps=[2012-09-24 00:00,"to",2012-09-24 00:00,"by",6]

Make sure clustersId="off" for this task.

Precipitation over France

Use stamp.mv and plot total precipitation ('tp') over France (mapType=2) for 00Z 24-09-2012 (compare with Figure 2 in Pantillon).

Note, stamp_diff.mv cannot be used for 'tp' as there is no precipitation data in the analyses.

Difference stamp maps

Use the stamp_diff.mv plot to look at the differences between the ensemble members and the analysis. It can be easier to understand the difference in the ensembles by using difference stamp maps.

Panel
borderColorred

Q. Using the stamp and stamp difference maps, study the ensemble. Identify which ensemble members produce "better" forecasts.
Q. Can you see any distinctive patterns in the difference maps?

Compare ensemble members to analysis

After visualizing the stamp maps, it can be useful to animate a comparison of individual ensemble members to the analyses.

ens_to_an_diff.mv and pf_to_cf_diff.mv can be used to compare ensemble members.

Panel
titleUse ens_to_an_diff to compare an ensemble member to the analysis

To animate the difference in MSLP between an individual ensemble member (here member 30) and the analysis, edit the lines:

Code Block
param="mslp"
ensType="pf30"

To compare the control forecast:

Code Block
ensType="cf"


Further analysis using ensembles
Panel
titleUse pf_to_cf_diff.mv to compare two ensemble members to the control forecast

This will show the forecasts from the ensemble members and also their difference with the ensemble control forecast.

To animate the difference in MSLP with ensemble members '30' and '50', set:

Code Block
param="mslp"
pf=[30,50]



Panel
titleSea-surface temperature

Compare the SST parameter used for the ens_oper and ens_2016 ensemble forecasts. The 2016 reforecast of this case study used a coupled ocean model, unlike the 2012 ensemble and HRES forecast, which used a climatological SST for the first 5 days.


Panel
titleCross-sections of ensemble members

To show a cross-section of a particular ensemble member, use the macro 'ens_xs.mv'.

This works in the same way as the an_xs.mv and hres_xs.mv macros.


Panel
titleIdentifying sensitive region for better forecasts

Find ensemble members that appear to produce a better forecast and look to see how the initial development in these members differs.

  • Select 'better' forecasts using the stamp plots and use ens_to_an.mv to modify the list of ensemble plots.
  • Use pf_to_cf_diff and ens_to_an_diff to compute the differences between these perturbed ensemble member forecasts and the control forecast or the analyses.
Panel
borderColorred

Q. Can you tell which area is more sensitive for the forecast?


 

Task 5:  Cumulative distribution function

Recap

The probability distribution function of the normal distribution
or Gaussian distribution. The probabilities expressed as a
percentage for various widths of standard deviations (σ)
represent the area under the curve.

Image

Figure from Wikipedia.


Cumulative distribution function for a normal
distribution with varying standard deviation (σ)

Image

Figure from Wikipedia.

Cumulative distribution function (CDF)

The figures above illustrate the relationship between a normal distribution and its associated cumulative distribution function. The CDF is constructed from the area under the probability density function.

The CDF gives the probability that a value on the curve will be found to have a value less than or equal to the corresponding value on the x-axis. For example, in the figure, the probability for values less than or equal to X=0 is 50%.

The shape of the CDF curve is related to the shape of the normal distribution. The width of the CDF curve is directly related to the value of the standard deviation of the probability distribution function.

For an ensemble, the width is therefore related to the 'ensemble spread'.

For a forecast ensemble where all values were the same, the CDF would be a vertical straight line.
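A minimal standalone sketch (synthetic station values, not the cdf.mv macro) of how an empirical CDF is built from the ensemble: sort the member values and step the cumulative probability from 0 to 1. A narrow step indicates small spread, a wide one large spread.

Code Block
languagepy
titleEmpirical CDF from ensemble values (illustrative sketch, synthetic data)
import numpy as np
import matplotlib.pyplot as plt

# Synthetic MSLP values (hPa) at one station from a 51-member ensemble
rng = np.random.default_rng(2)
values = np.sort(1008.0 + 4.0 * rng.standard_normal(51))

# Empirical CDF: fraction of members with a value <= x
prob = np.arange(1, values.size + 1) / values.size

plt.step(values, prob, where="post")
plt.xlabel("MSLP (hPa)")
plt.ylabel("P(value <= x)")
plt.show()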

Plot the CDFs

Image

This exercise uses the cdf.mv icon. Right-click, select 'Edit' and then:

  • Plot the CDF of MSLP for Toulouse for your choice of ensemble
  • Find a latitude/longitude point in the area of intense precipitation on 12Z 24/9/2012 (see Figure 2(c) Pantillon et al) and plot the CDF for MSLP (set station=[lat,lon] in the macro cdf.mv)

Note that only MSLP, 2m temperature (t2) and 10m wind-speed (speed10) are available for the CDF.

Make sure useClusters='off'.

Panel
borderColorred

Q. Compare the CDF from the different forecast ensembles; what can you say about the spread?

Exercise 4: Cluster analysis

The paper by Pantillon et al describes the use of clustering to identify the main scenarios among the ensemble members.

This exercise repeats some of the plots from the previous one but this time with clustering enabled.

Using clustering will highlight the ensemble members in each cluster in the plots.

In this exercise you will:

  • Construct your own qualitative clusters by choosing members for two clusters
  • Generate clusters using principal component analysis (similar to Pantillon et al).

Task 1: Create your own clusters

Clusters can be created manually from lists of the ensemble members.

Choose members for two clusters. The stamp maps are useful for this task.

From the stamp map of z500 at 24/9/2012 (t+96), identify ensemble members that represent the two most likely forecast scenarios.

It is usual to create clusters from z500 as it represents the large-scale flow and is not a noisy field. However, for this particular case study, the stamp map of 'tp' (total precipitation) over France is also very indicative of the distinct forecast scenarios. You might also try using other fields, such as 'mslp' or 'pv320K' to compare.

Panel
titleCreate your own clusters

Right-click 'ens_oper_cluster.example.txt' and select Edit (or make a duplicate)

The file contains two example lines:

Code Block
1#   2  3  4  9  22 33 40
2#   10 11 12 31 49

The first line defines the list of members for 'Cluster 1': in this example, members 2, 3, 4, 9, 22, 33, 40.

The second line defines the list of members for 'Cluster 2': in this example, members 10, 11, 12, 31, 49.

Change these two lines!
Put your choice of ensemble member numbers for cluster 1 and 2 (lines 1 and 2 respectively).

You can create multiple cluster definitions by using the 'Duplicate' menu option to make copies of the file for use in the plotting macros.

The filename is important!
The first part of the name 'ens_oper' refers to the ensemble dataset and must match the expID name used in the plotting macro. 
The 'example' part of the filename can be changed to your choice and should match the 'clustersId' value in the plotting macro.
As an example, a filename of: ens_2016_cluster.fred.txt would require 'expId=ens_2016', 'clustersId=fred' in the macro.


Panel
titlePlot ensembles with your cluster definitions

Use the clusters of ensemble members you have created in ens_oper_cluster.example.txt.

Change clustersId='example' in each of the ensemble plotting macros to enable cluster highlighting.

If you are looking at the 2016 reforecast, then make sure your file is called ens_2016_cluster.example.txt.

Replot ensembles:

RMSE: plot the RMSE curves using ens_rmse.mv. This will colour the curves differently according to which cluster they are in.

Stamp maps: the stamp maps will be reordered so the ensemble members will be grouped according to their cluster. Applies to stamp.mv and stamp_diff.mv. This will make it easier to see the forecast scenarios according to your clustering.

Spaghetti maps: with clusters enabled, two additional maps are produced which show the contour lines for each cluster.


Panel
titlePlot maps of parameters as clusters

The macro cluster_to_an.mv can be used to plot maps of parameters as clusters and compared to the analysis and HRES forecasts.

Use cluster_to_an.mv to plot z500 maps of your two clusters (equivalent to Figure 7 in Pantillon et al.)

If your cluster definition file is called 'ens_oper_cluster.example.txt', then Edit cluster_to_an.mv and set:

Code Block
languagebash
#ENS members (use ["all"] or a list of members like [1,2,3]
members_1=["cl.example.1"]
members_2=["cl.example.2"]

If your cluster definition file is has another name, e.g. ens_oper_cluster.fred.txt, then members_1=["cl.fred.1"].

Plot other parameters:

Plot total precipitation 'tp' for France (mapType=2). (Figure 8. in Pantillon et al.)


Panel
borderColorred

Q. Experiment with the choice of members in each cluster and plot z500 at t+96 (Figure 7 in Pantillon et al.). How similar are your cluster maps?
Q. What date/time does the impact of the different clusters become apparent?
Q. Are two clusters enough? Where do the extreme forecasts belong?

Task 2: Empirical orthogonal functions / Principal component analysis

A quantitative way of clustering an ensemble is by principal component analysis using empirical orthogonal functions (EOFs). The differences between each ensemble member and the ensemble mean are computed, and the eigenvalues and eigenfunctions of these differences (or variances) over all the members are found, so that the difference of each member can be expressed as a linear combination of these eigenfunctions, also known as EOFs.

Although geopotential height at 500hPa at 00Z 24/9/2012 is used in the paper by Pantillon et al., as it gives the best results, the steps described below can be used for any parameter at any step.
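For those curious about the computation behind eof.mv, the sketch below (entirely synthetic data, not the workshop fields) shows the basic steps: form the anomaly of each member from the ensemble mean, obtain the EOF patterns from a singular value decomposition, and project the members onto the leading EOFs to get the principal components that are later clustered.

Code Block
languagepy
titleEOF / principal component analysis of an ensemble (illustrative sketch, synthetic data)
import numpy as np

# Synthetic ensemble of 2-D fields: nmem members on an ny x nx grid
rng = np.random.default_rng(3)
nmem, ny, nx = 50, 20, 30
fields = rng.standard_normal((nmem, ny, nx))

# Anomalies: each member minus the ensemble mean, flattened to (members, gridpoints)
anom = (fields - fields.mean(axis=0)).reshape(nmem, ny * nx)

# EOFs from a singular value decomposition; rows of vt are the spatial EOF patterns
u, s, vt = np.linalg.svd(anom, full_matrices=False)

explained = s**2 / np.sum(s**2)   # fraction of variance explained by each EOF
pcs = anom @ vt[:2].T             # principal components: projection onto the first two EOFs

print("variance explained by EOF1, EOF2:", explained[:2].round(3))
print("PC matrix shape (members x 2):", pcs.shape)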

Image

The eof.mv macro computes the EOFs and the clustering.

Warning

Always run eof.mv first for a given parameter, step and ensemble forecast (e.g. ens_oper or ens_2016) to create the cluster file.

Otherwise cluster_to_an.mv and other plots with clustering enabled will fail or plot with the wrong clustering of ensemble members.

If you change the step or the ensemble, recompute the EOFs and cluster definitions using eof.mv. Note, however, that once a cluster has been computed, it can be used for all steps with any parameter.

If you rerun the eof.mv macro, it will write to a new file called for example 'ens_oper.eof.txt.latest' if the original file still exists. Make sure you rename this file to 'ens_oper.eof.txt' otherwise the plotting macros will continue to use the original ens_oper.eof.txt.


Panel
titleCompute EOFs and clusters

Edit 'eof.mv'

Set the parameter to use, choice of ensemble and forecast step required for the EOF computation:

Code Block
param="z500"
expId="ens_oper"
steps=[2012-09-24 00:00]

Run the macro.

The above example will compute the EOFs of geopotential height anomaly at 500hPa using the 2012 operational ensemble at forecast step 00Z on 24/09/2012.

A plot will appear showing the first two EOFs (similar to Figure 5 in Pantillon et al.)

The geographical area for the EOF computation is: 35-55N, 10W-20E (same as in Pantillon et al). If desired it can be changed in eof.mv.


Panel
titleEOF cluster definition file

The eof.mv macro will create a text file with the cluster definitions, in the same format as described above in the previous task.

The filename will be different: it will have 'eof' in it to indicate that it was created using empirical orthogonal functions.

Code Block
languagebash
titleCluster filename created for ensemble 'ens_oper' using eof.mv
ens_oper_cluster.eof.txt

If a different ensemble forecast is used, for example ens_2016, the filename will be: ens_2016_cluster.eof.txt

This cluster definition file can then be used to plot any variable at all steps (as for task 1).


Panel
borderColorred

Q. What do the EOFs plotted by eof.mv show?
Q. Change the parameter used for the EOF (try the total precipitation 'tp' field). How does the cluster change?


 

Panel
titlePlot ensemble and cluster maps

Use the cluster definition file computed by eof.mv to plot the ensembles and maps with clusters enabled (as described for task 1, but this time with the 'eof' cluster file).

The macro cluster_to_an.mv can be used to plot maps of parameters as clusters and compared to the analysis and HRES forecasts.

Use cluster_to_an.mv to plot z500 and MSLP maps of the two clusters created by the EOF/PCA analysis (equivalent to Figure 7 in Pantillon et al.)

Edit cluster_to_an.mv and set:

Code Block
languagebash
#ENS members (use ["all"] or a list of members like [1,2,3]
members_1=["cl.eof.1"]
members_2=["cl.eof.2"]

Run the macro.

If time also look at the total precipitation (tp) over France and PV/320K.

 

Cluster 1 corresponds to a cut-off low moving eastward over Europe and cluster 2 to a weak ridge over western Europe. Cluster 1 exhibits a weak interaction between Nadine and the cut-off low over Europe. In cluster 2, there is a strong interaction between the cut-off and Nadine, in which Nadine makes landfall over the Iberian peninsula.
Panel
borderColorred

Q. How similar is the PCA computed clusters to your manual clustering?
Q. Which cluster best represents the analysis?
Q. How useful is the cluster analysis as an aid to forecasting for HyMEX?
Q. Change the date/time used to compute the clusters. How does the variance explained by the first two clusters change?  Is geopotential the best parameter to use?
Q. Compare ens_oper and ens_2016 (or with a different group). How do the clusters differ and why?


Panel
titleChanging the number of clusters

To change the number of clusters created by the EOF analysis, find the file in the folder 'base' called base_eof.mv.

Edit this file and near the top, change:

Code Block
  clusterNum=2

to

Code Block
  clusterNum=3

then select 'File' and 'Save' to save the changes.

Now if you run the eof.mv macro, it will generate a text file, such as ens_oper.eof.txt with 3 lines, one for each cluster. It will also show the 3 clusters as different colours.

You can use the 3 clusters in the cluster_to_an.mv macro, for example:

Code Block
param="z500.s"
expId="ens_oper"
members_1=["cl.eof.1"]
members_2=["cl.eof.3"]

would plot the mean of the members in the first and the third clusters (it's not possible to plot all three clusters together).


Panel
titleCluster method code

For those interested:

The code that computes the clusters can be found in the Python script: aux/cluster.py.

This uses the 'ward' cluster method from SciPy. Other cluster algorithms are available. See http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html#scipy.cluster.hierarchy.linkage

The python code can be changed to a different algorithm or the more adventurous can write their own cluster algorithm!
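As a rough illustration of the approach (the real computation lives in aux/cluster.py), the sketch below applies Ward linkage to two synthetic principal components per member and cuts the tree into two clusters; the data are random and only the SciPy calls are meant to be representative.

Code Block
languagepy
titleWard clustering of principal components (illustrative sketch, synthetic data)
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic principal components (e.g. PC1 and PC2 of z500 anomalies) for 50 members
rng = np.random.default_rng(4)
pcs = np.vstack([rng.normal(-1.0, 0.5, (25, 2)),    # one group of members
                 rng.normal(+1.0, 0.5, (25, 2))])   # a second group

Z = linkage(pcs, method="ward")                  # hierarchical clustering with Ward's criterion
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into two clusters

for c in (1, 2):
    print("cluster", c, ":", np.where(labels == c)[0] + 1)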

Exercise 5. Percentiles and probabilities

To further compare the 2012 and 2016 ensemble forecasts, plots showing the percentile amount and probabilities above a threshold can be made for total precipitation.

Use these icons:

Image

Both these macros will use the 6-hourly total precipitation for forecast steps at 90, 96 and 102 hours, plotted over France.

Task 1. Plot percentiles of total precipitation

Edit the percentile_tp_compare.mv icon.

Set the percentile for the total precipitation to 75%:

Code Block
languagebash
#The percentile of ENS precipitation forecast
perc=75

Run the macro and compare the percentiles from both the forecasts.  Change the percentiles to see how the forecasts differ.
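Conceptually, a percentile map applies a percentile across the member dimension at every grid point. A minimal standalone sketch (synthetic precipitation values, not the workshop data):

Code Block
languagepy
titleEnsemble percentile of precipitation (illustrative sketch, synthetic data)
import numpy as np

# Synthetic 6-hourly precipitation (mm) from 51 members on a small grid
rng = np.random.default_rng(5)
tp = rng.gamma(shape=0.7, scale=6.0, size=(51, 10, 10))

# 75th percentile of the ensemble at each grid point (axis 0 is the member dimension)
p75 = np.percentile(tp, 75, axis=0)
print(p75.round(1))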

Task 2: Plot probabilities of total precipitation

This macro will produce maps showing the probability of 6-hourly precipitation for the same area as in Task 1.

In this case, the maps show the probability that total precipitation exceeds a threshold expressed in mm.

Edit the prob_tp_compare.mv icon and set the precipitation threshold to 20mm:

Code Block
languagebash
#The probability of precipitation greater than
prob=20

Run the macro and view the map. Try changing the threshold value and run.
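Similarly, the exceedance probability at each grid point is simply the fraction of members above the threshold; here is a minimal standalone sketch with synthetic values:

Code Block
languagepy
titleEnsemble probability of precipitation above a threshold (illustrative sketch, synthetic data)
import numpy as np

# Synthetic 6-hourly precipitation (mm) from 51 members on a small grid
rng = np.random.default_rng(6)
tp = rng.gamma(shape=0.7, scale=6.0, size=(51, 10, 10))

# Probability (%) that precipitation exceeds 20 mm at each grid point
prob20 = 100.0 * np.mean(tp > 20.0, axis=0)
print(prob20.round(0))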

Panel
borderColorred

Q. Using these two macros, compare the 2012 and 2016 forecast ensemble. Which was the better forecast for HyMEX flight planning?

Exercise 6. Exploring the role of uncertainty

To further understand the impact of the different types of uncertainty (initial and model), some forecasts with OpenIFS have been made in which the uncertainty has been selectively disabled. These experiments use a 40 member ensemble and are at T319 resolution, lower than the operational ensemble.

As part of this exercise you may have run OpenIFS yourself in the class to generate another ensemble; one participant per ensemble member.

Recap

    • EDA is the Ensemble Data Assimilation.
    • SV is the use of Singular Vectors to perturb the initial conditions.
    • SPPT is the stochastically perturbed physical tendencies scheme (stochastic physics).
    • SKEB is the stochastic kinetic energy backscatter scheme applied to the model dynamics.

Experiments available:

  • Experiment id: ens_both.  EDA+SV+SPPT+SKEB  : Includes initial data uncertainty (EDA, SV) and model uncertainty (SPPT, SKEB)
  • Experiment id: ens_initial.  EDA+SV only  : Includes only initial data uncertainty
  • Experiment id: ens_model. SPPT+SKEB only : Includes model uncertainty only

The aim of this exercise is to use the same visualisation and investigation techniques as in the previous exercises to understand the impact that the different types of uncertainty have on the forecast.

A key difference between this exercise and the previous one is that these forecasts have been run at a lower horizontal resolution. In the exercises below, it will be instructive to compare with the operational ensemble plots from the previous exercise.

Panel
bgColorwhite
titleBGColorwhite
titleTeam working

For this exercise, we suggest that either each team focuses on one of the above experiments and compares it with the operational ensemble, or each team member focuses on one of the experiments and the team then discusses and compares them.


Panel
bgColorwhite
titleBGColorlightlightgrey
titleAvailable plot types

 The different macros available for this exercise are very similar to those in previous exercises.

Image Added

For this exercise, use the icons in the row labelled 'Experiments'. These work in a similar way to the previous exercises.

ens_exps_rmse.mv     : this will produce RMSE plumes for all the above experiments and the operational ensemble.

ens_exps_to_an.mv   : this produces 4 plots showing the ensemble spread from the OpenIFS experiments compared to the analysis.

ens_exps_to_an_spag.mv : this will produce spaghetti maps for a particular parameter contour value compared to the analysis.

ens_part_to_all.mv     : this allows the spread & mean of a subset of the ensemble members to be compared to the whole ensemble.


Info

For these tasks the Metview icons in the row labelled 'ENS' can also be used to plot the different experiments (e.g. stamp plots). Please see the comments in those macros for more details of how to select the different OpenIFS experiments.

Remember that you can make copies of the icons to keep your changes.

Task 1. RMSE plumes

Use the ens_exps_rmse.mv icon and plot the RMSE curves for the different OpenIFS experiments.

Compare the spread from the different experiments.
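As a reminder of what the plumes show, here is a minimal, hypothetical sketch of the RMSE calculation. Dummy arrays stand in for the forecast and analysis fields; a real calculation would also weight grid points by area.

Code Block
languagepy
# Hypothetical sketch of RMSE plumes: one RMSE curve per ensemble member,
# computed against the verifying analysis at each forecast step.
import numpy as np

members, steps, points = 40, 11, 1000        # e.g. a 40-member OpenIFS experiment
fc = np.random.rand(members, steps, points)  # forecast fields (dummy data)
an = np.random.rand(steps, points)           # verifying analyses (dummy data)

rmse = np.sqrt(np.mean((fc - an) ** 2, axis=-1))  # shape (members, steps)
# Each row of 'rmse' is one member's plume; the envelope of the rows at a
# given step indicates how far the members diverge from the analysis.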

Panel
borderColorred
titleQuestions

The OpenIFS experiments were run at a lower horizontal resolution. How does the RMSE spread compare between the 'ens_oper' and 'ens_both' experiments?

Task 2. Ensemble spread and spaghetti plots

Use the ens_exps_to_an.mv icon and plot the ensemble spread for the different OpenIFS experiments.

Also use the ens_exps_to_an_spag.mv icon to view the spaghetti plots for MSLP for the different OpenIFS experiments.
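For reference, the ensemble spread shown in these plots is typically computed as the standard deviation of the members about the ensemble mean at each grid point. A minimal, hypothetical sketch with dummy data (not the macro itself):

Code Block
languagepy
# Hypothetical sketch of ensemble spread: standard deviation of the members
# about the ensemble mean at each grid point.
import numpy as np

fields = np.random.rand(40, 1000)   # 40 members x 1000 grid points (dummy data)
ens_mean = fields.mean(axis=0)      # ensemble mean field
spread = fields.std(axis=0)         # spread field, one value per grid point
print(spread.mean())                # unweighted area-average spread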

Panel
borderColorred

Q. What is the impact of reducing the resolution of the forecasts? (hint: compare the spaghetti plots of MSLP with those from the previous exercise).
Q. How does changing the representation of uncertainty affect the spread?
Q. Which of the experiments ens_initial and ens_model gives the better spread?
Q. Is it possible to determine whether initial uncertainty or model uncertainty is more or less important in the forecast error?

If time:

  • use the ens_part_to_all.mv icon to compare the spread and mean of a subset of the ensemble members with those of the whole ensemble. Use the stamp_map.mv icon to choose the set of ensemble members you wish to consider (note that the stamp_map icons can be used with these OpenIFS experiments; see the comments in the files).

Task 3. Which initial perturbations are important?

The objective of this task is to identify which areas of the initial perturbations appear to have been important for an improved forecast in the ensemble.

Using the macros provided:

  • Find an ensemble member (or members) that gave a consistently improved forecast and take the difference from the control.
  • Step back to the beginning of the forecast and look to see where the difference originates.

Use the large geographical area for this task. Use the MSLP and z500 fields (and any others you think are useful).

Task 4. Non-linear development

Ensemble perturbations are applied in positive and negative pairs. This is done to centre the perturbations about the control forecast.

So, for each computed perturbation, two perturbed initial fields are created, e.g. ensemble members 1 & 2 are a pair, where member 1 is a positive difference from the control and member 2 is a negative difference.
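As an illustration of the pairing, here is a minimal, hypothetical sketch with dummy fields (in the exercise the actual differences come from the difference macros): at the initial time the two differences from the control cancel exactly, and the growth of their sum during the forecast is a measure of non-linearity.

Code Block
languagepy
# Hypothetical sketch of a positive/negative perturbation pair.
import numpy as np

control = np.random.rand(1000)      # control initial field (dummy data)
pert = 0.1 * np.random.rand(1000)   # a computed perturbation (dummy data)

member1 = control + pert            # 'positive' member of the pair
member2 = control - pert            # 'negative' member of the pair

diff_pos = member1 - control
diff_neg = member2 - control

# At the initial time diff_pos = -diff_neg, so their sum is zero.  As the
# forecast evolves, non-linear development makes the sum grow; comparing the
# two difference maps at later steps reveals where this happens.
residual = diff_pos + diff_neg
print(np.abs(residual).max())       # exactly 0.0 at the initial time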

  • Choose an odd & even ensemble pair (use the stamp plots). Use the appropriate icon to compute the difference of the members from the ensemble control forecast.
  • Study the development of these differences using the MSLP and wind fields. If the error growth is linear, the differences will be the same but of opposite sign; non-linearity will result in different patterns in the difference maps.
  • Repeat looking at one of the other forecasts. How does it vary between the different forecasts?

If time:

  • Plot PV at 320K. What are the differences between the forecasts? Upper-tropospheric differences played a role in the interaction of Hurricane Nadine and the cut-off low.

 

Appendix

Further reading

For more information on the stochastic physics scheme in (Open)IFS, see the article:

Shutts et al, 2011, ECMWF Newsletter 129.

Acknowledgements

We gratefully acknowledge the following for their contributions in preparing these exercises. From ECMWF: Glenn Carver, Sandor Kertesz, Linus Magnusson, Iain Russell, Simon Lang, Filip Vana. From ENM/Meteo-France: Frédéric Ferry, Etienne Chabot, David Pollack and Thierry Barthet for IT support at ENM. We also thank the students who have participated in the training and workshop using this material for helping to improve it!

 


 

...

Excerpt Include
Credits
Credits
nopaneltrue