Section
Column

Introduction

The ECMWF operational ensemble forecasts for the western Mediterranean region exhibited high uncertainty while Hurricane Nadine was moving slowly over the eastern North Atlantic in September 2012. The interaction with an Atlantic cut-off low produced a bifurcation in the ensemble and significant spread, affecting both the track of Hurricane Nadine and the synoptic conditions downstream.

The HyMEX (Hydrological cycle in Mediterranean eXperiment) field campaign was also underway and forecast uncertainty was a major issue for planning observations during the first special observations period of the campaign.

This case study examines the forecasts of the interaction between Nadine and the Atlantic cut-off low in the context of ensemble forecasting. It explores the scientific rationale for using ensemble forecasts, why they are necessary and how they can be interpreted, particularly in a "real world" situation of forecasting for an observational field campaign.

 

Panel
title: This case study is based on the following paper, which is recommended reading

Pantillon, F., Chaboureau, J.-P. and Richard, E. (2015), 'Vortex-vortex interaction between Hurricane Nadine and an Atlantic cutoff dropping the predictability over the Mediterranean', Q. J. R. Meteorol. Soc., http://onlinelibrary.wiley.com/doi/10.1002/qj.2635/abstract

In this case study

In the exercises for this case study we will:

  • study the development of Hurricane Nadine and the interaction with the Atlantic cut-off low using the ECMWF analyses.
  • study the performance of the ECMWF high-resolution (HRES) deterministic forecast at the time.
  • use the operational ensemble forecast to look at the forecast spread and understand the uncertainty downstream of the interaction.
  • compare a reforecast using the current (May 2016) ECMWF operational ensemble with the 2012 ensemble forecasts.
  • use principal component analysis (PCA) with clustering techniques (see Pantillon et al.) to characterize the behaviour of the ensembles.
  • see how forecast products were used during the HyMEX field campaign.
Column
Panel

Table of Contents (maxLevel 1)

...

Panel
title: Questions

1. What differences can be seen?

2. How well did the forecast position Hurricane Nadine and the North Atlantic cut-off low?

If time: look at other fields to study the forecast, for example the jet position, total precipitation (tp) and PV on the 320 K surface.

Task 3: Precipitation over France

...

Panel
title: Vertical cross-sections

Potential temperature + potential vorticity, and
humidity + vertical motion: to characterize the cold-core and warm-core structures of Hurricane Nadine and the cut-off low.
Info

 (tick)  This completes the second exercise.

You have seen how the ECMWF operational HRES forecast of 2012-09-20 00Z performed compared to the analysis. The next exercises look at the ECMWF ensemble.

Exercise 3 : The operational ensemble forecasts

Recap

  • ECMWF operational ensemble forecasts treat uncertainty in both the initial data and the model.
  • Initial analysis uncertainty: sampled by use of Singular Vectors (SV) and Ensemble Data Assimilation (EDA) methods. Singular Vectors are a way of representing the fastest growing modes in the initial state.
  • Model uncertainty: sampled by use of stochastic parametrizations. In the IFS this means Stochastically Perturbed Physical Tendencies (SPPT) and the stochastic kinetic energy backscatter scheme (SKEB).
  • Ensemble mean : the average of all the ensemble members. Where the spread is high, small-scale features can be smoothed out in the ensemble mean.
  • Ensemble spread : the standard deviation of the ensemble members; it represents how different the members are from the ensemble mean. A minimal sketch of how these two quantities are computed is given below.
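
A minimal sketch of how the ensemble mean and spread are computed (plain NumPy with made-up array shapes and random data; this is not one of the Metview macros used in the exercises):

Code Block
language: python

import numpy as np

n_members, nlat, nlon = 50, 181, 360                # hypothetical ensemble and grid sizes
members = np.random.randn(n_members, nlat, nlon)    # stand-in for e.g. the MSLP field of each member

ens_mean = members.mean(axis=0)    # average over the member dimension
ens_spread = members.std(axis=0)   # standard deviation about the ensemble mean
print(ens_mean.shape, ens_spread.shape)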

Ensemble exercise tasks

This exercise has more tasks than the previous ones.

Visualising ensemble forecasts can be done in various ways. During this exercise, in order to understand the errors and uncertainties in the forecast, we will use a number of visualisation techniques.

 

General questions

Panel
  1. How do the ensemble mean 10 m wind and MSLP fields compare to the HRES forecast and analysis?
  2. Examine the initial diversity in the ensemble and how the ensemble spread and error growth develop. What do the extreme forecasts look like?
  3. Are there any members that consistently provide a better forecast? Can you identify the members closest to the observations/analysis using both qualitative and quantitative approaches?

Available plot types

Panel


For these exercises please use the Metview icons in the row labelled 'ENS'.

ens_rmse.mv : this is similar to the oper_rmse.mv in the previous exercise. It will plot the root-mean-square-error growth for the ensemble forecasts.

ens_to_an.mv : this will plot (a) the mean of the ensemble forecast, (b) the ensemble spread, (c) the HRES deterministic forecast and (d) the analysis for the same date.

ens_to_an_runs_spag.mv : this plots a 'spaghetti map' for a given parameter for the ensemble forecasts compared to the analysis. Another way of visualizing ensemble spread.

stamp.mv : this plots all of the ensemble forecasts for a particular field and lead time. Each forecast is shown in a stamp sized map. Very useful for a quick visual inspection of each ensemble forecast.

stamp_diff.mv : similar to stamp.mv except that for each forecast it plots a difference map from the analysis. Very useful for quick visual inspection of the forecast differences of each ensemble forecast.

 

Additional plots for further analysis:


pf_to_cf_diff.mv : this useful macro allows two individual ensemble forecasts to be compared to the control forecast. As well as plotting the forecasts from the members, it also shows a difference map for each.

ens_to_an_diff.mv : this will plot the difference between an ensemble forecast member and the analysis for a given parameter.

Getting started

Panel
title: Storm track printed handout: ens_oper

Please refer to the handout showing the storm tracks labelled 'ens_oper' during this exercise. It is provided for reference and may assist interpreting the plots.

Each page shows 4 plots, one for each forecast start date. The position of the symbols represents the centre of the storm valid at 12 UTC on 28 September 2012. The colour of the symbols indicates the central pressure.

The actual track of the storm from the analysis is shown as the red curve with the position at 28th 12Z highlighted as the hour glass symbol. The HRES forecast for the ensemble is shown as the green curve and square symbol. The lines show the 12hr track of the storm; 6hrs either side of the symbol.

Note the propagation speed and direction of the storm tracks.  The plot also shows the centres of the barotropic low to the North.

Q. What can be deduced about the forecast from these plots?

Info

The plots in the handout can also be found in the 'pics' folder.

Task 1: RMSE "plumes"

This is similar to task 1 in exercise 2, except now the RMSE curves for all the ensemble members from a particular forecast will be plotted. All 4 forecast dates are shown.

Using the ens_rmse.mv icon, right-click, select 'Edit' and plot the curves for 'mslp'. Note this is only for the European region. The option to plot over the larger geographical region is not available.

Q. What features can be noted from these plumes?
Q. How do these change with different forecast lead times?

Note there appear to be some forecasts that give a lower RMS error than the control forecast. Bear this in mind for the following tasks.
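
As background, a hedged NumPy sketch of the quantity behind these plumes (array shapes and data are invented; this is not the ens_rmse.mv macro itself):

Code Block
language: python

import numpy as np

n_members, n_steps, npts = 50, 20, 10000
fc = np.random.randn(n_members, n_steps, npts)   # member forecasts on a flattened grid
an = np.random.randn(n_steps, npts)              # verifying analyses, one per lead time

# RMSE of each member against the analysis, per lead time: shape (n_members, n_steps)
rmse = np.sqrt(((fc - an[None, :, :]) ** 2).mean(axis=-1))
# Plotting rmse[m, :] against lead time for each member m gives the RMSE "plume".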

If time

  • Explore the plumes from other variables.
  • Do you see the same amount of spread in RMSE from other pressure levels in the atmosphere?

Task 2: Ensemble spread

In the previous task, we have seen that introducing uncertainty into the forecast by starting from different initial conditions and enabling the stochastic parameterizations in IFS can result in significant differences in the RMSE (for this particular case and geographical region).

The purpose of this task is to explore the difference in more detail and look in particular at the 'ensemble spread'.

Refer to the storm track plots in the handout in this exercise.

Use the ens_to_an.mv icon and plot the MSLP and wind fields. This will produce plots showing: the mean of  all the ensemble forecasts, the spread of the ensemble forecasts, the operational HRES deterministic forecast and the analysis.

Q. How does the mean of the ensemble forecasts compare to the HRES & analysis?
Q. Does the ensemble spread capture the error in the forecast?
Q. What other comments can you make about the ensemble spread?

If time:

  • change the 'run=' value to look at the mean and spread for other forecast lead times.
  • set the 'members=' option to change the number of members in the spread plots.
    e.g. try a "reduced" ensemble by only using the first 5 ensemble members: "members=[1,2,3,4,5]". A sketch of the effect this has on the spread estimate is given below.
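
A minimal sketch of what the 'members=' subsetting does to the spread estimate (illustrative NumPy only, with synthetic data):

Code Block
language: python

import numpy as np

members = np.random.randn(50, 181, 360)     # stand-in for the 50 perturbed member fields

full_spread = members.std(axis=0)           # spread estimated from all 50 members
reduced_spread = members[:5].std(axis=0)    # spread from members 1-5 only, cf. members=[1,2,3,4,5]

# With only 5 members the spread field is a much noisier estimate of the
# forecast uncertainty, which is what the reduced-ensemble plots should show.
print(full_spread.mean(), reduced_spread.mean())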

Task 3: Spaghetti plots - another way to visualise spread

A "spaghetti" plot is where a single contour of a parameter is plotted for all ensemble members. It is another way of visualizing the differences between the ensemble members and focussing on features.

Use the ens_to_an_runs_spag.mv icon. Plot and animate the MSLP field using the default value for the contour level. This will indicate the low pressure centre. Note that not all members may reach the low pressure set by the contour.

Note that this macro may animate slowly because of the computations required.

Experiment with changing the contour value and (if time) plotting other fields.

Task 4: Visualise ensemble members and difference

So far we have been looking at reducing the information in some way to visualise the ensemble.

To visualise all the ensemble members as normal maps, we can use stamp maps. These are small, stamp sized contour maps plotted for each ensemble member using a small set of contours.
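
Conceptually, a stamp-map layout is a grid of small multiples sharing the same few contour levels; a minimal matplotlib sketch with synthetic fields (not the stamp.mv macro) is:

Code Block
language: python

import numpy as np
import matplotlib.pyplot as plt

lon = np.linspace(-40, 20, 61)
lat = np.linspace(25, 65, 41)
LON, LAT = np.meshgrid(lon, lat)

# 12 stand-in member fields: a low centred at a slightly different position each time
members = [1015 - 25 * np.exp(-((LON + 10 + np.random.randn()) ** 2 +
                                (LAT - 45 + np.random.randn()) ** 2) / 50.0)
           for _ in range(12)]

fig, axes = plt.subplots(3, 4, figsize=(10, 6))
for m, (ax, field) in enumerate(zip(axes.flat, members), start=1):
    ax.contour(LON, LAT, field, levels=[995, 1000, 1005, 1010], linewidths=0.5)
    ax.set_title("member %d" % m, fontsize=7)
    ax.set_xticks([])
    ax.set_yticks([])
plt.tight_layout()
plt.show()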

There are two icons to use, stamp.mv and stamp_diff.mv. Plot the MSLP parameter for the ensemble. Repeat for wind field.

Q. Using the stamp and stamp difference maps, study the ensemble. Identify which ensemble members produce "better" forecasts.
Q. Can you see any distinctive patterns in the difference maps? Are the differences similar in some way?

If time:

Use the macros to see how the perturbations are evolving; use ens_to_an_diff.mv to compare individual members to the analyses.

Find ensemble members that appear to produce a better forecast and look to see how the initial development in these members differs. Start by using a single lead time and examine the forecast on the 28th.

  • Select 'better' forecasts using the stamp plots and use ens_to_an.mv to modify the list of ensemble members plotted. Can you tell which area is more sensitive for the formation of the storm?
  • use the pf_to_cf_diff.mv macro to look at the difference between these perturbed ensemble member forecasts and the control forecast.
Info

Use 'mapType=1' to see the larger geographical area (please note that due to data volume restrictions, this mapType only works for the MSLP parameter).

Task 5:  Cumulative distribution function at different locations

Recap

Figure (from Wikipedia): the probability density function of the normal (Gaussian) distribution. The probabilities, expressed as percentages for various widths in standard deviations (σ), represent the area under the curve.

Figure (from Wikipedia): the cumulative distribution function for a normal distribution with varying standard deviation (σ).

Cumulative distribution function (CDF)

The figures above illustrate the relationship between a normal distribution and its associated cumulative distribution function. The CDF is constructed from the area under the probability density function.

The CDF gives the probability that a value drawn from the distribution will be less than or equal to the corresponding value on the x-axis. For example, in the figure, the probability for values less than or equal to x=0 is 50%.

The shape of the CDF curve is related to the shape of the normal distribution. The width of the CDF curve is directly related to the standard deviation of the probability density function. For our ensemble, the width is therefore related to the 'ensemble spread'.

For a forecast ensemble where all values were the same, the CDF would be a vertical straight line.
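
A minimal sketch of how such an empirical CDF can be built from the ensemble values at one location (synthetic numbers only; cdf.mv does the real work for the stations in the exercise):

Code Block
language: python

import numpy as np
import matplotlib.pyplot as plt

mslp_members = 1008.0 + 4.0 * np.random.randn(50)   # stand-in: MSLP of 50 members at one station

x = np.sort(mslp_members)
p = np.arange(1, x.size + 1) / x.size               # cumulative probability in steps of 1/N

plt.step(x, p, where="post")
plt.xlabel("MSLP (hPa)")
plt.ylabel("probability of a value <= x")
plt.title("Empirical CDF at one station: the width reflects the ensemble spread")
plt.show()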

Plot the CDF for 3 locations


This exercise uses the cdf.mv icon. Right-click, select 'Edit' and then:


  • Plot the CDF of MSLP for the 3 locations listed in the macro, e.g. Reading, Amsterdam, Copenhagen.
  • If time, change the forecast run date and compare the CDF for the different forecasts.

Q. What is the difference between the different stations and why? (refer to the ensemble spread maps to answer this)
Q. How does the CDF for Reading change with different forecast lead (run) dates?

Forecasting an event using an ensemble: work in teams for group discussion

Ensemble forecasts can be used to give probabilities to a forecast issued to the public.

Panel
title: Forecast for HyMEX
To be done...

 

 

Notes from Frederic: email 7/4/16



...

Etienne's presentation in the morning
SCM experiments
I suggest to focus on the period before the 20-25 September and to study Nadine and the cutoff, not yet the impact on the Mediterranean area (we leave that for later)
Agreed. The first exercise will be to examine the track & changes in the storm using the analyses.
 
*Here are some inputs concerning Day2 and Nadine's study on day 2:
*1)* T1279 Analysis 0920 + t+96 deterministic forecast 0924 (t+96h) -->
focusing on the interaction between Nadine and the cutoff. Maybe an
extra plot of the forecasted rainfall at t+96 over France ?
Agreed.
*2)* Ens T639 forecasts : I saw that T639 is the 2012 operational
ensemble resolution, so we will see the same bifurcation in the
scenarios as explained in Pantillon : the visualization of the spread,
the plumes, the spaghettis, ... will help here. I am sure you have great
ideas on this topic. Maybe we can propose some horizontal maps of each
(or some) members ?
I think the exercises we used last year will fit well here. I will start drafting the exercises on the wiki and ask you to help and comment.
I presume the exercises should be in English? Or should we do a side-by-side English/French version?
*3)* PCA and clustering : if you manage to put it in Metview this will
be great to look at the 2 distinct patterns. I asked Florian Pantillon
his NCL sources to do the trick. I'll use it to build an extra NCL
exercise with PCA, clustering and compositing, if we have time. The file
format needed will be netcdf.
We think our PCA code can be used to reproduce fig 5.  We thought we could also reproduce fig 6 but instead of dots plot the ensemble number. Then the students can build the clusters (Fig. 7) but grouping the ensemble members together?
We (here) need to try this and see how far we can reproduce the rest of the figures in the paper.
*4)* Ensemble runs : initial (EDA+SV) and model (SPPT+SKEB) : same as
last year
SST experiment might be too much, except if we shorten the ensemble study...
See above. My preference after talking with people here is to use the comparison between the 2012 operational ensemble and the 2016 operational ensemble. The lower-resolution (T319) ensembles; control (EDA+SV), (SPPT+SKEB) ensembles for this case are running now and we can include the data (as long as file size does not become an issue). But honestly, I do not think there will be time. I will leave it to you to decide!
We counted 7hrs total for the practicals (not including the SCM). Part of that time the students will need to prepare some plots for the discussion on Friday.
I am concerned about the time available. Perhaps my talk on weds 9.30-10 could be shortened to 15mins.
*Day 3
SCM experiments
For the SCM we thought that it might be interesting to use the SCM for a point near Toulouse that experienced very heavy rainfall during HYMEX. Then we get the students to adjust the entrainment rates  (similar to the convection exercises here) to see what impact it has on the precipitation?

Exercise 2.

Task 1 : forecast error
Task 2 : compare forecast to analysis
Task 3 : visualize ensembles (plumes, ensemble spread, spaghetti, stamp, CDF)
--> These 3 tasks from last year are very interesting. To gain time maybe we should put a group on each item for task 3, or suppress task 2 ? The CDF adds a "statistical" flavour to the workshop; do you think we can adapt it to our case ?
Task 4 : PCA and clustering
If not possible in Metview I can make the students plot with NCL figures 5, 6 and 7 from Pantillon.
Figure 5 shows that EOF1 accounts for 3/4 of the variance. This dipole pattern is typical when tropical systems interact with the mid-latitudes. No need to spend a lot of time on this.
Figure 6 is much more interesting. It allows us to see that we can choose 2 clusters each containing approximately half of the members. The deterministic forecast is close to the two outliers, and the control and the analysis belong to cluster 1.
From Figure 7 we see that cluster 1 corresponds to a cutoff moving eastward over Europe and cluster 2 to a weak ridge over western Europe.
It would be great if we could also do the cluster composite of rainfall from Figure 8 : cluster 1 shows an impact on precipitation over the Cévennes whereas cluster 2 shows weak precipitation over the Cévennes.
The plot of the cluster member tracks of Nadine and the cutoff from figure 10 is also very interesting to me, I think we should do it. We see more clearly that cluster 1 exhibits a weak interaction between the cutoff and the low, with the cutoff moving over Europe. In cluster 2, there is a strong interaction between the cutoff and Nadine, and Nadine makes landfall over the Iberian peninsula (in model world; is it realistic ?). I don't know if the tracking is easy to do in Metview, as it implies tracking the cutoff and the low for each member.
Like you said in a previous mail, there is a possibility of interactivity for figures 7 (MSLP and Z500 composites), 8 (wind and RR composites) and 10 (member tracks). We have to identify the cluster members by a number and then make the students group the members to create the cluster composites. I think it is a good idea (see the illustrative PCA and clustering sketch below).
Task 5 : Sensitivity experiments to the SST coupling

As I am writing I am beginning to wonder if we should not make 2 groups : one for task 4 and one for task 5. Tasks 1-3 would be for all students. This would allow us to keep the CDF task. What do you think ?
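
For reference, an illustrative sketch of the PCA/EOF-plus-clustering analysis discussed in these notes, using synthetic data and scikit-learn; it is not the NCL or Metview code referred to above:

Code Block
language: python

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

n_members, npts = 50, 5000
fields = np.random.randn(n_members, npts)     # stand-in for flattened Z500 fields of each member
anom = fields - fields.mean(axis=0)           # anomalies about the ensemble mean

pca = PCA(n_components=2).fit(anom)           # leading EOFs of the ensemble anomalies
pcs = pca.transform(anom)                     # (n_members, 2) principal components
print("explained variance ratio:", pca.explained_variance_ratio_)

# group the members into 2 clusters in PC space, cf. the two scenarios in the paper
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)
for c in range(2):
    print("cluster", c + 1, ": members", np.where(clusters == c)[0] + 1)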

We discussed today with Etienne to propose a more detailed list of the requested fields and data for the Metview practice on the first day (2h30).
T1279 Analysis : 20120920 0000 UTC to 20120925 0000 UTC --> Only the 20 September analysis will be looked at, but I assume you need the other analyses to compute the RMSE on day 2 ?
T1279 Deterministic forecast : 20 September 0000 UTC analysis and 20120925 0000 UTC deterministic forecast (t+120h).
Extended analysis : 15-20 September just for MSLP and T2m (or better the SST) --> Nadine tracking before the 20th
Extended deterministic forecast : 20-28 September just for MSLP : Etienne told me that the ECMWF model of the 20 000UTC proposed a very extreme situation on the 28th, with a storm over Gibraltar. This would be a way to illustrate the limits of a deterministic approach.
For analysis and forecasts, 6 hourly data is OK if you have data size issues.
Horizontal maps (analysis + forecast) :
1 : Geopotential + temperature at 500 hPa --> mid troposphere localization of the cold cutoff and the warm Nadine. On the deterministic forecast we should not see Nadine and the cutoff "meeting", with Nadine moving eastward.
2 : MSLP + relative humidity at 700 hPa + 850 hPa absolute vorticity --> Signatures at low levels of Nadine and the disturbance associated with the cutoff low.  Mid-level humidity of the systems.
3 : Equivalent potential temperature at 850 hPa + winds at 850 hPa + vertical velocity at 600 hPa + MSLP in background --> focussing on the moist and warm air in the lower levels and the vertical motion. There should not be a strong horizontal temperature gradient around Nadine, and the winds should be stronger for Nadine than for the cutoff.
4 : 10 metre winds + 6-hourly RR + MSLP in background --> We should see an impact on the RR over France around t+108h (cf. Pantillon fig 2)
5 : 330 K PV + 330 K winds + MSLP in background (fig 13 in Pantillon) --> Interaction between Nadine and the trough.
Vertical x-sections in the cutoff and in the low :
PV + winds (preferably normal winds) + if possible potential temperature --> to look at the cold core or warm core structure of the systems on the vertical and the signature in PV and winds.
PV + relative humidity + vertical velocity --> a more classical x-section that we use to see if a PV anomaly is accompanied with vertical motion or not.
For these x-sections we can choose 3 or 4 times that appear to be interesting. Interactivity would be good, to make the students look a little bit in the code.
Tephigrams : We are not used to tephigrams at Meteo France, we use Emagrams. So we think it will be less confusing for the students if Etienne shows them some emagrams (observed and forecasted) on the last day.
Satellite : we have the satellite images of the situation (IR, WV, cloud classification, IR-Visible composite). We can send them to you to put on the VM.
Proposed tasks for Day 1 :
1 : Nadine MSLP and T2m (or better SST) tracking 15-20 september
2 : Satellite views on the 20th (provided by Etienne, if possible to put on the VM)
3 : Study of the horizontal maps (analysis + forecasts)
4 : Study and building of the vertical cross-sections (analysis + forecasts)
5 : Beyond D+5 deterministic scenario : MSLP only
Concerning the ensemble runs, 6 hourly data is OK. If you have space on the VM it would be interesting to go up to D+10 (or D+15). This would allow to try and look at the extreme member over Gibraltar on the 28 September.


 

Before leaving for a long weekend and maybe more, here is some input about the practical session on the 2nd day :

...