OpenIFS user workshop 2015

 

Preface

In these exercises we will look at a case study using a forecast ensemble. You will start by studying the evolution of the ECMWF HRES forecast and of the ECMWF ensemble forecast for this event. Then you will run your own OpenIFS forecast for a single ensemble member at lower resolution and work in groups to study the OpenIFS ensemble forecasts.

Starting up Metview

  • Type the following command in a terminal window:
Code Block
metview

Recap

Case study

St. Jude's storm (see separate sheet?)

Key points

  • Sources of uncertainty: initial analysis error and model error.
  • Initial analysis uncertainty: accounted for by use of Singular Vectors (SV) and Ensemble Data Assimilation (EDA).
  • Model uncertainty: accounted for by use of stochastic schemes. In IFS this means Stochastically Perturbed Parametrization Tendencies (SPPT) and the stochastic kinetic energy backscatter scheme (SKEB); a schematic illustration of the SPPT idea is given below.
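
The SPPT idea can be illustrated schematically: the net physics tendency is multiplied by (1 + r), where r is a random perturbation pattern. The toy numpy sketch below shows only this multiplicative-noise idea; all names are illustrative, and the operational scheme generates spatially and temporally correlated patterns rather than white noise.

Code Block
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of the SPPT idea: multiplicative noise on the net physics tendency.
# In the IFS the random pattern r is smooth in space and time; here it is just white noise.
tendency = rng.standard_normal((10, 10))   # stand-in for a net physics tendency field
r = 0.5 * rng.standard_normal((10, 10))    # stand-in for the SPPT random pattern
perturbed_tendency = (1.0 + r) * tendency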

ECMWF operational forecasts consist of:

  • HRES: T1279 (~16 km grid), highest-resolution 10-day forecast
  • ENS: ensemble (50 perturbed members + control), T639 for days 1-10, T319 for days 11-15.

 

Exercise 1. Evaluating the ECMWF forecast

(following the approach in the Metview training course on ensemble forecasts)

Task 1: Visualise operational forecast

Dates: 24th-29th October.

  1. How does the HRES forecast compare to the analysis?
  2. How does the HRES forecast compare to observations?

Task 2: Visualise the ensemble

  1. Visualise the ensemble mean
  2. How does the ensemble mean compare to HRES & the analyses?

Task 3: Visualise ensemble spread

Ensemble spread is the standard deviation of the ensemble members about the ensemble mean; it is a measure of the uncertainty in the forecast (a minimal computation sketch follows this task).

  1. Visualise the stamp map - are there any members that provide a better forecast?
  2. Visualise the spaghetti map - see how the members spread over the duration of the forecast.
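
A minimal sketch of how the ensemble mean and spread could be computed, assuming all members of a field (here MSLP) have been gathered into one GRIB file and that xarray with the cfgrib engine is available; the file, variable and dimension names are illustrative and depend on how the data were prepared.

Code Block
import xarray as xr

# Open a GRIB file containing all ensemble members of MSLP
# (cfgrib exposes the members through a "number" dimension for ensemble data).
ds = xr.open_dataset("ens_msl.grib", engine="cfgrib")

ens_mean = ds["msl"].mean(dim="number")    # ensemble mean
ens_spread = ds["msl"].std(dim="number")   # ensemble spread (std dev across members)

The same computation can equally be done with a Metview macro operating on the fieldset directly.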

Exercise 2. Creating an ensemble forecast using OpenIFS

(see separate handout?)

OpenIFS runs at T319 (the resolution of the second leg of ECMWF's forecast ensemble).

Each participant runs one ensemble member.

(possibly including Filip's coding exercise here).

At the end of this, each participant will have run a single ensemble member with SPPT+SKEB enabled (model uncertainty only).

Steps are needed to process the data for Metview - a Metview macro or the GRIB tools? (One possible approach is sketched below.)
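
One option is to gather the per-member GRIB output into a single file per field, which both the GRIB tools and Metview handle directly; GRIB messages can simply be appended to one another. The directory layout and file names in the sketch below are illustrative only.

Code Block
from pathlib import Path

# Append all per-member GRIB files into one file (GRIB messages concatenate cleanly).
# The "openifs_output/member_*.grib" layout is purely illustrative.
members = sorted(Path("openifs_output").glob("member_*.grib"))
with open("all_members.grib", "wb") as out:
    for m in members:
        out.write(m.read_bytes())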

The aim is to understand the impact of these different methods on the ensemble.

(point out that this is a single case study; the correct approach would be to use more cases to get better statistics)

Exercise 3. Verifying / Quantifying OpenIFS forecasts

Experiments available:

  • EDA+SV+SPPT+SKEB : nagc/gbzl in MARS
  • EDA+SV only : nagc/gc11 in MARS
  • SPPT+SKEB only : run by participants

Question: how best to organise the experiments? Does each user have their own account, or is one account with multiple directories used?

Tasks

  • Look at the ensemble mean and spread for all 3 cases.
  • How do they vary? Which gives the better spread? How does the forecast change with reducing lead time?
  • For this case only, does the forecast improve by including model uncertainty?
  • Compute the mean of the negative ensemble members and of the positive ensemble members and compare with the analysis. If you take the difference, is it zero? If not, why not?
  • Find the ensemble member that gives the best forecast and take the difference from the control. Step back to the beginning of the forecast and look to see where the difference originates from. How does this differ between the 3 OpenIFS runs? (With model uncertainty only, each initial state is identical, so differences develop only from the stochastic physics perturbations.)

Ensemble perturbations are applied in positive and negative pairs. For each perturbation computed, the initial fields are CNTL +/- PERT.

  • Choose an odd and an even ensemble member from one of the 3 OpenIFS forecasts (e.g. members 9 and 10). For different forecast steps, compute the difference of each member from the control forecast and then subtract those differences (see the sketch after this list). What is the result? Do you get zero? If not, why not? Use Z200 & Z500? MSLP?
  • Repeat looking at one of the other forecasts. How does it vary between the different forecasts?
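
A minimal sketch of one way to carry out the member-minus-control computation above, assuming the members and the control have been gathered into a single GRIB file readable with xarray/cfgrib; the file, variable and coordinate names are illustrative and depend on how the data were prepared (in particular, which member number holds the control).

Code Block
import xarray as xr

# Load Z500 for all members plus the control from one GRIB file.
ds = xr.open_dataset("all_members.grib", engine="cfgrib")
z500 = ds["z"].sel(isobaricInhPa=500)

cntl = z500.sel(number=0)             # control forecast (here assumed to be member 0)
d_odd = z500.sel(number=9) - cntl     # odd member minus control
d_even = z500.sel(number=10) - cntl   # even (paired) member minus control
result = d_odd - d_even               # the difference of the two differences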

Exercise 4. CDF/RMSE at different locations

Concepts to introduce

RMSE & CDF

What to look at for RMSE & CDF?
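
For reference: the RMSE of a forecast against the analysis is the square root of the mean squared difference over the points considered, and an empirical CDF at a location is obtained simply by sorting the ensemble member values. A minimal numpy sketch (function and argument names are illustrative):

Code Block
import numpy as np

def rmse(forecast, analysis):
    # Root-mean-square error over all points of the field (or over stations).
    return float(np.sqrt(np.mean((np.asarray(forecast) - np.asarray(analysis)) ** 2)))

def empirical_cdf(member_values):
    # Empirical CDF from the ensemble member values at one location.
    x = np.sort(np.asarray(member_values, dtype=float))
    p = np.arange(1, x.size + 1) / x.size
    return x, p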

Exercise 5. Forecast for Reading

Introduce Brier score?

Given the HRES & OpenIFS ensembles - what would the forecast be for Reading? (or something similar)

(needs explanation)

Discuss the concept of ensemble reliability.
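
For reference: the Brier score for a binary event (e.g. the 10m wind gust at Reading exceeding a chosen threshold) is the mean squared difference between the forecast probability - the fraction of members predicting the event - and the observed outcome (0 or 1). A minimal numpy sketch with illustrative inputs:

Code Block
import numpy as np

def brier_score(member_values, observations, threshold):
    # member_values: array of shape (n_cases, n_members) of ensemble forecasts at a location
    # observations:  array of shape (n_cases,) of verifying observations or analysis values
    member_values = np.asarray(member_values, dtype=float)
    observations = np.asarray(observations, dtype=float)
    p = np.mean(member_values > threshold, axis=1)   # forecast probability of the event
    o = (observations > threshold).astype(float)     # observed outcome (0 or 1)
    return float(np.mean((p - o) ** 2))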

  • Choose 3 locations, e.g. Reading, Amsterdam, Copenhagen.
  • Using 10m wind and/or wind gust data, plot CDF & RMSE curves using one of the OpenIFS forecasts and the ECMWF analysis data.
  • What is the difference and why?
  • Repeat for one of the other OpenIFS runs. Are there any differences? If so, why?

 

NOTES

These will disappear in the final handout.

Data & plots required
  • MSLP
  • 10m winds
  • T2m
  • Z500, Z200
  • wind gust: model & obs (n.b. model wind gust data is accumulated; Linus has wind gust obs in geopoint format)
  • precip: model & obs? <<don't bother with this>>
  • PV (decide which levels)

  • spaghetti plots
  • stamp maps
  • RMSE & CDF for MSLP at several locations (user chooses) (Linus has macros for these)
  • Brier score (for the exercise on how to forecast for Reading)? (maybe too much)
  • Difference maps: to plot fc - an and ens member - control (or ens member(i) - ens member(j))
  • step animation of spaghetti plots (and stamp maps?) to see the spread developing
  • Linus' vortex centre & tracking plot

Retrieve data from MARS for everything apart from the OpenIFS experiment that the participants will run themselves.

Centre the analyses on 28/10/15 12Z, with +/- 3 hr and 6 hr either side.

Exercise 3 notes


Linus suggested running a script that reorders the data to have 1 file with all ensemble members for each field of interest.

Exercise 4 notes

Linus explained that the OpenIFS runs will have differing amounts of uncertainty, so the spread should noticeably change for points near the track in the analysis. This is particularly because of (a) the timing error relative to the analysis and (b) the ensemble tracks being more to the north of the analysis track. So Amsterdam, for instance, should see much less spread.