
Section


Column


 

Panel
borderColorblue
bgColor#e6ffff
borderWidth1
borderStylesolid

This tutorial explains how to create a new experiment from an existing one by copying the initial files and setting up the experiment directory.

ECMWF HPCF

ECMWF operates a Cray-based High Performance Computing Facility (HPCF). It consists of two identical Cray XC40 clusters, each with its own storage but with equal access to the high-performance working storage of the other cluster. This provides the benefit of having one very large system, while the dual clusters add significantly to the resilience of the system.

For more details about the ECMWF HPCF, please see: http://www.ecmwf.int/en/computing/our-facilities/supercomputer

All the OpenIFS experiments will make use of the ECMWF HPCF. During the workshop you will log in to the 'front-end' machines and submit 'batch jobs' to the HPCF.

The scripts described below are provided to make preparing and submitting ensemble batch jobs easier. Useful commands are listed at the end of this tutorial.


Column
width40%


Panel
borderStyledotted
titleOn this page...

Table of Contents



Login to ECMWF Cray High Performance Computing Facility (HPCF)

Each group or participant will have a training user account on the ECMWF system, beginning with 'troifs'. This is different from the user account on the classroom computers.

...

Code Block
languagebash
ssh troifsXX@ecaccess.ecmwf.int		               # substitute your user id for XX
troifsXX@ecaccess.ecmwf.int's password:              # give your password when prompted, using the SecurID hardware token (one-time password)

Next, log in to the Cray login nodes. When prompted, select 'cca':

Code Block
languagebash
Select hostname (ecgate, cca, ccb) [ecgate]: cca
troifsXX@cca-login2:~>

'cca' is the "front-end" computer of the ECMWF Cray high-performance computing facility.

The contents of the account should look like:

...

Warning

Please do not store large files in your home directory on the gateway or the HPCF login nodes. All OpenIFS experiments and large files should be stored under 'scratch'. The 'quota' command can be used to check available space.
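For example, a quick way to move to your scratch space and check how much room remains is shown below. This is only a sketch and assumes the standard ECMWF $SCRATCH environment variable is set in your login environment.

Code Block
languagebash
titleCheck available space (illustrative)
cd $SCRATCH        # $SCRATCH points to your scratch directory on the HPCF
quota              # show disk quotas and current usage for your account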

Creating the experiment initial files

This section will explain how to copy and change the experiment id.

...

You do not need to run these experiments; the forecasts from a 10-member ensemble are made available on the classroom computers.

Copy previous experiment initial files

Before creating the experiment, we first need to create the initial data. If you plan to rerun the ob00 or clim experiments and use the initial data 'as-is' you can skip this step.

...

Choose a starting date from the range 1st Nov to 15th Nov (00Z)

Make a copy of the ob00 data to your /scratch directory:

Code Block
languagebash
titleCopy initial data from the previous experiment (choose a date from 1-15th Nov)
cd scratch				                # starting from your home directory
mkdir -p inidir/ob01 	                # put all your personal initial data under 'inidir'; 'ob01' is your new experiment id
cd inidir/ob01
cp -rL /perm/rd/openifs/oifs_workshop_2017/expts-inidata/ob00/2015110100 .		# the final character is a 'fullstop': '.' means "copy to here"

...

Ignore any errors about files not being copied because of permission problems.

Code Block
languagebash
titleTo copy multiple dates. Example 1: first 5 days
cd inidir/ob01
cp -rL /perm/rd/openifs/oifs_workshop_2017/expts-inidata/ob00/2015110[1-5]* .     # this will take some time

Please only copy the initial dates you intend to use. Each date uses 0.5 Gbyte of file storage.
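To see how much space your copied dates are using, the standard 'du' command can be used; the path below assumes the directory layout created above.

Code Block
languagebash
titleCheck the size of your copied initial data
du -sh ~/scratch/inidir/ob01      # total size of all the dates copied so far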

Code Block
languagebash
titleTo copy multiple dates. Example 2: copy all dates
cd inidir/ob01
cp -rL /perm/rd/openifs/oifs_workshop_2017/expts-inidata/ob00/201511* .   # this will be slow!

Change experiment id

OpenIFS forecasts are identified by an 'experiment id', a four-letter string.

An experiment could consist of a single forecast date or multiple starting dates. It can be of any length and could also include a restarted forecast. However, an experiment only has one horizontal and vertical resolution.


Info

It might help to put a README file in the experiment directory to remember what the aim of each experiment is.

If you are using multiple dates, remember to change the experiment id for each date.

Initial files

The initial files containing the 2D and 3D fields needed to start the forecast begin with ICM*: ICMGG* are the initial gridpoint files, ICMSH* are the initial spectral fields, and ICMCL* is the file containing the climatological forcing fields.

...

Code Block
languagebash
grib_ls ICMGGob00INIT
grib_ls ICMSHob00INIT

OpenIFS namelist

The file fort.4 is the model 'NAMELIST'. It contains a list of variable settings, or 'switches', that control what the model does. These variables are grouped into separate Fortran namelists.
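To get a quick overview of the namelist groups present in fort.4, a simple grep is enough. This is only an illustration; the exact group names depend on the model version.

Code Block
languagebash
titleList the namelist groups in fort.4
grep -n '^ *&' fort.4             # each '&NAME' line starts a Fortran namelist group
grep -n -i 'NENSFNB' fort.4       # example: locate a particular switch by name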

...

The wave model, WAM, also has a separate namelist. For this workshop, none of the switches in this file needs to be changed.

Listing the experiment id

The command 'exptid' can be used to check and change the experiment id in the GRIB files.

...

Note that the ICMCL and ICMGG*INIT files both have an experimentVersionNumber key of '0001'. This is used for climatological fields.
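You can confirm this yourself with grib_ls, which can print selected GRIB keys. The key names below are standard ecCodes GRIB keys.

Code Block
languagebash
titleInspect the experiment id version number in the GRIB headers
grib_ls -p shortName,experimentVersionNumber ICMGGob00INIT | head
grib_ls -p shortName,experimentVersionNumber ICMCLob00INIT | head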

Set new experiment id

Use the exptid command to set the new experiment id to 'ob01':

...

Code Block
languagebash
troifs0@cca-login3:> rm ICM*ob00*
rm: remove regular file `ICMCLob00INIT'? y
rm: remove regular file `ICMGGob00INIT'? y
rm: remove regular file `ICMGGob00INIUA'? y
rm: remove regular file `ICMSHob00INIT'? y

Further modification to initial files

Now that the initial files with the correct experiment id have been prepared, further modifications to the input files can be made.

To modify the SST, for example, please see the tutorial on 'Modifying the SST'.

Creating the forecast experiment

Once the initial data directory is prepared, the next step is to create the experiment directory structure where the model will run from. This is not the same location as the initial files just created.

To create the experiment directory structure, use the 'createEX' command. To see what arguments it takes, use the command:

...

Code Block
languagebash
createEX --date 2015110100 -e ob01

This would create the ob01 forecast directories for a single date. The default is to create the experiment directory in your 'scratch' directory with the same name as the experiment id, e.g. $HOME/scratch/ob01.

New experiment: ob01 - single date, 10 members

Following the creation of the initial files for the experiment id 'ob01' given above, let's create a forecast experiment with 10 ensemble members for a single forecast date:

Code Block
languagebash
troifs0@cca-login3:~> createEX -d 2015110100 -e ob01 -i scratch/inidir -m 10


We tell this command that it can find our new initial files in scratch/inidir. By default, it will also create the experiment directory in scratch. Be careful not to confuse the initial data directory with the experiment directory; they should be kept separate.

Info

Note, one ensemble member '00' is always created, so the -m argument lists the total number of members and defaults to 1. Directories are numbered starting from zero.

Only use 10 members in total, otherwise you will not be able to compare with the climatological SST experiment, which uses 10 members.

This generates the output:

Code Block
languagebash
Creating directory structure for experiment ob01 in directory /scratch/ectrain/troifs0/ob01/...
Copying files from directory (inidir):  scratch/inidir
Date:  2015110100
.........
Created forecast experiment directory : /scratch/ectrain/troifs0/ob01/2015110100/05/
Linking files and copying namelist.
Using IFS data directory: /fwsm/lb/project/openifs/ifsdata/
Created forecast experiment directory : /scratch/ectrain/troifs0/ob01/2015110100/06/
Linking files and copying namelist.
Using IFS data directory: /fwsm/lb/project/openifs/ifsdata/
Created forecast experiment directory : /scratch/ectrain/troifs0/ob01/2015110100/07/
Linking files and copying namelist.
Using IFS data directory: /fwsm/lb/project/openifs/ifsdata/
Created forecast experiment directory : /scratch/ectrain/troifs0/ob01/2015110100/08/
Linking files and copying namelist.
Using IFS data directory: /fwsm/lb/project/openifs/ifsdata/
Created forecast experiment directory : /scratch/ectrain/troifs0/ob01/2015110100/09/
Linking files and copying namelist.
Using IFS data directory: /fwsm/lb/project/openifs/ifsdata/
All forecast experiment directories done: /scratch/ectrain/troifs0/ob01/ ready.


...

Directory structure

If we look in the directory /scratch/ectrain/troifs0/ob01/2015110100, we see:

Code Block
languagebash
troifs0@ccb-login3:> cd scratch/ob01/2015110100
troifs0@ccb-login3:> ls
00  03  06  09  ICMCLob50INIT   ICMSHob50INIT  specwavein       wam_subgrid_0
01  04  07      ICMGGob50INIT   cdwavein       uwavein          wam_subgrid_1
02  05  08      ICMGGob50INIUA  sfcwindin      wam_grid_tables  wam_subgrid_2

Each member will be run in the numbered directories: "00", "01", "02", and so on. These contain:

Code Block
languagebash
troifs0@ccb-login3:> ls -l 00
total 16
lrwxrwxrwx 1 troifs0 ectrain   52 May 25 18:15 255l_2 -> /fwsm/lb/project/openifs/ifsdata/40r1/climate/255l_2
lrwxrwxrwx 1 troifs0 ectrain   16 May 25 18:15 ICMCLob50INIT -> ../ICMCLob50INIT
lrwxrwxrwx 1 troifs0 ectrain   16 May 25 18:15 ICMGGob50INIT -> ../ICMGGob50INIT
lrwxrwxrwx 1 troifs0 ectrain   17 May 25 18:15 ICMGGob50INIUA -> ../ICMGGob50INIUA
lrwxrwxrwx 1 troifs0 ectrain   16 May 25 18:15 ICMSHob50INIT -> ../ICMSHob50INIT
lrwxrwxrwx 1 troifs0 ectrain   11 May 25 18:15 cdwavein -> ../cdwavein
-rw-r----- 1 troifs0 ectrain 9886 May 25 18:15 fort.4
lrwxrwxrwx 1 troifs0 ectrain   49 May 25 18:15 ifsdata -> /fwsm/lb/project/openifs/ifsdata/40r1/climatology
lrwxrwxrwx 1 troifs0 ectrain   40 May 25 18:15 rtables -> /fwsm/lb/project/openifs/ifsdata/rtables
lrwxrwxrwx 1 troifs0 ectrain   12 May 25 18:15 sfcwindin -> ../sfcwindin
lrwxrwxrwx 1 troifs0 ectrain   13 May 25 18:15 specwavein -> ../specwavein
lrwxrwxrwx 1 troifs0 ectrain   10 May 25 18:15 uwavein -> ../uwavein
lrwxrwxrwx 1 troifs0 ectrain   18 May 25 18:15 wam_grid_tables -> ../wam_grid_tables
-rw-r----- 1 troifs0 ectrain 2220 May 25 18:15 wam_namelist
lrwxrwxrwx 1 troifs0 ectrain   16 May 25 18:15 wam_subgrid_0 -> ../wam_subgrid_0
lrwxrwxrwx 1 troifs0 ectrain   16 May 25 18:15 wam_subgrid_1 -> ../wam_subgrid_1
lrwxrwxrwx 1 troifs0 ectrain   16 May 25 18:15 wam_subgrid_2 -> ../wam_subgrid_2

...

Only the model namelists (fort.4 for the atmosphere and wam_namelist for the wave model) are individual to each forecast ensemble member directory.
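You can verify this by comparing the namelists of two members. This is only a sketch, assuming the experiment layout created above.

Code Block
languagebash
titleCompare the namelists of two ensemble members
cd ~/scratch/ob01/2015110100
diff 00/fort.4 01/fort.4     # once run_all_ens has configured the members, only NENSFNB should differ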

Running the forecast experiment

The command 'oifs_run' is normally used to create and submit the batch job to run the model. This command has a large number of options. For this workshop, this command has been configured with the correct defaults for the T255 resolution seasonal length forecasts.

However, it would need to be run in each forecast member directory, which would be tedious.

The command 'run_all_ens' has been created to speed up submitting the batch jobs to run all the forecast members with one command.

For our 'ob01' experiment, this command creates all the jobs, correctly configures the namelists and submits the jobs. There will be 10 jobs, one forecast for each ensemble member:

...

Code Block
languagebash
.........
Ensemble member: 05
60c60
<       NENSFNB=5,
---
>       NENSFNB=0,                 ! Ensemble forecast number
Running command: oifs_run -e ob01 -x 1 -q
Copied /home/ectrain/troifs0/bin/cce-opt/master.exe to current directory
Using existing namelist in : fort.4
OpenIFS job created: job1
Submitted job: 8430146.ccapar
......

Setting the ensemble member

The command 'run_all_ens' does this step; no action is needed. This section explains the use of the ensemble member number by IFS.

In IFS, each ensemble member uses the stochastic physics scheme to generate uncertainty. A random number 'seed' is used by the stochastic scheme to generate a different forecast.

This random number seed is changed by altering the ensemble member value, NENSFNB, in the model's namelist file, fort.4. Each ensemble member must have a unique number, and therefore a unique random number seed, in order to produce a different forecast.

...

The only namelist variable that needs changing in these experiments is NENSFNB, which the run_all_ens command does for you.
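If you ever need to check or set NENSFNB by hand (normally unnecessary, since run_all_ens does it), something like the sketch below would work. The sed edit is only illustrative and assumes NENSFNB appears as 'NENSFNB=<number>' in fort.4.

Code Block
languagebash
titleManually inspect or set NENSFNB (normally done by run_all_ens)
cd ~/scratch/ob01/2015110100/03               # ensemble member directory '03'
grep NENSFNB fort.4                           # show the current ensemble member number
sed -i 's/NENSFNB=[0-9]*/NENSFNB=3/' fort.4   # set it to match the member number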


Note

The stochastic backscatter scheme LSTOPH_SPBS should be disabled for OpenIFS 43r3 and beyond, as its use has been deprecated since 40r1. Its impact is very small, even less so on the newer cubic grids, and its use is not recommended.


Create and submit Cray batch job

This step is done by the run_all_ens command. No user action is needed. This section provides more information on the oifs_run command.

To create the batch job, use the oifs_run command. This creates a small batch job file ready to submit.

The run_all_ens command runs oifs_run for each ensemble member, but you may need to use the oifs_run command yourself if the batch job script 'job' in each ensemble directory needs to be recreated.

Info

If you need to rerun the experiment with no changes, simply resubmit the job with the command 'qsub job' rather than using oifs_run to recreate the job.


Panel
borderColorgrey
bgColorwhite
borderWidth1
titleBGColorwhite
borderStyledotted
titleoifs_run command

oifs_run -e exptid  [-l namelist] [-f fcast len]

The common arguments to use are: -e, -l, -f. Other options are available which can usually be left to their default values.

Note! With the -f argument, the length of the forecast can be specified in various units: for example, d10 means '10 days'.
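For example, a typical invocation might look like the following. The forecast length shown here is only illustrative; use the default configured for the workshop unless told otherwise.

Code Block
languagebash
titleExample oifs_run invocation (illustrative)
oifs_run -e ob01 -f d10        # recreate the job for experiment ob01 with a 10-day forecast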

Checking the model output

During the course of the model run, files with names like ICMGGob01+00000, ICMGGob01+00012 etc will appear in the ensemble member directory.

These are the model output files; there is one per output step. In these experiments the output interval is 12 hours. It can be changed in the namelist fort.4, but for this workshop it is strongly recommended to keep the output interval at 12 hrs, as this saves storage space.
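To see what a single output step contains, grib_ls can again be used on one of the output files. The file names below assume experiment id ob01.

Code Block
languagebash
titleInspect one output step
ls ICMGGob01+0*                                                  # one gridpoint file per 12-hour output step
grib_ls -p shortName,level,dataDate,stepRange ICMGGob01+00012    # list the fields at step +12h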

The qstat command shows the status of the submitted job. When a job completes, it will no longer appear in the batch queue.

After the job has completed, there will be a 'batch log file' in the ensemble directory:

Code Block
languagebash
titleExample batch job logfile
oifs_troifs1.o1481792

Using your favourite text editor, check this file to make sure the model has run correctly. It's usually best to start from the bottom of the file and scroll up.
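A quick way to do this from the command line is shown below; the log file name will differ for your job.

Code Block
languagebash
titleQuick checks on the batch log file
tail -n 40 oifs_*.o*                   # look at the end of the batch log
grep -iE 'error|fail' oifs_*.o*        # search the log for obvious error messages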

...

the model run has failed.

The output from each model run goes into a directory named 'output1'. The '1' here is the run number and can be changed.

If the model fails, there are 2 files to look at in the 'output' directory:

...

Panel
bgColorwhite
titleBGColorwhite
titleContents of output directory after successful run

When the model completes the forecast successfully, the following files will be found in the output directory. ICMSH are the spectral fields, ICMGG are the gridpoint fields.

ICMGGclim_10u  ICMGGclim_ci    ICMGGclim_lsp   ICMGGclim_sd    ICMGGclim_sst   ICMGGclim_stl4   ICMGGclim_swvl3  ICMGGclim_tsr   ICMSHclim_sp  NODE.001_01

ICMGGclim_10v  ICMGGclim_cp    ICMGGclim_msl   ICMGGclim_slhf  ICMGGclim_stl1  ICMGGclim_str    ICMGGclim_swvl4  ICMGGclim_ttr   ICMSHclim_t   ifs.stat

ICMGGclim_2d   ICMGGclim_e     ICMGGclim_nsss  ICMGGclim_sshf  ICMGGclim_stl2  ICMGGclim_swvl1  ICMGGclim_tcc    ICMSHclim_d     ICMSHclim_vo  oifs.log

ICMGGclim_2t   ICMGGclim_ewss  ICMGGclim_q     ICMGGclim_ssr   ICMGGclim_stl3  ICMGGclim_swvl2  ICMGGclim_tp     ICMSHclim_lnsp  ICMSHclim_z


Rerun the model job

A small file 'job1' (or jobN, where N is the run number you used) is created in each of the ensemble member directories: 00, 01, 02, etc.

If for any reason the model fails, once you have determined and corrected the problem, the run can be resubmitted with:

Code Block
qsub job1

There is no need to rerun the run_all_ens command as this will resubmit ALL the ensemble members again.

Anchor
postprocessing
postprocessing
Postprocessing: preparing monthly means for plotting with Metview

The next step is to process the output of the successful run and create the monthly mean fields that can be transferred to ICTP for plotting using Metview.

A command, oifs_to_mv, is available to compute the monthly mean fields ready to be plotted. This command uses a Metview script to do the processing.

As the data files and Metview scripts have been optimized by selecting specific areas, this script does the work for you. The command takes a small number of arguments. For example, for an experiment labelled 'ob01', located in ~/scratch/ob01, type this command:

Code Block
languagebash
titleTo prepare the OpenIFS output for Metview, run this command:
 oifs_to_mv -e ob01 -d 2015110100 -i ~/scratch/ob01

This will generate files in scratch/metview.

If errors are encountered please let us know. If the command completes successfully (it will take a while), this task is complete. There is no need to inspect the files.

Info

This script may take a few minutes to run.


This command only processes one date at a time. It needs to be run separately for multiple dates in the same experiment id.
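If you have several starting dates in the same experiment, a simple shell loop will process them one after another. This is a sketch using the same arguments as above; substitute your own dates.

Code Block
languagebash
titleProcess several dates in one go (sketch)
for date in 2015110100 2015110200 2015110300; do
    oifs_to_mv -e ob01 -d $date -i ~/scratch/ob01
done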

Monthly mean files

After the oifs_to_mv command finishes, the monthly mean files can be found in the directory 'mmeans' inside the top level date directory. For example, for experiment id ob01, date 2015110100, the directory is: ob01/2015110100/mmeans.

Inside this directory there is one file per parameter, each containing 5 monthly averages (Dec, Jan, Feb, March and April), with filenames such as oif_ob01_u_20151101_00.grib.
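If you are curious, you can list the directory and check one of the files with grib_ls; inspecting the files is optional.

Code Block
languagebash
titleList the monthly mean files (optional check)
ls ~/scratch/ob01/2015110100/mmeans
grib_ls -p shortName,dataDate ~/scratch/ob01/2015110100/mmeans/oif_ob01_u_20151101_00.grib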

Transfer to ICTP classroom PCs

Once the script has finished, the monthly mean files can be transferred to ICTP with the sftp command, to be used on the classroom PCs with Metview.
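From the HPCF, the transfer could look something like the sketch below. The classroom hostname and username are placeholders; use the details given during the workshop.

Code Block
languagebash
titleExample sftp transfer (hostname and username are placeholders)
cd ~/scratch/ob01/2015110100/mmeans
sftp student@classroom-pc.example      # replace with the classroom PC address and account you were given
sftp> put oif_ob01_*.grib              # upload the monthly mean files
sftp> quit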

Useful Unix commands

 


Code Block
titleSubmit a job to the Cray..
qsub job

...

Code Block
Quota for $HOME and $PERM:
Disk quotas for user troifs0 (uid 16144): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
cnasa1:/vol/home
                   226M    480M    500M             132   20000   22000        
Disk quotas for user troifs0 (uid 16144): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
cnasa2:/vol/perm
                      0  26624M  27648M               1    200k    210k        

Quota for $SCRATCH ($TEMP) including $SCRATCHDIR ($TMPDIR):
Disk quotas for user troifs0 (uid 16144):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
  /lus/snx11062 767099824  32212254720 32212254720       -  116570  5000000 5000000       -
Disk quotas for group ectrain (gid 1400):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
  /lus/snx11062 767105640       0       0       -  118078       0       0       -

...


Excerpt Include
Credits
Credits
nopaneltrue