Example config.h settings (bash):
OIFS_EXPID="i4xc"       # your experiment ID
OIFS_NPROC=8            # the number of MPI tasks
OIFS_NTHREAD=4          # the number of OpenMP threads
OIFS_GRIDTYPE="l"       # the grid type, either 'l' for linear reduced grid, or 'o' for the cubic octahedral grid
OIFS_RES="255"          # the spectral grid resolution
OIFS_NAMELIST='fort.4'  # the name of the atmospheric namelist file (the default is fort.4, so this line could be omitted)
OIFS_EXE="${OIFS_HOME}/build/bin/ifsMASTER.DP"  # the name and location of the model binary executable
OIFS_PPROC=true         # enable postprocessing of model output after the model run
OUTPUT_ROOT=$(pwd)      # folder where pproc output is created (only used if OIFS_PPROC=true). In this case an output folder is created in the experiment directory. 
LFORCE=false            # overwrite existing symbolic links in the experiment directory
LAUNCH=""               # the platform specific run command for the MPI environment (e.g. "mpirun", "srun", etc). 

Running the experiment

After all edits to the namelists and config.h have been completed, the model run can be started.

Depending on the available hardware, experiments can be run either interactively or as a batch job.

To run the experiment interactively, run ./oifs-run in your terminal. If no command-line parameters are provided with the oifs-run command, the values from config.h will be used.
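As a sketch (the exact command-line options accepted by oifs-run may vary between versions; the --runcmd parameter is assumed here to override the LAUNCH setting from config.h):

```bash
# Run interactively from the experiment directory,
# taking all settings from config.h:
./oifs-run

# Alternatively, override the launch command for this run only
# (--runcmd takes precedence over LAUNCH in config.h):
./oifs-run --runcmd "srun -c8 --mem=64GB --time=60"
```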

On the ECMWF hpc2020, running the model script interactively should be fine for lower grid resolutions up to T255L91. If the LAUNCH variable in config.h remains empty (and no --runcmd parameter is provided on the command line), the oifs-run script will use its default launch command: srun -c${OIFS_NPROC} --mem=64GB --time=60, which works fine for OIFS_NPROC=4 or 8 with experiment i4xc.
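If you prefer to set the launch command explicitly rather than rely on the default, the equivalent setting in config.h would look like this (a config fragment, using the hpc2020 default shown above):

```bash
# Explicit launch command in config.h, equivalent to the
# oifs-run default on the ECMWF hpc2020:
LAUNCH="srun -c${OIFS_NPROC} --mem=64GB --time=60"
```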

Alternatively, a job script can be used to submit the run to the batch scheduler. An example script for the SLURM batch scheduler used on the ECMWF hpc2020 is provided here: $OIFS_HOME/bin/run.atos.sh

This script should be edited as required. The LAUNCH command in this script is just "srun" without any further options, as the parallel environment settings are provided through the script's batch headers.
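The shape of such a job script can be sketched as follows. This is an illustrative config fragment only, not the contents of run.atos.sh; the job name, QOS, and time limit are placeholder assumptions, and the task/thread counts simply mirror the config.h values above:

```bash
#!/bin/bash
# Illustrative SLURM job script sketch (consult run.atos.sh for the real one).
#SBATCH --job-name=i4xc          # placeholder job name
#SBATCH --qos=np                 # assumed QOS; site-specific
#SBATCH --ntasks=8               # matches OIFS_NPROC
#SBATCH --cpus-per-task=4        # matches OIFS_NTHREAD
#SBATCH --time=01:00:00          # placeholder wall-clock limit

# The parallel environment is defined by the #SBATCH headers above,
# so a bare "srun" is sufficient as the launch command:
./oifs-run --runcmd "srun"
```

Submitting the script (e.g. with sbatch run.atos.sh) then lets the scheduler allocate the MPI tasks and OpenMP threads declared in the headers.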