WRF

WRF module available on Atos
There is a WRF model already installed on Atos. Use "module avail wrf" to see the available versions. The module is available only for the Intel programming environment and Mellanox Open MPI, because this combination has given the best performance. Consequently, the module is only available once the following prerequisite modules are loaded:
```
$> module load prgenv/intel hpcx-openmpi
$> module load wrf
```
To see all prerequisite conditions for the WRF module, use the module spider command:
```
$> module spider wrf
```
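The spider output lists the module combinations that must be loaded first. It looks roughly like the following (illustrative Lmod output; the exact wording and listed versions vary):

```
$> module spider wrf

  wrf:
    You will need to load all module(s) on any one of the lines below
    before the "wrf" module can be loaded:

      prgenv/intel  hpcx-openmpi
```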
If you need any additional versions installed, or a build under another programming environment, please contact us via the Support Portal.
...
WRF Example
To set up a working directory for running WPS, WRF, and UPP from the public install, use the WRF utility script build_wrf_workdir. This script creates the appropriate folder structure for running WPS and WRF under your $PERM folder:
```
$> module load wrf
$> build_wrf_workdir
$> cd $PERM/wrf
$> ls
run_IFS_4km
```
This will load WPS, WRF (including chemistry), and UPP.
A sample submission script, together with the model configuration for a simple case, is available in $PERM/wrf/run_IFS_4km/. run_wrf.sh is a self-contained script that runs the test case for April 2019 using IFS boundary conditions. Aside from it, the directory also contains:
namelist.wps.in | WPS namelist template
namelist.input.in | WRF namelist template
run_wps.sbatch.in | script for submitting ungrib, geogrid, and metgrid using 32 tasks
run_real.sbatch.in | script for submitting real.exe using 128 tasks
run_wrf.sbatch.in | script for submitting wrf.exe using 256 tasks
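For orientation, the submission scripts follow the usual SLURM pattern. A run_wps.sbatch-style script could look roughly like the sketch below; the templates shipped in the directory are authoritative, and the SBATCH values here are illustrative only:

```
#!/bin/bash
#SBATCH --job-name=wps
#SBATCH --ntasks=32
#SBATCH --time=01:00:00

module load prgenv/intel hpcx-openmpi wrf

# WPS pre-processing chain: decode the GRIB input, build the domain,
# and interpolate the fields onto the model grid
./ungrib.exe          # serial
srun ./geogrid.exe    # parallel
srun ./metgrid.exe    # parallel
```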
To run the sample:
```
$> cd $PERM/wrf/run_IFS_4km/
$> ./run_wrf.sh
```
This script creates the model run directory in $SCRATCH/wrf/run/, changes into it, copies all required input data, creates links, and executes all model components one by one.
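In outline, the driver performs steps along the following lines (a simplified sketch, not the actual script; the run-directory name and linked paths are hypothetical):

```
# sketch of the run_wrf.sh workflow
RUNDIR=$SCRATCH/wrf/run/test_case        # hypothetical run-directory name
mkdir -p $RUNDIR && cd $RUNDIR

cp $PERM/wrf/run_IFS_4km/namelist.* .    # model configuration
ln -s /path/to/IFS/boundary/files .      # input data (actual path set by the script)

sbatch --wait run_wps.sbatch             # ungrib, geogrid, metgrid
sbatch --wait run_real.sbatch            # real.exe
sbatch --wait run_wrf.sbatch             # wrf.exe
```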
Please note: the build_wrf_workdir script should be executed only the first time you load a specific version of the module, in order to create the WRF structure in your $PERM directory. Every time you execute it, it will overwrite the existing $PERM/wrf/run_IFS_4km/ structure and move the existing directory:
From: | To: |
run_IFS_4km/ | run_IFS_4km_old/ |
Once it has been executed, on subsequent occasions you only need to load the module to run the model.
Geogrid Data
Geogrid data for various resolutions (30", 2', 5', and 10') is currently available in /ec/res4/hpcperm/usbk/geog/.
The 'geog_data_path' variable in WPS's namelist.wps has already been configured to use this geogrid data. Please note that this location could be unavailable for 3-4 hours per year during system sessions. Consequently, any operational or "Time Critical" work should not be based on it.
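You can verify the entry in the namelist template, e.g. (output line shown for illustration):

```
$> grep geog_data_path $PERM/wrf/run_IFS_4km/namelist.wps.in
 geog_data_path = '/ec/res4/hpcperm/usbk/geog/'
```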
Boundary Conditions
Boundary conditions from IFS HiRes are provided for 30/04/2019 00 UTC + 6 hours, for a domain covering most of Europe. These BC files are already linked inside the run_wrf.sh script. For other geographical regions, BCs can be downloaded from a publicly available FTP server:
For more information, see ECMWF WRF Test Data.
How to install your own version of WRF on Atos
Intel compilers & Open MPI
To install WRF with Intel compilers, the following modules need to be pre-loaded:
```
$> module load prgenv/intel netcdf4 hpcx-openmpi jasper/2.0.14
$> module list

Currently Loaded Modules:
  1) intel   2) prgenv/intel   3) netcdf4   4) hpcx-openmpi   5) jasper/2.0.14
```
WRF needs to be pointed to the NetCDF location manually:
```
export NETCDF=$NETCDF4_DIR
```
In general, on Atos the -rpath option is used to link shared libraries. However, this is difficult to use with WRF because of the structure of its installation scripts, which use the NETCDF variable to link the NetCDF libraries. Consequently, the run script needs to export the NetCDF library path:
```
module load netcdf4
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$NETCDF4_DIR/lib
```
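To confirm that the executable then resolves the NetCDF libraries at run time, one option is to check it with ldd (the path to wrf.exe is illustrative; it assumes the standard WRF build layout):

```
$> ldd ./main/wrf.exe | grep -i netcdf
```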
Compilation should be submitted as a batch job; an example is provided in the WRF module:
```
$WRFPATH/WRF/compile.sh
```
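For reference, such a compilation job could be structured along these lines. This is a minimal sketch assuming WRF's standard configure/compile workflow; $WRFPATH/WRF/compile.sh remains the authoritative example, and the SBATCH values and source path here are illustrative:

```
#!/bin/bash
#SBATCH --job-name=compile_wrf
#SBATCH --ntasks=8
#SBATCH --time=03:00:00

# assumes ./configure was already run interactively and configure.wrf exists
module load prgenv/intel netcdf4 hpcx-openmpi jasper/2.0.14
export NETCDF=$NETCDF4_DIR

cd $PERM/WRF                             # hypothetical source location
./compile -j 8 em_real >& compile.log    # build the real-data case
```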
WPS:
In modules such as netcdf4 on Atos, libraries are linked using environment variables:
```
setenv("NETCDF4_LIB","-L/usr/local/apps/netcdf4/4.7.4/INTEL/19.1/lib -Wl,-rpath,/usr/local/apps/netcdf4/4.7.4/INTEL/19.1/lib -lnetcdff -lnetcdf_c++ -lnetcdf")
```
To make use of this approach, some modifications are needed to the native configure* files.
In the case of WPS-master/configure, the following line should be replaced (the original is shown commented out):
```
#$FC ${FFLAGS} fort_netcdf.f -o fort_netcdf -L${NETCDF}/lib $NETCDFF -lnetcdf > /dev/null 2>&1
$FC ${FFLAGS} fort_netcdf.f -o fort_netcdf $NETCDF4_LIB > /dev/null 2>&1
```
After the ./configure step, configure.wps has to be edited as well:
```
# -I$(NETCDF)/include
$(NETCDF4_INCLUDE)

# -L$(NETCDF)/lib -lnetcdff -lnetcdf
$(NETCDF4_LIB)

#COMPRESSION_LIBS = -L/glade/u/home/wrfhelp/UNGRIB_LIBRARIES/lib -ljasper -lpng -lz
#COMPRESSION_INC  = -I/glade/u/home/wrfhelp/UNGRIB_LIBRARIES/include
COMPRESSION_LIBS = $(JASPER_LIB) \
                   -L/usr/lib64 -lpng -lz
COMPRESSION_INC  = $(JASPER_INCLUDE) \
                   -I/usr/include
```
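Before compiling, a quick sanity check that the substitutions are in place:

```
$> grep -nE "NETCDF4|COMPRESSION" configure.wps
```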
An example compilation job is also available in the WRF module:
```
$WRFPATH/WPS-master/compile.sh
```
...
Several versions of the WRF model have been installed and tested on CCA/CCB using the Cray and Intel compilers. Model efficiency is very similar with the two options, and compilation is a bit faster with Intel.
To install it using the Cray compiler (default), follow the directions in the official WRF user guide. Prior to installation, the NETCDF variable has to be set:

```
$> module load cray-netcdf
$> export NETCDF=$NETCDF_DIR
```
Configuration option "Cray XE and XC CLE/Linux x86_64, Cray CCE compiler (dmpar)" should be selected.
To install it using the Intel compiler, the programming environment first has to be switched from Cray (default) to Intel. Prior to installation, the NETCDF variable has to be set:

```
$> prgenvswitchto intel
$> module load cray-netcdf
$> export NETCDF=$NETCDF_DIR
```
Configuration option "Linux x86_64, ifort compiler with icc (dmpar)" should be selected.
After that, the following changes should be made to configure.wps:
...
Everything else should be done following the official WRF user guide.
Computational cost
...
Intel compilers & Intel MPI
There are only a few differences in the compilation process with Intel MPI:
- The following modules need to be loaded before compilation:
```
$> module list

Currently Loaded Modules:
  1) intel/19.1.2   2) prgenv/intel   3) netcdf4   4) jasper/2.0.14   5) intel-mpi
```
- In configure.wrf and configure.wps make the following settings:
```
SFC   = ifort
SCC   = icc
DM_FC = mpiifort
DM_CC = mpiicc
```
Everything else is identical to compilation with Open MPI.
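To confirm that the Intel MPI compiler wrappers are the ones being picked up, a quick check (module names as above):

```
$> module load prgenv/intel intel-mpi
$> which mpiifort mpiicc
```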
Please note that this is an approximate computational cost; the actual cost of running WRF depends on the physics selected. Consequently, your numbers may differ from those given in the table.
As guidance, doubling "nx" or "ny" makes the computational cost roughly 2 times higher. Doubling the model resolution while keeping the same domain size makes it roughly 2*2*2 = 8 times more expensive, since the number of grid points doubles in each horizontal direction and the time step must be halved.