To install WRF with Intel compilers, the following modules need to be pre-loaded:
$> module load prgenv/intel netcdf4 openmpi jasper
$> module list

Currently Loaded Modules:
  1) intel/19.1.2   2) prgenv/intel   3) netcdf4/4.7.4   4) openmpi/4.0.5.2   5) jasper/2.0.14
WRF needs to be pointed to the NetCDF installation manually:

export NETCDF=$NETCDF_DIR
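To check that this points to a usable installation (assuming, as above, that the netcdf4 module sets NETCDF_DIR), one can verify that the Fortran interface files are in place:

$> echo $NETCDF
$> ls $NETCDF/include/netcdf.inc $NETCDF/lib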
In general, on ATOS the -rpath option is used to link shared libraries. However, this is difficult to do with WRF because of the structure of its installation scripts, which use the NETCDF variable to link the NetCDF libraries. Consequently, the NetCDF library path has to be exported in the run script:

module load netcdf4
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$NETCDF4_DIR/lib
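As an illustration only, a minimal run-script sketch with this export could look as follows; the Slurm directives, task count and launcher are placeholders, not a recommendation:

#!/bin/bash
#SBATCH --job-name=wrf     # illustrative Slurm settings only
#SBATCH --ntasks=128

module load prgenv/intel netcdf4 openmpi
# make the NetCDF shared libraries visible to wrf.exe at run time
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$NETCDF4_DIR/lib

srun ./wrf.exe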
csh is not installed on TEMS. To execute csh scripts, one can use the locally installed tcsh:

/perm/usxa/tems/apps/tcsh/6.22.03/bin/tcsh ./compile
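For example, building the standard real-data case of WRF (em_real) could then be done as follows; the log-file redirection is only illustrative:

$> /perm/usxa/tems/apps/tcsh/6.22.03/bin/tcsh ./compile em_real > compile.log 2>&1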
WPS:
In modules such as netcdf4 on TEMS, libraries are linked through environment variables, for example:

setenv("NETCDF4_LIB","-L/usr/local/apps/netcdf4/4.7.4/INTEL/19.1/lib -Wl,-rpath,/usr/local/apps/netcdf4/4.7.4/INTEL/19.1/lib -lnetcdff -lnetcdf_c++ -lnetcdf")
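To see which variables a particular module defines, its definition can be inspected, e.g.:

$> module show netcdf4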
To make use of this approach, some modifications to the native configure* files are needed.
In the case of WPS-master/configure, the following line should be replaced (the original is kept as a comment):
#$FC ${FFLAGS} fort_netcdf.f -o fort_netcdf -L${NETCDF}/lib $NETCDFF -lnetcdf > /dev/null 2>&1
$FC ${FFLAGS} fort_netcdf.f -o fort_netcdf $NETCDF4_LIB > /dev/null 2>&1
After the ./configure step, configure.wps has to be edited as well:
# -I$(NETCDF)/include
$(NETCDF4_INCLUDE)
# -L$(NETCDF)/lib -lnetcdff -lnetcdf
$(NETCDF4_LIB)

#COMPRESSION_LIBS = -L/glade/u/home/wrfhelp/UNGRIB_LIBRARIES/lib -ljasper -lpng -lz
#COMPRESSION_INC  = -I/glade/u/home/wrfhelp/UNGRIB_LIBRARIES/include
COMPRESSION_LIBS  = $(JASPER_LIB) \
                    -L/usr/lib64 -lpng -lz
COMPRESSION_INC   = $(JASPER_INCLUDE) \
                    -I/usr/include
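After these edits, WPS can be built with its compile script; since this, too, is a csh script, the locally installed tcsh mentioned above can be used (the log redirection is only illustrative):

$> /perm/usxa/tems/apps/tcsh/6.22.03/bin/tcsh ./compile > compile_wps.log 2>&1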
There are only a few differences in the compilation process with Intel MPI:
$> module list

Currently Loaded Modules:
  1) intel/19.1.2   2) prgenv/intel   3) netcdf4/4.7.4   4) jasper/2.0.14   5) intel-mpi/19.1.2
The compiler settings in the generated configure file have to point to the Intel compilers and the Intel MPI wrappers:

SFC = ifort
SCC = icc
DM_FC = mpiifort
DM_CC = mpiicc
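One possible sanity check that the Intel MPI wrappers are the ones picked up after loading intel-mpi:

$> which mpiifort mpiicc
$> mpiifort -show    # prints the underlying compiler invocation and MPI link flags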
Everything else is identical to compilation with Open MPI.