Running WRF-SFIRE with real data in the WRFx system
wrf-fire
Requirements and environment
Install required libraries
- General requirements:
- C-shell
- Traditional UNIX utilities: zip, tar, make, etc.
- WRF-SFIRE requirements:
- Fortran and C compilers (Intel recommended)
- MPI libraries (same compiler)
- NetCDF libraries (same compiler)
- WPS requirements:
- zlib compression library (zlib)
- PNG reference library (libpng)
- JasPer compression library
- libtiff and geotiff libraries
See also https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html.
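As a quick sanity check (a sketch, not part of the official instructions), the loop below verifies that the general-purpose tools listed above are on your PATH:

```shell
# Check that the basic build utilities are available (list from the requirements above)
for tool in csh zip tar make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Compiler, MPI, and NetCDF checks depend on your site's module system, so they are not included here.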
Set environment
setenv NETCDF /where-netcdf-is
setenv JASPERLIB /where-jasper-lib-is
setenv JASPERINC /where-jasper-include-is
setenv LIBTIFF /where-libtiff-is
setenv GEOTIFF /where-geotiff-is
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1
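The setenv lines above are C-shell syntax. If you work in bash instead, the equivalent exports are (same placeholder paths; adjust them to your system):

```shell
# bash equivalents of the csh setenv commands above (placeholder paths)
export NETCDF=/where-netcdf-is
export JASPERLIB=/where-jasper-lib-is
export JASPERINC=/where-jasper-include-is
export LIBTIFF=/where-libtiff-is
export GEOTIFF=/where-geotiff-is
export WRFIO_NCD_LARGE_FILE_SUPPORT=1
```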
Installation
Clone github repository
git clone https://github.com/openwfm/wrf-fire
Configure WRF-SFIRE
cd wrf-fire/wrfv2_fire
./configure
Options 7 (intel dmpar) and 1 (simple nesting) if available
Compile WRF-SFIRE
Compile em_real
./compile em_real >& compile_em_real.log &
grep Error compile_em_real.log
If there is any compilation error, try compiling em_fire:
./compile em_fire >& compile_em_fire.log &
grep Error compile_em_fire.log
If any of the previous steps fails:
./clean -a
./configure
Add -nostdinc to the CPP flags in configure.wrf and repeat the compilation. If this does not solve the problem, look for issues in your environment.
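The edit can be done with sed; the sketch below demonstrates it on a throwaway copy, since the exact CPP line in your configure.wrf may differ. Back up the real file before applying the same command to it.

```shell
# Demonstrate appending -nostdinc to a CPP flags line on a scratch copy;
# the sample CPP line is illustrative, yours may differ
printf 'CPP = /lib/cpp -P\n' > /tmp/configure.wrf.demo
sed -i 's/^\(CPP[ ]*=.*\)$/\1 -nostdinc/' /tmp/configure.wrf.demo
cat /tmp/configure.wrf.demo
```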
Configure WPS
cd ../WPS
./configure
Option 2 (serial) if available
Compile WPS
./compile >& compile_wps.log &
ls -l *.exe
The listing should contain geogrid.exe, metgrid.exe, and ungrib.exe. If it does not:
./clean -a
./configure
Add -nostdinc to the CPP flags in configure.wps and repeat the compilation. If this does not solve the problem, look for issues in your environment.
Get static data
cd ../..
wget http://math.ucdenver.edu/~jmandel/tmp/WPS-GEOG.tbz
tar xvfj WPS-GEOG.tbz
wrfxpy
Requirements and environment
Install Anaconda distribution
Download and install the Python 3 Anaconda distribution for your platform. We recommend installing it into the user's home directory.
wget https://repo.continuum.io/archive/Anaconda3-2019.10-Linux-x86_64.sh
chmod +x Anaconda3-2019.10-Linux-x86_64.sh
./Anaconda3-2019.10-Linux-x86_64.sh
Install necessary packages
We recommend the creation of an environment. Install pre-requisites:
conda update -n base -c defaults conda
conda install basemap
conda create -n wrfxpy python=3 netcdf4 pyproj paramiko dill scikit-learn scikit-image h5py psutil proj4 pytz
conda activate wrfxpy
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap
pip install MesoPy python-cmr
Note that conda and pip are package managers available in the Anaconda Python distribution.
Set environment
setenv PROJ_LIB "$HOME/anaconda3/share/proj"
Installation
Clone github repository
git clone https://github.com/openwfm/wrfxpy
Configuration
General configuration
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.
cd wrfxpy
cp etc/conf.json.initial etc/conf.json
Configure the system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:
"workspace_path": "wksp",
"wps_install_path": "/path/to/WPS",
"wrf_install_path": "/path/to/WRF",
"sys_install_path": "/path/to/wrfxpy",
"wps_geog_path" : "/path/to/wps-geogrid",
"wget" : "/path/to/wget",
Note that all of these paths come from previous steps of this wiki except the wget path, which needs to be specified. If no particular version of wget is preferred, find the default one with:
which wget
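As a small convenience (a sketch, not part of wrfxpy), the snippet below prints a ready-to-paste "wget" entry for etc/conf.json using the wget found on your PATH; the printed line still has to be copied into the file by hand:

```shell
# Print a conf.json-style "wget" entry for the wget on PATH
# (empty path if wget is not installed)
WGET_PATH=$(command -v wget || true)
printf '"wget" : "%s",\n' "$WGET_PATH"
```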
Cluster configuration
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json; here we use speedy as an example:
{
  "speedy" : {
    "qsub_cmd" : "qsub",
    "qdel_cmd" : "qdel",
    "qstat_cmd" : "qstat",
    "qstat_arg" : "",
    "qsub_delimiter" : ".",
    "qsub_job_num_index" : 0,
    "qsub_script" : "etc/qsub/speedy.sub"
  }
}
Then the file etc/qsub/speedy.sub should contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:
%(nodes)d - the number of nodes requested
%(ppn)d - the number of processors per node requested
%(wall_time_hrs)d - the number of hours requested
%(exec_path)s - the path to the wrf.exe that should be executed
%(cwd)s - the job working directory
%(task_id)s - a task id that can be used to identify the job
%(np)d - the total number of processes requested, equals nodes x ppn
Note that not all variables need to be used, as the speedy example shows:
#$ -S /bin/bash
#$ -N %(task_id)s
#$ -wd %(cwd)s
#$ -l h_rt=%(wall_time_hrs)d:00:00
#$ -pe mpich %(np)d
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s
The script template should be derived from a working submission script.
Tokens configuration
When running wrfxpy, some data needs to be accessed and downloaded using a specific token created for the user. For instance, to run the Fuel Moisture Model, one needs a token from a valid MesoWest user to download data automatically. Also, when downloading satellite data, one needs a token from an Earthdata user. All of these can be specified by creating the file etc/tokens.json from the template etc/tokens.json.initial containing:
{
  "mesowest" : "token-from-mesowest",
  "appkey" : "token-from-earthdata"
}
So, if any of these capabilities are required, create a token on the corresponding page, run
cp etc/tokens.json.initial etc/tokens.json
and include your previously created token.
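Since a malformed etc/tokens.json will cause failures at runtime, it is worth validating the file after editing. The sketch below writes the template content to a scratch file and checks that it parses as JSON; substitute etc/tokens.json for the scratch path when checking your real file:

```shell
# Validate tokens JSON with Python's json module (demonstrated on a scratch copy)
cat > /tmp/tokens.json <<'EOF'
{
  "mesowest" : "token-from-mesowest",
  "appkey" : "token-from-earthdata"
}
EOF
python3 -c "import json; print(sorted(json.load(open('/tmp/tokens.json'))))"
```

A parse error here means a stray comma or quote in the file.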