[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system using the latest version of all the components, with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a Python HPC automation system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recommended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
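<br />
A quick sanity check that the basic tools are on your PATH can look like this (a sketch only; adjust the compiler names to your system, and note that nc-config and nf-config come with the NetCDF C and Fortran libraries respectively):<br />
which csh tar make ifort icc mpif90<br />
nc-config --version<br />
nf-config --version<br />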
<br />
===Set environment===<br />
Set environment variables pointing to the installed libraries:<br />
setenv NETCDF /path/to/netcdf<br />
setenv HD5 /path/to/hdf5<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/geotiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
HD5 is optional, but without it, configure will have you use uncompressed NetCDF files.<br />
<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
<br />
WRF expects $NETCDF and $HD5 to have subdirectories include, lib, and modules. <br />
To use system-installed netcdf and hdf5, <br />
on a system that uses standard /usr/lib (such as Ubuntu), you may be able to use simply<br />
setenv NETCDF /usr<br />
setenv HD5 /usr<br />
On a Linux distribution that uses /usr/lib64 (such as Red Hat and CentOS), make a directory with the links<br />
include -> /usr/include<br />
lib -> /usr/lib64<br />
modules -> /usr/lib64/gfortran/modules<br />
and point NETCDF and HD5 to it.<br />
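<br />
For example, on such a system the link directory could be created as follows (a sketch assuming csh; ~/netcdf_links is just a placeholder name, and the target paths should match where your distribution actually keeps the libraries):<br />
mkdir ~/netcdf_links<br />
cd ~/netcdf_links<br />
ln -s /usr/include include<br />
ln -s /usr/lib64 lib<br />
ln -s /usr/lib64/gfortran/modules modules<br />
setenv NETCDF ~/netcdf_links<br />
setenv HD5 ~/netcdf_links<br />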
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
To be able to run real problems, compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
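<br />
As an illustration only (the exact CPP line in configure.wrf differs between compilers and platforms), the change might look like this:<br />
# original line in configure.wrf (example)<br />
CPP = /lib/cpp -P -notraditional<br />
# after adding the flag<br />
CPP = /lib/cpp -P -notraditional -nostdinc<br />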
<br />
===Configure WPS===<br />
cd ../WPS<br />
export WRF_DIR=/path/to/WRF-SFIRE<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should list geogrid.exe, metgrid.exe, and ungrib.exe. If not:<br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wps and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get the tar file with the static data and untar it. Keep in mind that this is the file that contains land use, elevation, soil type data, etc. for WRF (geogrid.exe, to be specific).<br />
cd ..<br />
wget <nowiki>https://demo.openwfm.org/web/wrfx/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive conda] distribution for your platform. <br />
We recommend an installation into the user's home directory. For example,<br />
wget <nowiki>https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh</nowiki><br />
chmod +x Miniconda3-latest-Linux-x86_64.sh<br />
./Miniconda3-latest-Linux-x86_64.sh<br />
The installation may instruct you to exit and log in again.<br />
<br />
On a shared system, you may have a system-wide Python distribution with conda already installed, perhaps as a module; try module avail.<br />
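<br />
For example, on a system with environment modules (the module name anaconda3 below is only a placeholder; names vary by site):<br />
module avail<br />
module load anaconda3<br />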
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install prerequisites:<br />
wget https://demo.openwfm.org/web/wrfx/wrfx.yml<br />
conda env create -n wrfx -f wrfx.yml<br />
Note: the versions listed in the yml file may not be available on platforms other than Linux x86-64 (the most common). In that case, you can let conda resolve the versions itself and instead do:<br />
conda create -n wrfx python=3.8<br />
conda install -c conda-forge gdal<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml pytz pandas scipy<br />
conda install -c conda-forge basemap paramiko dill psutil flask wgrib2<br />
pip install MesoPy python-cmr shapely==2<br />
<br />
===Set environment===<br />
Every time before using WRFx, make the packages available by<br />
conda activate wrfx<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in Python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json; here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
The file etc/qsub/speedy.sub should then contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)s the path to the wrf.exe that should be executed<br />
%(cwd)s the job working directory<br />
%(task_id)s a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy already includes configurations for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-synopticdata.com",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token on the corresponding page, then do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
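<br />
If several MesoWest tokens are available, the "mesowest" entry can presumably be given as a list instead of a single string; a sketch with placeholder token values:<br />
{<br />
"mesowest" : ["first-token", "second-token"],<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />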
<br />
For acquiring satellite data, a new Earthdata user can be created at [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and placed in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov LAADS] and [https://nrt3.modaps.eosdis.nasa.gov LANCE]). Some data centers need to be accessed using the $HOME/.netrc file, so creating the $HOME/.netrc file is recommended, as follows:<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
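<br />
Since this file contains your password, it should be readable only by you:<br />
chmod 600 ~/.netrc<br />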
<br />
====AWS acquisition====<br />
<br />
For getting GOES16 and GOES17 data, and as an optional acquisition method for GRIB files, the system uses the [https://aws.amazon.com/cli/ AWS Command Line Interface], so you need to have it installed. To check whether it is already installed, type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
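<br />
To check that the installation works and that the public GOES archive is reachable, you can try an anonymous listing of the NOAA GOES-16 bucket (noaa-goes16 is the public NOAA archive on AWS):<br />
aws s3 ls --no-sign-request s3://noaa-goes16/<br />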
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs high-resolution elevation and fuel category data. If you have GeoTIFF files for elevation and fuel, you can specify their locations using etc/vtables/geo_vars.json. To do so, run<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files that fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories you want to interpolate using nearest neighbors, i.e. the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]], or you can just use the GeoTIFF files included in the static dataset under WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire, specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to the wrfxpy directory and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to accept the default values until the queuing system prompt, where we select the cluster we configured (speedy in this example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This will generate the experiment in the path specified in the etc/conf.json file, under a workspace subdirectory created from the experiment name, submit the job to your batch scheduler, postprocess the results, and send them to your installation of wrfxweb. If you do not have wrfxweb, no worries: you can always get the files of the WRF-SFIRE run in the wrf subdirectory of your experiment directory. You can also inspect the generated files, modify them, and resubmit the job.<br />
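<br />
To follow the progress of the run, you can watch the log file written by the command above:<br />
tail -f logs/experiment.log<br />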
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up, the "mesowest" token is provided, and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather station observations and run the fuel moisture model over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE built in serial mode in order to run a serial real.exe. To do that, repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using a serial build of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE WRF-SFIRE-serial</nowiki><br />
cd WRF-SFIRE-serial<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication (a key without a passphrase)<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with the nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest; 2 cores and 4 GB of memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create the ~/.ssh directory (if you do not have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one) by doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
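<br />
You can verify the passwordless access by running a simple remote command, for example:<br />
ssh user_id@demo.openwfm.org hostname<br />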
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
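<br />
Putting it together, a minimal etc/conf.json might look like this (a sketch only; the values are placeholders, and any other keys from the template keep their defaults):<br />
{<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>",<br />
"organization": "Organization Name",<br />
"flags": []<br />
}<br />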
<br />
Also, create a new simulations folder by doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are done in the desired installation of wrfxpy (set up in the previous section). <br />
<br />
Configure the following keys in the etc/conf.json file of your wrfxpy installation:<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see the post-processed time steps of your simulation appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py by doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> page. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides the option of starting a new fire using the ''Start a new fire'' button and of browsing the existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, selecting ''Start a new fire'' takes you to the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or specifying the lat-lon coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes and the atmospheric boundary condition data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitoring page, where you will be able to track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): The process has not started yet and is waiting for another process. All processes are initialized with this status.<br />
* Running (yellow): The process is in progress. Processes switch from Waiting to Running when they start running.<br />
* Success (green): The process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): Part of the output is done while the rest is still in progress. This status is only used by the Output process, because the visualization is available as soon as the process starts running.<br />
* Failed (red): The process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb will appear in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of running jobs and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4527Running WRF-SFIRE with real data in the WRFx system2023-10-04T22:38:39Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv HD5 /path/to/hdf5<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
HD5 is optional, but without it, configure have you use uncompressed netcdf files.<br />
<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
<br />
WRF expects $NETCDF and $HD5 to have subdirectories include, lib, and modules. <br />
To use system-installed netcdf and hdf5, <br />
on a system that uses standard /usr/lib (such as Ubuntu), you may be able to use simply<br />
setenv NETCDF /usr<br />
setenv HD5 /usr<br />
On a Linux that uses /usr/lib64 (such as Redhat and Centos), make a directory with the links<br />
include -> /usr/include<br />
lib -> /usr/lib64<br />
modules -> /usr/lib64/gfortran/modules<br />
and point NETCDF and HD5 to it.<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
To be able to run real problems, compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
export WRF_DIR=/path/to/WRF-SFIRE<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains land use, elevation, soil type data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>https://demo.openwfm.org/web/wrfx/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive conda] distribution for your platform. <br />
We recommend an installation into the users' home directory. For example,<br />
wget <nowiki>https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh</nowiki><br />
chmod +x Miniconda3-latest-Linux-x86_64.sh<br />
./Miniconda3-latest-Linux-x86_64.sh<br />
The installation may instruct you to exit and log in again.<br />
<br />
On a shared system, you may have a system-wide Python distribution with conda already installed, perhaps as a module, try module avail.<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install prerequisites:<br />
wget https://demo.openwfm.org/web/wrfx/wrfx.yml<br />
conda create -n wrfx -f wrfx.yml<br />
Note: the versions listed in the yml file may not be available on platforms other than Linux x86-64 (most common). Then you can try to have conda find the versions and do instead:<br />
conda env create -n wrfx python=3.8<br />
conda install -c conda-forge gdal<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml pytz pandas scipy<br />
conda install -c conda-forge basemap paramiko dill psutil flask wgrib2<br />
pip install MesoPy python-cmr shapely==2<br />
<br />
===Set environment===<br />
Every time before using WRFx, make the packages available by<br />
conda activate wrfx<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-synopticdata.com",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====AWS acquisition====<br />
<br />
For getting GOES16 and GOES17 data and as an optional acquisition method for GRIB files, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This will generate the experiment in the path specified in the etc/conf.json file and under a workspace subdirectory created from the experiment name, submit the job to your batch scheduler, and postprocess results and send them to your installation of wrfxweb. If you do not have wrfxweb, no worries, you can always get the files of the WRF-SFIRE run in subdirectory wrf of your experiment directotry. You can also inspect the files generated, modify them, and resubmit the job.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run a serial real.exe. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE WRF-SFIRE-serial</nowiki><br />
cd WRF-SFIRE-serial<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication without a passkey<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest, 2 cores and 4GB memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs which shows a list of jobs that are running and it allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4526Running WRF-SFIRE with real data in the WRFx system2023-10-04T22:37:47Z<p>Afarguell: /* WRFxPy: Testing */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv HD5 /path/to/hdf5<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
HD5 is optional, but without it, configure have you use uncompressed netcdf files.<br />
<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
<br />
WRF expects $NETCDF and $HD5 to have subdirectories include, lib, and modules. <br />
To use system-installed netcdf and hdf5, <br />
on a system that uses standard /usr/lib (such as Ubuntu), you may be able to use simply<br />
setenv NETCDF /usr<br />
setenv HD5 /usr<br />
On a Linux that uses /usr/lib64 (such as Redhat and Centos), make a directory with the links<br />
include -> /usr/include<br />
lib -> /usr/lib64<br />
modules -> /usr/lib64/gfortran/modules<br />
and point NETCDF and HD5 to it.<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
To be able to run real problems, compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
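For illustration only, after this edit the CPP line in configure.wrf might look similar to the following (the exact preprocessor path and flags vary by platform and compiler; the only change is the trailing -nostdinc):<br />
 CPP = /lib/cpp -P -nostdinc<br />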
<br />
===Configure WPS===<br />
cd ../WPS<br />
export WRF_DIR=/path/to/WRF-SFIRE<br />
./configure<br />
<br />
Choose option 17 (Intel compiler (serial)), if available.<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should list geogrid.exe, metgrid.exe, and ungrib.exe. If not:<br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wps and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Download the tar file with the static data and untar it. Keep in mind that this file contains the land use, elevation, soil type, and other data for WRF (for geogrid.exe, to be specific).<br />
cd ..<br />
wget <nowiki>https://demo.openwfm.org/web/wrfx/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive conda] distribution for your platform. <br />
We recommend an installation into the user's home directory. For example,<br />
wget <nowiki>https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh</nowiki><br />
chmod +x Miniconda3-latest-Linux-x86_64.sh<br />
./Miniconda3-latest-Linux-x86_64.sh<br />
The installation may instruct you to exit and log in again.<br />
<br />
On a shared system, you may have a system-wide Python distribution with conda already installed, perhaps as a module; try module avail.<br />
<br />
===Install necessary packages===<br />
We recommend creating a dedicated conda environment. Install the prerequisites:<br />
 wget https://demo.openwfm.org/web/wrfx/wrfx.yml<br />
 conda env create -n wrfx -f wrfx.yml<br />
Note: the versions listed in the yml file may not be available on platforms other than Linux x86-64 (the most common). In that case, you can let conda resolve the versions itself and instead do:<br />
 conda create -n wrfx python=3.8<br />
conda install -c conda-forge gdal<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml pytz pandas scipy<br />
conda install -c conda-forge basemap paramiko dill psutil flask<br />
pip install MesoPy python-cmr shapely==2<br />
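As a quick sanity check (not part of the official instructions), you can verify that a few of the key packages import in the new environment:<br />
 conda activate wrfx<br />
 python -c "import netCDF4, pygrib, simplekml, flask; print('ok')"<br />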
<br />
===Set environment===<br />
Every time before using WRFx, make the packages available by<br />
conda activate wrfx<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in Python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all of these paths come from previous steps of this wiki, except the wget path, which needs to be specified if you want to use a preferred version. To find the default wget:<br />
which wget<br />
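For illustration, a filled-in etc/conf.json could look as follows (all paths below are hypothetical placeholders, and only the keys discussed above are shown; the template may contain additional keys):<br />
 {<br />
 "qsys": "speedy",<br />
 "wps_install_path": "/home/user/WPS",<br />
 "wrf_install_path": "/home/user/WRF-SFIRE",<br />
 "sys_install_path": "/home/user/wrfxpy",<br />
 "wps_geog_path" : "/home/user/WPS_GEOG",<br />
 "wget" : "/usr/bin/wget"<br />
 }<br />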
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json; here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
The file etc/qsub/speedy.sub should then contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
 %(exec_path)s the path to the wrf.exe that should be executed<br />
 %(cwd)s the job working directory<br />
 %(task_id)s a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
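For clusters that use SLURM instead of SGE, a template along the same lines might look like this (a minimal sketch using the same template variables; it assumes your site allows mpirun inside the batch job, and the matching clusters.json entry would then use sbatch, scancel, and squeue as the commands):<br />
 #!/bin/bash<br />
 #SBATCH --job-name=%(task_id)s<br />
 #SBATCH --chdir=%(cwd)s<br />
 #SBATCH --time=%(wall_time_hrs)d:00:00<br />
 #SBATCH --nodes=%(nodes)d<br />
 #SBATCH --ntasks-per-node=%(ppn)d<br />
 mpirun -np %(np)d %(exec_path)s<br />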
<br />
Note: wrfxpy already includes configurations for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, some data needs to be accessed and downloaded using a token created for the user. For instance, running the Fuel Moisture Model requires a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Likewise, downloading satellite data requires a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified by creating the file etc/tokens.json from the template etc/tokens.json.initial, which contains:<br />
<br />
{<br />
"mesowest" : "token-from-synopticdata.com",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
If any of the previous capabilities are required, create a token on the corresponding page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created at [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and entered in the etc/tokens.json file. The user can also specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created at [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and entered in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). Some data centers need to be accessed using the $HOME/.netrc file, so creating it as follows is recommended:<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
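Since this file stores your Earthdata password in plain text, it is good practice to restrict its permissions so that only you can read it:<br />
 chmod 600 $HOME/.netrc<br />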
<br />
====AWS acquisition====<br />
<br />
For getting GOES-16 and GOES-17 data, and as an optional acquisition method for GRIB files, the system uses the [https://aws.amazon.com/cli/ AWS Command Line Interface], so you need to have it installed. To check whether it is already installed, type<br />
<br />
aws help<br />
<br />
If the command is not found, follow the installation instructions [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. On Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
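After the installation, you can verify that the aws executable is found (assuming the chosen bin directory is on your PATH):<br />
 aws --version<br />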
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs high-resolution elevation and fuel category data. If you have GeoTIFF files for elevation and fuel, you can specify their locations using etc/vtables/geo_vars.json. To do so, run<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files that fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories should be interpolated using nearest neighbors, i.e., the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be treated as non-burnable using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Alternatively, you can use the GeoTIFF files included in the static dataset under WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire by specifying in etc/vtables/geo_vars.json:<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
Running the fuel moisture model requires static terrain data. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at each step to accept the default values until the queuing system question, where you select the cluster you configured (speedy in this example).<br />
<br />
This will generate a job under jobs/experiment.json (or under the name of the experiment you chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This will generate the experiment in the path specified in the etc/conf.json file, under a workspace subdirectory created from the experiment name, submit the job to your batch scheduler, postprocess the results, and send them to your installation of wrfxweb. If you do not have wrfxweb, no worries: you can always find the files of the WRF-SFIRE run in the subdirectory wrf of your experiment directory. You can also inspect the generated files, modify them, and resubmit the job.<br />
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up with a "mesowest" token and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather station observations and estimate fuel moisture over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE built in serial mode so that a serial real.exe can be run. Repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE WRF-SFIRE-serial</nowiki><br />
cd WRF-SFIRE-serial<br />
./configure<br />
Choose options 13 (INTEL ifort/icc serial) and 0 (no nesting).<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
 cd ../wrfxpy<br />
and add to etc/conf.json the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh, using key authentication with a key that has no passphrase<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with the nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest: 2 cores and 4 GB of memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space.<br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create the ~/.ssh directory (if you do not have one already):<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one already) by doing<br />
ssh-keygen<br />
and following all the steps (you can select the defaults, so just press enter at each prompt).<br />
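If your ssh-keygen defaults to a newer key type and you specifically want an RSA key named id_rsa, you can request one explicitly (an optional variant of the step above):<br />
 ssh-keygen -t rsa -b 4096<br />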
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server:<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json:<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder by doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next keys are set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json of that wrfxpy installation:<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]], but this time, when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see the post-processed time steps of your simulation appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster:<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json:<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py:<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
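If you want wrfxctrl to keep running after you log out, one common approach (not specific to wrfxctrl) is to start it in the background with nohup:<br />
 nohup python wrfxctrl.py >& wrfxctrl.log &<br />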
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can open your web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar to the following.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides the options of starting a new fire using the ''Start a new fire'' button and browsing the existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will access the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or specifying the latitude-longitude coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes as well as the atmospheric boundary condition data. Once you have all the simulation options defined, scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitoring page, where you will be able to track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the top of the monitoring page, you will see a list of important information about the simulation (see the figure below). Below the information, there is a list of steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): Indicates that the process has not started yet and is waiting for another process. All processes are initialized with this status.<br />
* Running (yellow): Indicates that the process is currently running. A process switches from Waiting to Running when it starts.<br />
* Success (green): Indicates that the process finished successfully. A process switches from Running to Success when it completes without errors.<br />
* Available (green): Indicates that part of the output is ready while the rest is still in progress. This status is used only by the Output process, because the visualization becomes available as soon as that process starts running.<br />
* Failed (red): Indicates that the process finished with a failure. A process switches from Running to Failed when it ends with an error.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb appears in the ''Visualization'' element of the information section. On this page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of jobs that are running and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=How_to_run_WRF-SFIRE_with_real_data&diff=4524How to run WRF-SFIRE with real data2023-10-02T23:57:06Z<p>Afarguell: </p>
<hr />
<div>{{historical|Running WRF-SFIRE with real data in the WRFx system}}<br />
Running WRF-SFIRE with real data is a process very similar to running WRF with real data for weather simulations.<br />
The [http://www.mmm.ucar.edu/wrf/users WRF users page] has many <br />
[https://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/contents.html documents] and <br />
[http://www2.mmm.ucar.edu/wrf/users/supports/tutorial.html tutorials] outlining this process. The purpose<br />
of this page is to provide a tutorial for using real data with WRF-SFIRE starting from scratch. We begin with a quick outline of the<br />
steps involved, including links to the output of each step. The user can use these linked files to start from any step or to verify their own results. Due to platform and compiler differences, your output might differ slightly from that provided.<br />
<br />
''This page refers to data sources for the USA only. For other countries, you will need to make appropriate modifications yourself.''<br />
<br />
=Outline=<br />
<br />
# [[How_to_get_WRF-SFIRE|Compile WRF-SFIRE source code]] with target em_real.<br />
# [[#Compiling WPS|Compile WPS]].<br />
# [[#Configuring_the_domain|Configure your domain]].<br />
# [[#Obtaining data for geogrid|Download geogrid datasets]].<br />
# [[#Converting fire data|Converting fire data]].<br />
# [[#Running geogrid|Run the geogrid executable]].<br />
# [[#Obtaining atmospheric data|Download atmospheric data]].<br />
# [[#Running ungrib|Run the ungrib executable]].<br />
# [[#Running metgrid|Run the metgrid executable]].<br />
# [[#Running wrf|Run real.exe and wrf.exe]].<br />
<br />
=Compiling WPS=<br />
<br />
After you have compiled WRF-SFIRE, <code>git clone https://github.com/openwfm/WPS</code> at the same directory level as WRF-SFIRE, change to <code>WPS</code> and run <br />
<code>./configure</code>. This will present you with a list of configuration options similar to those given by WRF.<br />
You will need to choose one with the same compiler that you used to compile WRF-SFIRE. Generally, it is unnecessary to compile WPS with parallel support.<br />
GRIB2 support is only necessary if your atmospheric data source requires it. Once you have chosen a configuration, you can compile with<br />
<pre>./compile >& compile.log</pre><br />
Make sure to check for errors in the log file generated.<br />
<br />
=Configuring the domain=<br />
<br />
The physical domain is configured in the geogrid section of <tt>namelist.wps</tt> in the WPS directory. In this section, you should define<br />
the geographic projection with <tt>map_proj</tt>, <tt>truelat1</tt>, <tt>truelat2</tt>, and <tt>stand_lon</tt>. Available projections<br />
include <tt>'lambert'</tt>, <tt>'polar'</tt>, <tt>'mercator'</tt>, and <tt>'lat-lon'</tt>. The center of the coarse domain is located at <tt>ref_lon</tt> longitude and <tt>ref_lat</tt> latitude. The computational grid is defined by <tt>e_we/e_sn</tt>, the number of (staggered) grid points in the west-east/south-north direction, and the grid resolution is defined by <tt>dx</tt> and <tt>dy</tt> in meters. <br />
We also specify a path to where we will put the static dataset that geogrid will read from, and we specify the highest resolution (.3 arc seconds) that this data is released in.<br />
<br />
<pre>&geogrid<br />
e_we = 97,<br />
e_sn = 97,<br />
geog_data_res = '.3s',<br />
dx = 100,<br />
dy = 100,<br />
map_proj = 'lambert',<br />
ref_lat = 39.728996<br />
ref_lon = -112.48999<br />
truelat1 = 39.5<br />
truelat2 = 39.9<br />
stand_lon = -112.8<br />
geog_data_path = '../WPS_GEOG'<br />
/</pre><br />
<br />
The share section of the WPS namelist defines the fire subgrid refinement in <tt>subgrid_ratio_x</tt> and <tt>subgrid_ratio_y</tt>. This means that the fire grid will be refined 20 times, giving a resolution of 5 meters by 5 meters. The <tt>start_date</tt> and <tt>end_date</tt> parameters specify the time window that the simulation will be run in. Atmospheric data must be available at both temporal boundaries. The <tt>interval_seconds</tt> parameter tells WPS the number of seconds between each atmospheric dataset. For our example, we will be using the CFSR dataset, which is released every six hours, i.e., every 21,600 seconds.<br />
<br />
<pre>&share<br />
wrf_core = 'ARW',<br />
max_dom = 1,<br />
start_date = '2018-09-08_00:00:00',<br />
end_date = '2018-09-08_06:00:00',<br />
interval_seconds = 21600,<br />
io_form_geogrid = 2,<br />
subgrid_ratio_x = 20,<br />
subgrid_ratio_y = 20,<br />
/</pre><br />
The full namelist used can be found in [https://pastebin.com/6rV2Qg8Y pastebin] or [https://home.chpc.utah.edu/~u6015636/wiki/namelist.wps namelist.wps].<br />
<br />
=Obtaining data for geogrid=<br />
<br />
First, you must download and uncompress the standard [https://www2.mmm.ucar.edu/wrf/src/wps_files/geog_high_res_mandatory.tar.gz geogrid input data] as explained [https://www2.mmm.ucar.edu/wrf/users/download/get_sources_wps_geog.html here].<br />
This is a 2.6 GB compressed tarball that uncompresses to around 29 GB. It contains all of the static data that geogrid needs for a standard weather simulation; however, for a WRF-SFIRE simulation we need to fill in two additional fields that are too big to release in a single download for the whole globe. We first need to determine the approximate latitude and longitude bounds for our domain.<br />
<br />
We know the coordinates in the center from the <tt>ref_lon</tt> and <tt>ref_lat</tt> parameters of the namelist. We can estimate the<br />
coordinates of the lower-left corner and upper-right corner by the approximate ratio 9e-6 degrees per meter. So, the lower-left and upper-right corners of our domain are at approximately <br />
<pre>ref_lon ± (97-1)/2*100*9e-6<br />
ref_lat ± (97-1)/2*100*9e-6</pre><br />
Therefore, for the purposes of downloading data, we will expand this region to the range -112.55 through -112.4 longitude and 39.65 through 39.8 latitude.<br />
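For the example domain, this estimate works out roughly as follows (approximate values only; the 9e-6 degrees-per-meter factor is itself a rough conversion):<br />
<pre>half-width = (97-1)/2 * 100 * 9e-6 = 0.0432 degrees<br />
longitude: -112.49 ± 0.0432  ->  about -112.53 to -112.45<br />
latitude:   39.73  ± 0.0432  ->  about   39.69 to   39.77</pre><br />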
<br />
==Downloading fuel category data==<br />
<br />
For the United States, Anderson 13 fuel category data is available at the [https://landfire.cr.usgs.gov/viewer/viewer.html Landfire] website. Upon opening the national map, click on the <tt>Download Tool</tt> [1] and you will see a menu on the right of the screen. Click on the <tt>LF 2016 Remap (LF_200)</tt>, then <tt>Fuel</tt>, and <tt>us_200 13 Fire Behavior Fuel Models-Anderson</tt> [2]. Finally, click on the <tt>Define Download Area By Coordinates</tt> button [3].<br />
<br />
[[File:Landfire_new1.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
This will open a new window on the right with a form that lets you key in the longitude and latitude range of your selection. In this window, we will input the coordinates computed earlier [4], and below we will click the <tt>Download Area</tt> button [5].<br />
<br />
[[File:Landfire_new2.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
In the next window, click on the <tt>Modify</tt> button [6]. This will open a new window listing all of the available data products for the selected region. Make sure only the box next to <tt>US_200 13 Fire Behavior Fuel Models-Anderson</tt> is checked and change the data format from <tt>ArcGRID_with_attribs</tt> to <tt>GeoTIFF_with_attribs</tt>. At the bottom, make sure <tt>Maximum size (MB) per piece:</tt> is set to 250. Then go to the bottom of the page and click <tt>Save Changes & Return to Summary</tt>.<br />
[[File:Landfire_new3.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, click on the <tt>Download</tt> button [7]. The file will be a compressed archive containing, among other files, a GeoTIFF. The name of the file will be different for each request, but in this example we have [https://home.chpc.utah.edu/~u6015636/wiki/lf45409014_US_200FBFM13.zip lf45409014_US_200FBFM13.zip] containing the GeoTIFF file <tt>US_200FBFM13.tif</tt>, which can be downloaded from [[File:US_200FBFM13.tif]] or [https://home.chpc.utah.edu/~u6015636/wiki/US_200FBFM13.tif US_200FBFM13.tif].<br />
<br />
[[File:Landfire_new4.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
==Downloading high resolution elevation data==<br />
<br />
For the United States, elevation data is also available at the [https://landfire.cr.usgs.gov/viewer/viewer.html Landfire] website. Repeat the steps described above for downloading the fuel data, but this time select <tt>Topographic</tt> and <tt>us_Elevation</tt>.<br />
<br />
[[File:Landfire_new5.png|700px|center]]<br />
<br />
Again, we key in the coordinates determined before and click the <tt>Download Area</tt> button. <br />
<br style="clear: both" /><br />
In the next window, click <tt>Modify</tt> again, make sure only <tt>us_Elevation</tt> is selected, change the format to <tt>Geotiff</tt>, and click <tt>Save Changes & Return to Summary</tt>.<br />
<br />
[[File:Landfire_new6.png|700px|center]]<br />
<br />
<br style="clear: both" /><br />
In the next window, you should be able to click <tt>Download</tt> in order to download the GeoTIFF file containing topography. You will obtain the zip file [https://home.chpc.utah.edu/~u6015636/wiki/lf34682161_US_DEM2016.zip lf34682161_US_DEM2016.zip] containing a GeoTIFF file that can be downloaded from [[File:US_DEM2016.tif]] or [https://home.chpc.utah.edu/~u6015636/wiki/US_DEM2016.tif US_DEM2016.tif].<br />
<br />
=Converting fire data=<br />
<br />
This section describes converting data from geotiff to geogrid format. <br />
<br />
In order for geogrid to be able to read this data, we need to convert it into an intermediate format. We will be using a utility program included in the [https://github.com/openwfm/wrfxpy wrfxpy] repository. For information on how to obtain and use this tool, see [[How_to_convert_data_for_Geogrid|How to convert data for Geogrid]]. Go to the wrfxpy installation obtained earlier and move the GeoTIFF files into that directory.<br />
<br />
To convert the fuel and elevation data, we will run <br />
./convert_geotiff.sh US_200FBFM13.tif geo_data NFUEL_CAT<br />
./convert_geotiff.sh US_DEM2016.tif geo_data ZSF<br />
<br />
The resulting <tt>geo_data/NFUEL_CAT/index</tt> file is created as follows.<br />
projection = albers_nad83<br />
dx = 30.0<br />
dy = -30.0<br />
truelat1 = 29.5<br />
truelat2 = 45.5<br />
stdlon = -96.0<br />
known_x = 258.0<br />
known_y = 313.0<br />
known_lon = -112.47513542444187<br />
known_lat = 39.725087912688274<br />
row_order = top_bottom<br />
description = "Anderson 13 fire behavior categories"<br />
units = "fuel category"<br />
type = categorical<br />
signed = yes<br />
category_min = 0<br />
category_max = 14<br />
scale_factor = 1.0<br />
wordsize = 2<br />
tile_x = 515<br />
tile_y = 625<br />
tile_z = 1<br />
endian = little<br />
<br />
Although a word size of 1 byte would already be enough to represent 256 categories, the index above uses a word size of 2 bytes, which is also more than sufficient. Notice that the program has changed the number of categories to 14 and uses the last category to indicate that the source data was outside the range 1-13. For the fuel category data, this means that no fuel is present, for example because of a lake, river, or road. <br />
<br />
We can check that the projection information entered into the index file is correct by running the <tt>gdalinfo</tt> binary that is installed with GDAL. In this case, <tt>gdalinfo</tt> tells us that the source file contains the following projection parameters.<br />
<br />
Driver: GTiff/GeoTIFF<br />
Files: US_200FBFM13.tif<br />
Size is 515, 625<br />
Coordinate System is:<br />
PROJCS["USA_Contiguous_Albers_Equal_Area_Conic_USGS_version",<br />
GEOGCS["NAD83",<br />
DATUM["North_American_Datum_1983",<br />
SPHEROID["GRS 1980",6378137,298.2572221010042,<br />
AUTHORITY["EPSG","7019"]],<br />
AUTHORITY["EPSG","6269"]],<br />
PRIMEM["Greenwich",0],<br />
UNIT["degree",0.0174532925199433],<br />
AUTHORITY["EPSG","4269"]],<br />
PROJECTION["Albers_Conic_Equal_Area"],<br />
PARAMETER["standard_parallel_1",29.5],<br />
PARAMETER["standard_parallel_2",45.5],<br />
PARAMETER["latitude_of_center",23],<br />
PARAMETER["longitude_of_center",-96],<br />
PARAMETER["false_easting",0],<br />
PARAMETER["false_northing",0],<br />
UNIT["metre",1,<br />
AUTHORITY["EPSG","9001"]]]<br />
Origin = (-1400235.000000000000000,1986555.000000000000000)<br />
Pixel Size = (30.000000000000000,-30.000000000000000)<br />
Metadata:<br />
AREA_OR_POINT=Area<br />
DataType=Thematic<br />
Image Structure Metadata:<br />
INTERLEAVE=BAND<br />
Corner Coordinates:<br />
Upper Left (-1400235.000, 1986555.000) (112d35' 1.88"W, 39d47'44.01"N)<br />
Lower Left (-1400235.000, 1967805.000) (112d32'44.10"W, 39d37'50.78"N)<br />
Upper Right (-1384785.000, 1986555.000) (112d24'16.21"W, 39d49' 9.72"N)<br />
Lower Right (-1384785.000, 1967805.000) (112d21'59.86"W, 39d39'16.30"N)<br />
Center (-1392510.000, 1977180.000) (112d28'30.49"W, 39d43'30.32"N)<br />
Band 1 Block=128x128 Type=Int16, ColorInterp=Gray<br />
NoData Value=-9999<br />
Metadata:<br />
RepresentationType=THEMATIC <br />
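The same check can be done programmatically. A minimal sketch using the GDAL Python bindings (assuming the <tt>osgeo</tt> package from your GDAL installation is available) prints the size, projection, and geotransform of the GeoTIFF:<br />
<pre>from osgeo import gdal<br />
<br />
ds = gdal.Open("US_200FBFM13.tif")<br />
print("size:", ds.RasterXSize, "x", ds.RasterYSize)   # should report 515 x 625<br />
print("projection:", ds.GetProjection())              # Albers equal area, NAD83<br />
print("geotransform:", ds.GetGeoTransform())          # origin and 30 m pixel size</pre><br />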
<br />
The resulting <tt>geo_data/ZSF/index</tt> file is created as follows.<br />
<br />
projection = albers_nad83<br />
dx = 30.0<br />
dy = -30.0<br />
truelat1 = 29.5<br />
truelat2 = 45.5<br />
stdlon = -96.0<br />
known_x = 258.0<br />
known_y = 313.0<br />
known_lon = -112.47513542444187<br />
known_lat = 39.725087912688274<br />
row_order = top_bottom<br />
description = "National Elevation Dataset 1/3 arcsecond resolution"<br />
units = "meters"<br />
type = continuous<br />
signed = yes<br />
scale_factor = 1.0<br />
wordsize = 2<br />
tile_x = 515<br />
tile_y = 625<br />
tile_z = 1<br />
endian = little<br />
<br />
Here we have used a word size of 2 bytes and a scale factor of 1.0; a signed 2-byte integer spans roughly -32,768 to 32,767, so this can represent any elevation in the world with 1-meter accuracy, which is approximately the accuracy of the source data.<br />
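The reasoning behind the word size and scale factor can be made explicit. The following small sketch (an illustration only, not part of the conversion tool) computes the range of values a signed word of a given size can represent at a given scale factor:<br />
<pre>def representable_range(wordsize, scale_factor):<br />
    """Approximate range, in physical units, covered by a signed integer<br />
    of `wordsize` bytes when multiplied by `scale_factor`."""<br />
    max_int = 2 ** (8 * wordsize - 1) - 1<br />
    return -max_int * scale_factor, max_int * scale_factor<br />
<br />
print(representable_range(2, 1.0))  # about (-32767, 32767) meters: any elevation on Earth<br />
print(representable_range(1, 1.0))  # about (-127, 127): plenty for 14 fuel categories</pre><br />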
<br />
Again, we compare the projection parameters in the index file with those reported by <tt>gdalinfo</tt> and find that the conversion was correct.<br />
<br />
Driver: GTiff/GeoTIFF<br />
Files: US_DEM2016.tif<br />
Size is 515, 625<br />
Coordinate System is:<br />
PROJCS["USA_Contiguous_Albers_Equal_Area_Conic_USGS_version",<br />
GEOGCS["NAD83",<br />
DATUM["North_American_Datum_1983",<br />
SPHEROID["GRS 1980",6378137,298.2572221010042,<br />
AUTHORITY["EPSG","7019"]],<br />
AUTHORITY["EPSG","6269"]],<br />
PRIMEM["Greenwich",0],<br />
UNIT["degree",0.0174532925199433],<br />
AUTHORITY["EPSG","4269"]],<br />
PROJECTION["Albers_Conic_Equal_Area"],<br />
PARAMETER["standard_parallel_1",29.5],<br />
PARAMETER["standard_parallel_2",45.5],<br />
PARAMETER["latitude_of_center",23],<br />
PARAMETER["longitude_of_center",-96],<br />
PARAMETER["false_easting",0],<br />
PARAMETER["false_northing",0],<br />
UNIT["metre",1,<br />
AUTHORITY["EPSG","9001"]]]<br />
Origin = (-1400235.000000000000000,1986555.000000000000000)<br />
Pixel Size = (30.000000000000000,-30.000000000000000)<br />
Metadata:<br />
AREA_OR_POINT=Area<br />
DataType=Thematic<br />
Image Structure Metadata:<br />
INTERLEAVE=BAND<br />
Corner Coordinates:<br />
Upper Left (-1400235.000, 1986555.000) (112d35' 1.88"W, 39d47'44.01"N)<br />
Lower Left (-1400235.000, 1967805.000) (112d32'44.10"W, 39d37'50.78"N)<br />
Upper Right (-1384785.000, 1986555.000) (112d24'16.21"W, 39d49' 9.72"N)<br />
Lower Right (-1384785.000, 1967805.000) (112d21'59.86"W, 39d39'16.30"N)<br />
Center (-1392510.000, 1977180.000) (112d28'30.49"W, 39d43'30.32"N)<br />
Band 1 Block=128x128 Type=Int16, ColorInterp=Gray<br />
NoData Value=-9999<br />
Metadata:<br />
RepresentationType=THEMATIC<br />
<br />
Finally, the converted data can be found here [https://home.chpc.utah.edu/~u6015636/wiki/geo_data.tar.gz geo_data.tar.gz].<br />
<br />
=Running geogrid=<br />
<br />
The geogrid binary will create a NetCDF file called <tt>geo_em.d01.nc</tt>. This file will contain all of the static data necessary to run your simulation. Before we can run the binary, however, we must tell geogrid what data needs to be in these files, where it can find them, and what kind of preprocessing we want to be done. This information is contained in a run-time configuration file called <tt>GEOGRID.TBL</tt>, which is located in the <tt>geogrid</tt> subdirectory. The file that is released with WPS contains reasonable defaults for the variables defined on the atmospheric grid, but we need to add two additional sections for the two fire grid data sets that we have just created. We will append the <tt>geo_data/GEOGRID.TBL</tt> sections to the file <tt>geogrid/GEOGRID.TBL</tt>.<br />
===============================<br />
name = NFUEL_CAT<br />
dest_type = categorical<br />
interp_option = default:nearest_neighbor+average_16pt+search<br />
abs_path = /absolute/path/to/geo_data/NFUEL_CAT<br />
priority = 1<br />
fill_missing = 14<br />
subgrid = yes<br />
dominant_only = NFUEL_CAT<br />
z_dim_name = fuel_cat<br />
halt_on_missing = no<br />
===============================<br />
name = ZSF<br />
dest_type = continuous<br />
interp_option = default:average_gcell(4.0)+four_pt+average_4pt<br />
abs_path = /absolute/path/to/geo_data/ZSF<br />
priority = 1<br />
fill_missing = 0<br />
smooth_option = smth-desmth_special; smooth_passes=1<br />
subgrid = yes<br />
df_dx = DZDXF<br />
df_dy = DZDYF<br />
halt_on_missing = no<br />
===============================<br />
<br />
For <tt>NFUEL_CAT</tt>, we will use simple nearest-neighbor interpolation, while for <tt>ZSF</tt>, we will use bilinear interpolation with smoothing. Other configurations are possible. See the [https://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/v4.2/users_guide_chap3.html#_Description_of_GEOGRID.TBL WPS users guide] for further information. The full table used can be found on [http://pastebin.com/kdymq5ff pastebin] or in [https://home.chpc.utah.edu/~u6015636/wiki/GEOGRID.TBL GEOGRID.TBL].<br />
<br />
Once we make these changes to the <tt>GEOGRID.TBL</tt> file, and ensure that all of the directories are in the correct place (including the default geogrid dataset at <tt>../../WPS_GEOG</tt>), we can execute the geogrid binary.<br />
<pre>./geogrid.exe</pre><br />
This will create a file called <tt>geo_em.d01.nc</tt> in the current directory, which can be found here, [https://home.chpc.utah.edu/~u6015636/wiki/geogrid_output.tar.gz geogrid_output.tar.gz]. The contents of this file can be viewed using your favorite NetCDF viewer.<br />
<br />
<center><br />
<gallery caption="geo_em.d01.nc" widths="250px" heights="250px" perrow="3" class="center"><br />
File:Nfuel_cat_new.png|The fuel category data interpolated to the model grid.<br />
File:Zsf_new.png|The high resolution elevation (1/3") data interpolated to the model grid.<br />
File:Hgt_m_new.png|The low resolution elevation (30") data interpolated to the atmospheric grid.<br />
</gallery><br />
</center><br />
Here, we have visualized the fire grid variables, <tt>NFUEL_CAT</tt> and <tt>ZSF</tt>, as well as the <br />
variable <tt>HGT_M</tt>, which is the elevation data used by the atmospheric model. We can compare<br />
<tt>ZSF</tt> and <tt>HGT_M</tt> to verify that our data conversion process worked. The colormaps of these<br />
two pictures have been aligned so that we can make a quick visual check. As we see, the two images<br />
have a similar structure and magnitude, but they show some misalignment. Given that<br />
the data came from two different sources, in two different projections, the error is relatively minor.<br />
Because WPS converts between projections in single precision by default, floating point error is a likely<br />
contributor. We may, in the future, make changes so that this conversion is done in double precision.<br />
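For a quick look without a dedicated NetCDF viewer, a minimal Python sketch along the following lines plots the same three fields (this assumes the <tt>netCDF4</tt> and <tt>matplotlib</tt> packages are installed; adjust the variable handling if the shapes in your file differ):<br />
<pre>from netCDF4 import Dataset<br />
import matplotlib.pyplot as plt<br />
<br />
nc = Dataset("geo_em.d01.nc")<br />
for var in ("NFUEL_CAT", "ZSF", "HGT_M"):<br />
    data = nc.variables[var][:].squeeze()  # drop singleton dimensions such as Time<br />
    plt.figure()<br />
    plt.pcolormesh(data)<br />
    plt.title(var)<br />
    plt.colorbar()<br />
plt.show()</pre><br />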
<br />
=Obtaining atmospheric data=<br />
<br />
There are a number of datasets available to initialize a WRF real run. The <br />
[https://www2.mmm.ucar.edu/wrf/users/download/free_data.html WRF users page] lists<br />
a few. One challenge in running a fire simulation is finding a dataset of <br />
sufficient resolution. One (relatively) high resolution data source is the<br />
Climate Forecast System (CFS). This is still only 56 km resolution, so<br />
no small scale weather patterns will appear in our simulation. In general, we <br />
will want to run a series of nested domains in order to catch some small scale weather<br />
features; however, we will proceed with a single domain example.<br />
<br />
The CFSR datasets are available at the following website, <br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis].<br />
We will browse to the [https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/ pressure] and [https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/ surface] directory<br />
containing the data for September 08, 2018. Our simulation runs from the hours 00-06 on this <br />
day, so we will download the pressure grib files for hours <br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/cdas1.t00z.pgrbh00.grib2 00] and<br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/cdas1.t06z.pgrbh00.grib2 06], and the surface grib files for hours [https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/cdas1.t00z.sfluxgrbf00.grib2 00] and<br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/cdas1.t06z.sfluxgrbf00.grib2 06].<br />
<br />
You can get these files also from here, [https://home.chpc.utah.edu/~u6015636/wiki/CFSR_20180908_00-06.tar.gz CFSR_20180908_00-06.tar.gz].<br />
<br />
=Running ungrib=<br />
<br />
With the grib files downloaded, we need to process them separately for pressure and surface variables. First, link the pressure GRIB files into the WPS directory using the script <tt>link_grib.csh</tt>. This script takes as arguments all of the grib files that are needed for the simulation. In this case, we can run the following command in the WPS directory.<br />
<pre>./link_grib.csh <path to>/CFSR_20180908_00-06/pressure/*.grib2</pre><br />
Substitute <path to> with the directory in which you have saved the grib files. This command<br />
creates a series of symbolic links with a predetermined naming sequence to all of the grib files<br />
you pass as arguments. You should now have two new soft links named <tt>GRIBFILE.AAA</tt> and <br />
<tt>GRIBFILE.AAB</tt>.<br />
<br />
With the proper links in place, we need to tell ungrib what they contain. This is done by copying a variable table into the main WPS directory. Several variable tables are distributed with WPS which describe common datasets. You can find these in the directory <tt>WPS/ungrib/Variable_Tables</tt>.<br />
In particular, the file which corresponds to the CFSR grib files is called <tt>Vtable.CFSR</tt>, so <br />
we issue the following command to copy it into the current directory.<br />
<pre>cp ungrib/Variable_Tables/Vtable.CFSR Vtable</pre><br />
We are now ready to run the ungrib executable.<br />
<pre>./ungrib.exe</pre><br />
This will create two files in the current directory named <tt>COLMET:2018-09-08_00</tt> and <tt>COLMET:2018-09-08_06</tt>. We need to rename them before processing the surface variables, so we run<br />
<pre>mv COLMET:2018-09-08_00 COLMET_P:2018-09-08_00<br />
mv COLMET:2018-09-08_06 COLMET_P:2018-09-08_06</pre><br />
and remove the <tt>GRIBFILE.*</tt> links by running<br />
<pre>rm GRIBFILE.*</pre><br />
<br />
Now we repeat the process for the surface variables.<br />
<pre>./link_grib.csh <path to>/CFSR_20180908_00-06/surface/*.grib2</pre><br />
Substitute <path to> with the directory in which you have saved the grib files. You should now have two new soft links named <tt>GRIBFILE.AAA</tt> and <tt>GRIBFILE.AAB</tt>.<br />
We are now ready to run the ungrib executable again.<br />
<pre>./ungrib.exe</pre><br />
This will again create two files named <tt>COLMET:2018-09-08_00</tt> and <tt>COLMET:2018-09-08_06</tt> in the current directory. We need to rename them as well, so we run<br />
<pre>mv COLMET:2018-09-08_00 COLMET_S:2018-09-08_00<br />
mv COLMET:2018-09-08_06 COLMET_S:2018-09-08_06</pre><br />
The four files <tt>COLMET_P:2018-09-08_00</tt>, <tt>COLMET_P:2018-09-08_06</tt>, <tt>COLMET_S:2018-09-08_00</tt>, and <tt>COLMET_S:2018-09-08_06</tt> are the resulting files which can be downloaded here, [https://home.chpc.utah.edu/~u6015636/wiki/ungrib_output.tar.gz ungrib_output.tar.gz].<br />
<br />
=Running metgrid=<br />
<br />
Metgrid will take the files created by ungrib and geogrid and combine them into a set of <tt>met_em</tt> files that serve as input to <tt>real.exe</tt>. At this point, all we need to do is run it.<br />
<pre>./metgrid.exe</pre><br />
This creates two files named <tt>met_em.d01.2018-09-08_00:00:00.nc</tt> and <tt>met_em.d01.2018-09-08_06:00:00.nc</tt>, which you can download here, [https://home.chpc.utah.edu/~u6015636/wiki/metgrid_output.tar.gz metgrid_output.tar.gz].<br />
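It can be worth checking that the fire fields were carried through metgrid and reading off the number of metgrid levels that <tt>namelist.input</tt> will need. A minimal sketch, assuming the <tt>netCDF4</tt> package is installed (use <tt>ncdump -h</tt> instead if you prefer):<br />
<pre>from netCDF4 import Dataset<br />
<br />
met = Dataset("met_em.d01.2018-09-08_00:00:00.nc")<br />
<br />
# This value must match num_metgrid_levels in the &domains section of namelist.input (38 here).<br />
print("num_metgrid_levels =", len(met.dimensions["num_metgrid_levels"]))<br />
<br />
# The fire-grid static fields from geogrid should be present in the metgrid output as well.<br />
for var in ("NFUEL_CAT", "ZSF"):<br />
    print(var, "present:", var in met.variables)</pre><br />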
<br />
=Running WRF-SFIRE=<br />
<br />
We are now finished with all steps involving WPS. All we need to do is copy the metgrid output<br />
files over to our WRF real run directory at <tt>WRF-SFIRE/test/em_real</tt> and configure our WRF namelist.<br />
We will need to be sure that the domain description in <tt>namelist.input</tt> matches that of<br />
the <tt>namelist.wps</tt> we created previously, otherwise WRF will refuse to run. Pay particular attention<br />
to the start/stop times and the grid sizes. The fire ignition parameters are configured<br />
in the same way as for the ideal case. Relevant portions of the namelist we will use are given below.<br />
<pre>&time_control<br />
run_days = 0<br />
run_hours = 6<br />
run_minutes = 0<br />
run_seconds = 0<br />
start_year = 2018<br />
start_month = 9<br />
start_day = 8<br />
start_hour = 0<br />
start_minute = 0<br />
start_second = 0<br />
end_year = 2018<br />
end_month = 9<br />
end_day = 8<br />
end_hour = 6<br />
end_minute = 0<br />
end_second = 0<br />
interval_seconds = 21600<br />
input_from_file = .true.<br />
history_interval = 30<br />
frames_per_outfile = 1000<br />
restart = .false.<br />
restart_interval = 180<br />
io_form_history = 2<br />
io_form_restart = 2<br />
io_form_input = 2<br />
io_form_boundary = 2<br />
debug_level = 1<br />
/<br />
<br />
&domains<br />
time_step = 0<br />
time_step_fract_num = 1<br />
time_step_fract_den = 2<br />
max_dom = 1<br />
s_we = 1<br />
e_we = 97<br />
s_sn = 1<br />
e_sn = 97<br />
s_vert = 1<br />
e_vert = 41<br />
num_metgrid_levels = 38<br />
num_metgrid_soil_levels = 4<br />
dx = 100<br />
dy = 100<br />
grid_id = 1<br />
parent_id = 1<br />
i_parent_start = 1<br />
j_parent_start = 1<br />
parent_grid_ratio = 1<br />
parent_time_step_ratio = 1<br />
feedback = 1<br />
smooth_option = 0<br />
sr_x = 20<br />
sr_y = 20<br />
sfcp_to_sfcp = .true.<br />
p_top_requested = 10000<br />
/<br />
<br />
&bdy_control<br />
spec_bdy_width = 5<br />
spec_zone = 1<br />
relax_zone = 4<br />
specified = .true.<br />
periodic_x = .false.<br />
symmetric_xs = .false.<br />
symmetric_xe = .false.<br />
open_xs = .false.<br />
open_xe = .false.<br />
periodic_y = .false.<br />
symmetric_ys = .false.<br />
symmetric_ye = .false.<br />
open_ys = .false.<br />
open_ye = .false.<br />
nested = .false.<br />
/</pre><br />
It is worth mentioning the different <tt>ifire</tt> options implemented:<br />
* <tt>ifire = 1</tt>: the up-to-date WRF-SFIRE code<br />
* <tt>ifire = 2</tt>: the 2012 fire code included in WRF, with changes made at NCAR<br />
Visit [https://github.com/openwfm/WRF-SFIRE/blob/master/README-SFIRE.md README-SFIRE.md] for more details.<br />
<br />
The full namelist used can be found on [https://pastebin.com/V0kGcuS5 pastebin] or in [https://home.chpc.utah.edu/~u6015636/wiki/namelist.input namelist.input].<br />
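Because mismatches between <tt>namelist.wps</tt> and <tt>namelist.input</tt> are a common source of failures, a quick cross-check of the shared values can save time. The sketch below is one way to do it, assuming the third-party <tt>f90nml</tt> package is installed; any namelist parser would work as well.<br />
<pre>import f90nml<br />
<br />
wps = f90nml.read("namelist.wps")     # WPS namelist (share and geogrid sections)<br />
wrf = f90nml.read("namelist.input")   # WRF namelist (domains section)<br />
<br />
checks = [<br />
    ("e_we", wps["geogrid"]["e_we"], wrf["domains"]["e_we"]),<br />
    ("e_sn", wps["geogrid"]["e_sn"], wrf["domains"]["e_sn"]),<br />
    ("dx",   wps["geogrid"]["dx"],   wrf["domains"]["dx"]),<br />
    ("dy",   wps["geogrid"]["dy"],   wrf["domains"]["dy"]),<br />
    ("fire refinement x", wps["share"]["subgrid_ratio_x"], wrf["domains"]["sr_x"]),<br />
    ("fire refinement y", wps["share"]["subgrid_ratio_y"], wrf["domains"]["sr_y"]),<br />
]<br />
for name, a, b in checks:<br />
    status = "OK" if a == b else "MISMATCH"<br />
    print(name, ":", a, "vs", b, "->", status)</pre><br />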
<br />
Once the namelist is properly configured we run the WRF real preprocessor.<br />
<pre>./real.exe</pre><br />
This creates the initial and boundary files for the WRF simulation and fills all missing fields<br />
from the grib data with reasonable defaults. The files that it produces are <tt>wrfbdy_d01</tt><br />
and <tt>wrfinput_d01</tt>, which can be downloaded here, [https://home.chpc.utah.edu/~u6015636/wiki/wrf_real_output.tar.gz wrf_real_output.tar.gz].<br />
<br />
To prepare for running the fire model, copy its parameters here:<br />
<pre><br />
cp ../em_fire/hill/namelist.fire .<br />
cp ../em_fire/hill/namelist.fire_emissions .<br />
</pre><br />
Finally, we run the simulation.<br />
<pre>./wrf.exe</pre><br />
The history file for this example can be downloaded here, [https://home.chpc.utah.edu/~u6015636/wiki/wrf_real_history.tar.gz wrf_real_history.tar.gz].<br />
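As a final check of the coupled run, the history output can be inspected much like the geogrid file. The sketch below assumes the <tt>netCDF4</tt> and <tt>matplotlib</tt> packages, the default WRF history file name for this start time, and the WRF-SFIRE variable <tt>FIRE_AREA</tt> (the burning fraction on the fire grid); use <tt>ncdump -h</tt> to confirm the names in your output.<br />
<pre>from netCDF4 import Dataset<br />
import matplotlib.pyplot as plt<br />
<br />
out = Dataset("wrfout_d01_2018-09-08_00:00:00")  # default WRF history file name<br />
fire_area = out.variables["FIRE_AREA"][-1]       # last history frame, on the fire subgrid<br />
plt.pcolormesh(fire_area)<br />
plt.colorbar(label="fraction of fire cell burned")<br />
plt.title("FIRE_AREA at the final history time")<br />
plt.show()</pre><br />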
<br />
[[Category:WRF-Fire]]<br />
[[Category:Data]]<br />
[[Category:Howtos|Run WRF-SFIRE with real data]]</div>Afarguell
<hr />
<div>{{historical|Running WRF-SFIRE with real data in the WRFx system}}<br />
Running WRF-SFIRE with real data is a process very similar to running WRF with real data for weather simulations.<br />
The [http://www.mmm.ucar.edu/wrf/users WRF users page] has many <br />
[https://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/contents.html documents] and <br />
[http://www2.mmm.ucar.edu/wrf/users/supports/tutorial.html tutorials] outlining this process. The purpose<br />
of this page is to provide a tutorial for using real data with WRF-SFIRE starting from scratch. We begin with a quick outline of the<br />
steps involved including links to the output of each step. The user can use these linked files to start from any step or to verify their own results. Due to platform and compiler differences your output might differ slightly from those provided.<br />
<br />
''This page refers to data sources for the USA only. For other countries, you will need to make appropriate modifications yourself.''<br />
<br />
=Outline=<br />
<br />
# [[How_to_get_WRF-SFIRE|Compile WRF-SFIRE source code]] with target em_real.<br />
# [[#Compiling WPS|Compile WPS]].<br />
# [[#Configuring_the_domain|Configure your domain]].<br />
# [[#Obtaining data for geogrid|Download geogrid datasets]].<br />
# [[#Converting fire data|Converting fire data]].<br />
# [[#Running geogrid|Run the geogrid executable]].<br />
# [[#Obtaining atmospheric data|Download atmospheric data]].<br />
# [[#Running ungrib|Run the ungrib executable]].<br />
# [[#Running metgrid|Run the metgrid executable]].<br />
# [[#Running wrf|Run real.exe and wrf.exe]].<br />
<br />
=Compiling WPS=<br />
<br />
After you have compiled WRF-SFIRE, <code>git clone https://github.com/openwfm/WPS</code> at the same directory level as WRF-SFIRE, change to <code>WPS</code> and run <br />
<code>./configure</code>. This will present you with a list of configuration options similar to those given by WRF.<br />
You will need to chose one with the same compiler that you used to compile WRF-SFIRE. Generally, it is unnecessary to compile WPS with parallel support.<br />
GRIB2 support is only necessary if your atmospheric data source requires it. Once you have chosen a configuration, you can compile with<br />
<pre>./compile >& compile.log</pre><br />
Make sure to check for errors in the log file generated.<br />
<br />
=Configuring the domain=<br />
<br />
The physical domain is configured in the geogrid section of <tt>namelist.wps</tt> in the WPS directory. In this section, you should define<br />
the geographic projection with <tt>map_proj</tt>, <tt>truelat1</tt>, <tt>truelat2</tt>, and <tt>stand_lon</tt>. Available projections<br />
include <tt>'lambert'</tt>, <tt>'polar'</tt>, <tt>'mercator'</tt>, and <tt>'lat-lon'</tt>. The center of the coarse domain is located at <tt>ref_lon</tt> longitude and <tt>ref_lat</tt> latitude. The computational grid is defined by <tt>e_we/e_sn</tt>, the number of (staggered) grid points in the west-east/south-north direction, and the grid resolution is defined by <tt>dx</tt> and <tt>dy</tt> in meters. <br />
We also specify a path to where we will put the static dataset that geogrid will read from, and we specify the highest resolution (.3 arc seconds) that this data is released in.<br />
<br />
<pre>&geogrid<br />
e_we = 97,<br />
e_sn = 97,<br />
geog_data_res = '.3s',<br />
dx = 100,<br />
dy = 100,<br />
map_proj = 'lambert',<br />
ref_lat = 39.728996<br />
ref_lon = -112.48999<br />
truelat1 = 39.5<br />
truelat2 = 39.9<br />
stand_lon = -112.8<br />
geog_data_path = '../WPS_GEOG'<br />
/</pre><br />
<br />
The share section of the WPS namelist defines the fire subgrid refinement in <tt>subgrid_ratio_x</tt> and <tt>subgrid_ratio_y</tt>. This means that the fire grid will be a 20 time refined grid at a resolution of 5 meters by 5 meters. The <tt>start_date</tt> and <tt>end_data</tt> parameters specify the time window that the simulation will be run in. Atmospheric data must be available at both temporal boundaries. The <tt>interval_seconds</tt> parameter tells WPS the number of seconds between each atmospheric dataset. For our example, we will be using the CFSR dataset which is released daily every six hours or 21,600 seconds.<br />
<br />
<pre>&share<br />
wrf_core = 'ARW',<br />
max_dom = 1,<br />
start_date = '2018-09-08_00:00:00',<br />
end_date = '2018-09-08_06:00:00',<br />
interval_seconds = 21600,<br />
io_form_geogrid = 2,<br />
subgrid_ratio_x = 20,<br />
subgrid_ratio_y = 20,<br />
/</pre><br />
The full namelist used can be found in [https://pastebin.com/6rV2Qg8Y pastebin] or [https://home.chpc.utah.edu/~u6015636/wiki/namelist.wps namelist.wps].<br />
<br />
=Obtaining data for geogrid=<br />
<br />
First, you must download and uncompress the standard [https://www2.mmm.ucar.edu/wrf/src/wps_files/geog_high_res_mandatory.tar.gz geogrid input data] as explained [https://www2.mmm.ucar.edu/wrf/users/download/get_sources_wps_geog.html here].<br />
This is a 2.6 GB compressed tarball that uncompresses to around 29 GB. It contains all of the static data that geogrid needs for a standard weather simulation; however, for a WRF-SFIRE simulation we need to fill in two additional fields that are too big to release in a single download for the whole globe. We first need to determine the approximate latitude and longitude bounds for our domain.<br />
<br />
We know the coordinates in the center from the <tt>ref_lon</tt> and <tt>ref_lat</tt> parameters of the namelist. We can estimate the<br />
coordinates of the lower-left corner and upper-right corner by the approximate ratio 9e-6 degrees per meter. So, the lower-left and upper-right corners of our domain are at approximately <br />
<pre>ref_lon ± (97-1)/2*100*9e-6<br />
ref_lat ± (97-1)/2*100*9e-6</pre><br />
Therefore for the purposes of downloading data, we will expand this region to the range -112.55 through -112.4 longitude and 39.65 through 39.8 latitude.<br />
<br />
==Downloading fuel category data==<br />
<br />
For the United States, Anderson 13 fuel category data is available at the [https://landfire.cr.usgs.gov/viewer/viewer.html Landfire] website. Upon opening the national map, click on the <tt>Download Tool</tt> [1] and you will see a menu on the right of the screen. Click on the <tt>LF 2016 Remap (LF_200)</tt>, then <tt>Fuel</tt>, and <tt>us_200 13 Fire Behavior Fuel Models-Anderson</tt> [2]. Finally, click on the <tt>Define Download Area By Coordinates</tt> button [3].<br />
<br />
[[File:Landfire_new1.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
This will open a new window on the right with a form that lets you key in the longitude and latitude range of your selection. In this window, we will input the coordinates computed earlier [4], and below we will click the <tt>Download Area</tt> button [5].<br />
<br />
[[File:Landfire_new2.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
In the next window, click on the <tt>Modify</tt> button [6]. This will open a new window listing all of the available data products for the selected region. Make sure only the box next to <tt>US_200 13 Fire Behavior Fuel Models-Anderson</tt> is checked and change the data format from <tt>ArcGRID_with_attribs</tt> to <tt>GeoTIFF_with _attribs</tt>. At the bottom make sure <tt>Maximum size (MB) per piece:</tt> is set to 250. Then go to the bottom of the page and click <tt>Save Changes & Return to Summary</tt>.<br />
[[File:Landfire_new3.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, click on the <tt>Download</tt> button [7]. The file will be a compressed archive containing, among others, a GeoTIFF file. The name of the file will be different for each request, but in this example we have [https://home.chpc.utah.edu/~u6015636/wiki/lf45409014_US_200FBFM13.zip lf45409014_US_200FBFM13.zip] containing the GeoTIFF file <tt>US_200FBFM13.tif</tt>, which can be found [[File:US_200FBFM13.tif]] or [https://home.chpc.utah.edu/~u6015636/wiki/US_200FBFM13.tif US_200FBFM13.tif].<br />
<br />
[[File:Landfire_new4.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
==Downloading high resolution elevation data==<br />
<br />
For the United States, elevation data is also available at the [https://landfire.cr.usgs.gov/viewer/viewer.html Landfire] website. Repeat the steps described above for downloading the fuel data, but selecting instead <tt>Topographic</tt> and <tt>us_Elevation</tt>. <br />
<br />
[[File:Landfire_new5.png|700px|center]]<br />
<br />
Again, we key in the coordinates determined before and click the <tt>Download Area</tt> button. <br />
<br style="clear: both" /><br />
In the next window click again <tt>Modify</tt>, make sure only <tt>us_Elevation</tt> is selected, change the format to <tt>Geotiff</tt> and click <tt>Save Changes & Return to Summary</tt><br />
<br />
[[File:Landfire_new6.png|700px|center]]<br />
<br />
<br style="clear: both" /><br />
In the next window, you should be able to click <tt>Download</tt> in order to download the GeoTIFF file containing topography. You will obtain the zip file [https://home.chpc.utah.edu/~u6015636/wiki/lf34682161_US_DEM2016.zip lf34682161_US_DEM2016.zip] containing a GeoTIFF file that can be downloaded from [[File:US_DEM2016.tif]] or [https://home.chpc.utah.edu/~u6015636/wiki/US_DEM2016.tif US_DEM2016.tif].<br />
<br />
=Converting fire data=<br />
<br />
This section describes converting data from geotiff to geogrid format. <br />
<br />
In order for geogrid to be able to read this data, we need to convert it into an intermediate format. We will be using a utility program included on [https://github.com/openwfm/wrfxpy wrfxpy] repository. For information on how to obtain and use this tool, see [[How_to_convert_data_for_Geogrid|How to convert data for Geogrid]]. We will go to the wrfxpy installation already obtained and move the geotiff files inside the directory.<br />
<br />
To convert the fuel and elevation data, we will run <br />
./convert_geotiff.sh US_200FBFM13.tif geo_data NFUEL_CAT<br />
./convert_geotiff.sh US_DEM2016.tif geo_data ZSF<br />
<br />
The resulting <tt>geo_data/NFUEL_CAT/index</tt> file is created as follows.<br />
projection = albers_nad83<br />
dx = 30.0<br />
dy = -30.0<br />
truelat1 = 29.5<br />
truelat2 = 45.5<br />
stdlon = -96.0<br />
known_x = 258.0<br />
known_y = 313.0<br />
known_lon = -112.47513542444187<br />
known_lat = 39.725087912688274<br />
row_order = top_bottom<br />
description = "Anderson 13 fire behavior categories"<br />
units = "fuel category"<br />
type = categorical<br />
signed = yes<br />
category_min = 0<br />
category_max = 14<br />
scale_factor = 1.0<br />
wordsize = 2<br />
tile_x = 515<br />
tile_y = 625<br />
tile_z = 1<br />
endian = little<br />
<br />
We have chosen to set the word size to 1 byte because it can represent 256 categories, plenty for this purpose. Notice that the program has changed the number of categories to 14 and uses the last category to indicate that the source data was out of the range 1-13. For the fuel category data, this represents that there is no fuel present, due to a lake, river, road, etc. <br />
<br />
We can check that the projection information entered into the index file is correct, by running the <tt>gdalinfo</tt> binary that is installed with GDAL. In this case, <tt>gdalinfo</tt> tells us that the source file contains the following projection parameters.<br />
<br />
Driver: GTiff/GeoTIFF<br />
Files: US_200FBFM13.tif<br />
Size is 515, 625<br />
Coordinate System is:<br />
PROJCS["USA_Contiguous_Albers_Equal_Area_Conic_USGS_version",<br />
GEOGCS["NAD83",<br />
DATUM["North_American_Datum_1983",<br />
SPHEROID["GRS 1980",6378137,298.2572221010042,<br />
AUTHORITY["EPSG","7019"]],<br />
AUTHORITY["EPSG","6269"]],<br />
PRIMEM["Greenwich",0],<br />
UNIT["degree",0.0174532925199433],<br />
AUTHORITY["EPSG","4269"]],<br />
PROJECTION["Albers_Conic_Equal_Area"],<br />
PARAMETER["standard_parallel_1",29.5],<br />
PARAMETER["standard_parallel_2",45.5],<br />
PARAMETER["latitude_of_center",23],<br />
PARAMETER["longitude_of_center",-96],<br />
PARAMETER["false_easting",0],<br />
PARAMETER["false_northing",0],<br />
UNIT["metre",1,<br />
AUTHORITY["EPSG","9001"]]]<br />
Origin = (-1400235.000000000000000,1986555.000000000000000)<br />
Pixel Size = (30.000000000000000,-30.000000000000000)<br />
Metadata:<br />
AREA_OR_POINT=Area<br />
DataType=Thematic<br />
Image Structure Metadata:<br />
INTERLEAVE=BAND<br />
Corner Coordinates:<br />
Upper Left (-1400235.000, 1986555.000) (112d35' 1.88"W, 39d47'44.01"N)<br />
Lower Left (-1400235.000, 1967805.000) (112d32'44.10"W, 39d37'50.78"N)<br />
Upper Right (-1384785.000, 1986555.000) (112d24'16.21"W, 39d49' 9.72"N)<br />
Lower Right (-1384785.000, 1967805.000) (112d21'59.86"W, 39d39'16.30"N)<br />
Center (-1392510.000, 1977180.000) (112d28'30.49"W, 39d43'30.32"N)<br />
Band 1 Block=128x128 Type=Int16, ColorInterp=Gray<br />
NoData Value=-9999<br />
Metadata:<br />
RepresentationType=THEMATIC <br />
<br />
The resulting <tt>geo_data/ZSF/index</tt> file is created as follows.<br />
<br />
projection = albers_nad83<br />
dx = 30.0<br />
dy = -30.0<br />
truelat1 = 29.5<br />
truelat2 = 45.5<br />
stdlon = -96.0<br />
known_x = 258.0<br />
known_y = 313.0<br />
known_lon = -112.47513542444187<br />
known_lat = 39.725087912688274<br />
row_order = top_bottom<br />
description = "National Elevation Dataset 1/3 arcsecond resolution"<br />
units = "meters"<br />
type = continuous<br />
signed = yes<br />
scale_factor = 1.0<br />
wordsize = 2<br />
tile_x = 515<br />
tile_y = 625<br />
tile_z = 1<br />
endian = little<br />
<br />
Here we have used word size of 2 bytes and a scale factor of 1.0, which can represent any elevation in the world with 1-meter accuracy, which is approximately the accuracy of the source data.<br />
<br />
Again, we compare the projection parameters in the index file with that reported by <tt>gdalinfo</tt> and find that the conversion was correct.<br />
<br />
Driver: GTiff/GeoTIFF<br />
Files: US_DEM2016.tif<br />
Size is 515, 625<br />
Coordinate System is:<br />
PROJCS["USA_Contiguous_Albers_Equal_Area_Conic_USGS_version",<br />
GEOGCS["NAD83",<br />
DATUM["North_American_Datum_1983",<br />
SPHEROID["GRS 1980",6378137,298.2572221010042,<br />
AUTHORITY["EPSG","7019"]],<br />
AUTHORITY["EPSG","6269"]],<br />
PRIMEM["Greenwich",0],<br />
UNIT["degree",0.0174532925199433],<br />
AUTHORITY["EPSG","4269"]],<br />
PROJECTION["Albers_Conic_Equal_Area"],<br />
PARAMETER["standard_parallel_1",29.5],<br />
PARAMETER["standard_parallel_2",45.5],<br />
PARAMETER["latitude_of_center",23],<br />
PARAMETER["longitude_of_center",-96],<br />
PARAMETER["false_easting",0],<br />
PARAMETER["false_northing",0],<br />
UNIT["metre",1,<br />
AUTHORITY["EPSG","9001"]]]<br />
Origin = (-1400235.000000000000000,1986555.000000000000000)<br />
Pixel Size = (30.000000000000000,-30.000000000000000)<br />
Metadata:<br />
AREA_OR_POINT=Area<br />
DataType=Thematic<br />
Image Structure Metadata:<br />
INTERLEAVE=BAND<br />
Corner Coordinates:<br />
Upper Left (-1400235.000, 1986555.000) (112d35' 1.88"W, 39d47'44.01"N)<br />
Lower Left (-1400235.000, 1967805.000) (112d32'44.10"W, 39d37'50.78"N)<br />
Upper Right (-1384785.000, 1986555.000) (112d24'16.21"W, 39d49' 9.72"N)<br />
Lower Right (-1384785.000, 1967805.000) (112d21'59.86"W, 39d39'16.30"N)<br />
Center (-1392510.000, 1977180.000) (112d28'30.49"W, 39d43'30.32"N)<br />
Band 1 Block=128x128 Type=Int16, ColorInterp=Gray<br />
NoData Value=-9999<br />
Metadata:<br />
RepresentationType=THEMATIC<br />
<br />
Finally, the converted data can be found here [http://math.ucdenver.edu/~farguella/files/wiki/geo_data.tar.gz geo_data.tar.gz].<br />
<br />
=Running geogrid=<br />
<br />
The geogrid binary will create a NetCDF file called <tt>geo_em.d01.nc</tt>. This file will contain all of the static data necessary to run your simulation. Before we can run the binary, however, we must tell geogrid what data needs to be in these files, where it can find them, and what kind of preprocessing we want to be done. This information is contained in a run-time configuration file called <tt>GEOGRID.TBL</tt>, which is located in the <tt>geogrid</tt> subdirectory. The file that is released with WPS contains reasonable defaults for the variables defined on the atmospheric grid, but we need to add two additional sections for the two fire grid data sets that we have just created. We will append the <tt>geo_data/GEOGRID.TBL</tt> sections to the file <tt>geogrid/GEOGRID.TBL</tt>.<br />
===============================<br />
name = NFUEL_CAT<br />
dest_type = categorical<br />
interp_option = default:nearest_neighbor+average_16pt+search<br />
abs_path = /absolute/path/to/geo_data/NFUEL_CAT<br />
priority = 1<br />
fill_missing = 14<br />
subgrid = yes<br />
dominant_only = NFUEL_CAT<br />
z_dim_name = fuel_cat<br />
halt_on_missing = no<br />
===============================<br />
name = ZSF<br />
dest_type = continuous<br />
interp_option = default:average_gcell(4.0)+four_pt+average_4pt<br />
abs_path = /absolute/path/to/geo_data/ZSF<br />
priority = 1<br />
fill_missing = 0<br />
smooth_option = smth-desmth_special; smooth_passes=1<br />
subgrid = yes<br />
df_dx = DZDXF<br />
df_dy = DZDYF<br />
halt_on_missing = no<br />
===============================<br />
<br />
For <tt>NFUEL_CAT</tt>, we will use simple nearest-neighbor interpolation, while for <tt>ZSF</tt>, we will use bilinear interpolation with smoothing. Other configurations are possible. See the [https://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/v4.2/users_guide_chap3.html#_Description_of_GEOGRID.TBL WPS users guide] for further information. The full table used can be found [http://pastebin.com/kdymq5ff pastebin] or [http://math.ucdenver.edu/~farguella/files/wiki/GEOGRID.TBL GEOGRID.TBL].<br />
<br />
Once we make these changes to the <tt>GEOGRID.TBL</tt> file, and ensure that all of the directories are in the correct place (including the default geogrid dataset at <tt>../../WPS_GEOG</tt>), we can execute the geogrid binary.<br />
<pre>./geogrid.exe</pre><br />
This will create a file called <tt>geo_em.d01.nc</tt> in the current directory, which can be found here, [http://math.ucdenver.edu/~farguella/files/wiki/geogrid_output.tar.gz geogrid_output.tar.gz]. The contents of this file can be viewed using your favorite NetCDF viewer.<br />
<br />
<center><br />
<gallery caption="geo_em.d01.nc" widths="250px" heights="250px" perrow="3" class="center"><br />
File:Nfuel_cat_new.png|The fuel category data interpolated to the model grid.The<br />
File:Zsf_new.png|The high resolution elevation (1/3") data interpolated to the model grid.<br />
File:Hgt_m_new.png|The low resolution elevation data (30") data interpolated to the atmospheric grid<br />
</gallery><br />
</center><br />
Here, we have visualized the fire grid variables, <tt>NFUEL_CAT</tt> and <tt>ZSF</tt>, as well as the <br />
variable <tt>HGT_M</tt>, which is the elevation data used by the atmospheric model. We can compare<br />
<tt>ZSF</tt> and <tt>HGT_M</tt> to verify that our data conversion process worked. The colormaps of these<br />
two pictures have been aligned, so that we can make a quick visual check. As we see, the two images do<br />
have a similar structure and magnitude, but they do seem to suffer some misalignment. Given that <br />
the data came from two different sources, in two different projections, the error is relatively minor. <br />
Because WPS converts between projections in single precision, by default, there is likely a significant <br />
issue with floating point error. We may, in the future, consider making some changes so that this conversion is done in double precision.<br />
<br />
=Obtaining atmospheric data=<br />
<br />
There are a number of datasets available to initialize a WRF real run. The <br />
[https://www2.mmm.ucar.edu/wrf/users/download/free_data.html WRF users page] lists<br />
a few. One challenge in running a fire simulation is finding a dataset of <br />
sufficient resolution. One (relatively) high resolution data source is the<br />
Climate Forecast System (CFS). This is still only 56 km resolution, so<br />
no small scale weather patterns will appear in our simulation. In general, we <br />
will want to run a series of nested domains in order to catch some small scale weather<br />
features; however, we will proceed with a single domain example.<br />
<br />
The CFSR datasets are available at the following website, <br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis].<br />
We will browse to the [https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/ pressure] and [https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/ surface] directory<br />
containing the data for September 08, 2018. Our simulation runs from the hours 00-06 on this <br />
day, so we will download the pressure grib files for hours <br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/cdas1.t00z.pgrbh00.grib2 00] and<br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/cdas1.t06z.pgrbh00.grib2 06], and the surface grib files for hours [https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/cdas1.t00z.sfluxgrbf00.grib2 00] and<br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/cdas1.t06z.sfluxgrbf00.grib2 06].<br />
<br />
You can get these files also from here, [http://math.ucdenver.edu/~farguella/files/wiki/CFSR_20180908_00-06.tar.gz CFSR_20180908_00-06.tar.gz].<br />
<br />
=Running ungrib=<br />
<br />
With the grib files downloaded, we need to process them separately for pressure and surface variables. We need to link the pressure GRIB files into the WPS directory using the script <tt>link_grib.csh</tt>. This script takes as arguments all of the grib files that are needed for the simulation. In this case, we can run the following command in the WPS directory.<br />
<pre>./link_grib.csh <path to>/CFSR_20180908_00-06/pressure/*.grib2</pre><br />
Substitute <path to> with the directory in which you have saved the grib files. This command<br />
creates a series of symbolic links with a predetermined naming sequence to all of the grib files<br />
you pass as arguments. You should now have two new soft links named <tt>GRIBFILE.AAA</tt> and <br />
<tt>GRIBFILE.AAB</tt>.<br />
<br />
With the proper links in place, we need to tell ungrib what they contain. This is done by copying a variable table into the main WPS directory. Several variable tables are distributed with WPS which describe common datasets. You can find these in the directory <tt>WPS/ungrib/Variable_Tables</tt>.<br />
In particular, the file which corresponds to the CFSR grib files is called <tt>Vtable.CFSR</tt>, so <br />
we issue the following command to copy it into the current directory.<br />
<pre>cp ungrib/Variable_Tables/Vtable.CFSR Vtable</pre><br />
We are now ready to run the ungrib executable.<br />
<pre>./ungrib.exe</pre><br />
This will create two files in the current directory named <tt>COLMET:2018-09-08_00</tt> and <tt>COLMET:2018-09-08_06</tt>. We need to change their name before processing surface variables. So<br />
<pre>mv COLMET:2018-09-08_00 COLMET_P:2018-09-08_00<br />
mv COLMET:2018-09-08_06 COLMET_P:2018-09-08_06</pre><br />
and remove the <tt>GRIBFILE.*</tt> files doing<br />
<pre>rm GRIBFILE.*</pre><br />
<br />
Now we can start over for processing surface variables<br />
<pre>./link_grib.csh <path to>/CFSR_20180908_00-06/surface/*.grib2</pre><br />
Substitute <path to> with the directory in which you have saved the grib files. You should now have two new soft links named <tt>GRIBFILE.AAA</tt> and <tt>GRIBFILE.AAB</tt>.<br />
We are now ready to run the ungrib executable again.<br />
<pre>./ungrib.exe</pre><br />
This will create two files in the current directory named <tt>COLMET:2018-09-08_00</tt> and <tt>COLMET:2018-09-08_06</tt>. We need to change their name. So<br />
<pre>mv COLMET:2018-09-08_00 COLMET_S:2018-09-08_00<br />
mv COLMET:2018-09-08_06 COLMET_S:2018-09-08_06</pre><br />
The four files <tt>COLMET_P:2018-09-08_00</tt>, <tt>COLMET_P:2018-09-08_06</tt>, <tt>COLMET_S:2018-09-08_00</tt>, and <tt>COLMET_S:2018-09-08_06</tt> are the resulting files which can be downloaded here, [http://math.ucdenver.edu/~farguella/files/wiki/ungrib_output.tar.gz ungrib_output.tar.gz].<br />
<br />
=Running metgrid=<br />
<br />
Metgrid will take the files created by ungrib and geogrid and combine them into a set of files. At this point, all we need to do is run it.<br />
<pre>./metgrid.exe</pre><br />
This creates two files named <tt>met_em.d01.2018-09-08_00:00:00.nc</tt> and <tt>met_em.d01.2018-09-08_06:00:00.nc</tt>, which you can download here, [http://math.ucdenver.edu/~farguella/files/wiki/metgrid_output.tar.gz metgrid_output.tar.gz].<br />
<br />
=Running WRF-SFIRE=<br />
<br />
We are now finished with all steps involving WPS. All we need to do is copy over the metgrid output<br />
files over to our WRF real run directory at <tt>WRF-SFIRE/test/em_real</tt> and configure our WRF namelist.<br />
We will need to be sure that the domain description in <tt>namelist.input</tt> matches that of <br />
the <tt>namelist.wps</tt> we created previously, otherwise WRF will refuse to run. Pay particular attention<br />
to the start/stop times and the grid sizes. The fire ignition parameters are configured<br />
in the same way as for the ideal case. Relevant portion of the namelist we will use are given below.<br />
<pre>&time_control<br />
run_days = 0<br />
run_hours = 6<br />
run_minutes = 0<br />
run_seconds = 0<br />
start_year = 2018<br />
start_month = 9<br />
start_day = 8<br />
start_hour = 0<br />
start_minute = 0<br />
start_second = 0<br />
end_year = 2018<br />
end_month = 9<br />
end_day = 8<br />
end_hour = 6<br />
end_minute = 0<br />
end_second = 0<br />
interval_seconds = 21600<br />
input_from_file = .true.<br />
history_interval = 30<br />
frames_per_outfile = 1000<br />
restart = .false.<br />
restart_interval = 180<br />
io_form_history = 2<br />
io_form_restart = 2<br />
io_form_input = 2<br />
io_form_boundary = 2<br />
debug_level = 1<br />
/<br />
<br />
&domains<br />
time_step = 0<br />
time_step_fract_num = 1<br />
time_step_fract_den = 2<br />
max_dom = 1<br />
s_we = 1<br />
e_we = 97<br />
s_sn = 1<br />
e_sn = 97<br />
s_vert = 1<br />
e_vert = 41<br />
num_metgrid_levels = 38<br />
num_metgrid_soil_levels = 4<br />
dx = 100<br />
dy = 100<br />
grid_id = 1<br />
parent_id = 1<br />
i_parent_start = 1<br />
j_parent_start = 1<br />
parent_grid_ratio = 1<br />
parent_time_step_ratio = 1<br />
feedback = 1<br />
smooth_option = 0<br />
sr_x = 20<br />
sr_y = 20<br />
sfcp_to_sfcp = .true.<br />
p_top_requested = 10000<br />
/<br />
<br />
&bdy_control<br />
spec_bdy_width = 5<br />
spec_zone = 1<br />
relax_zone = 4<br />
specified = .true.<br />
periodic_x = .false.<br />
symmetric_xs = .false.<br />
symmetric_xe = .false.<br />
open_xs = .false.<br />
open_xe = .false.<br />
periodic_y = .false.<br />
symmetric_ys = .false.<br />
symmetric_ye = .false.<br />
open_ys = .false.<br />
open_ye = .false.<br />
nested = .false.<br />
/</pre><br />
It is worth mentioning the different <tt>ifire</tt> options implemented:<br />
* <tt>ifire = 1</tt>: WRF-SFIRE code up to date<br />
* <tt>ifire = 2</tt>: Fire code from 2012 in WRF with changes at NCAR<br />
Visit [https://github.com/openwfm/WRF-SFIRE/blob/master/README-SFIRE.md README-SFIRE.md] for more details.<br />
<br />
The full namelist used can be found [https://pastebin.com/V0kGcuS5 pastebin] or [http://math.ucdenver.edu/~farguella/files/wiki/namelist.input namelist.input].<br />
<br />
Once the namelist is properly configured we run the WRF real preprocessor.<br />
<pre>./real.exe</pre><br />
This creates the initial and boundary files for the WRF simulation and fills all missing fields<br />
from the grib data with reasonable defaults. The files that it produces are <tt>wrfbdy_d01</tt><br />
and <tt>wrfinput_d01</tt>, which can be downloaded here, [http://math.ucdenver.edu/~farguella/files/wiki/wrf_real_output.tar.gz wrf_real_output.tar.gz].<br />
<br />
To prepare for running the fire model, copy its parameters here:<br />
<pre><br />
cp ../em_fire/hill/namelist.fire .<br />
cp ../em_fire/hill/namelist.fire_emissions .<br />
</pre><br />
Finally, we run the simulation.<br />
<pre>./wrf.exe</pre><br />
The history file for this example can be downloaded here, [http://math.ucdenver.edu/~farguella/files/wiki/wrf_real_history.tar.gz wrf_real_history.tar.gz].<br />
<br />
[[Category:WRF-Fire]]<br />
[[Category:Data]]<br />
[[Category:Howtos|Run WRF-SFIRE with real data]]</div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=How_to_run_WRF-SFIRE_with_real_data&diff=4522How to run WRF-SFIRE with real data2023-10-02T23:30:03Z<p>Afarguell: /* Configuring the domain */</p>
<hr />
<div>{{historical|Running WRF-SFIRE with real data in the WRFx system}}<br />
Running WRF-SFIRE with real data is a process very similar to running WRF with real data for weather simulations.<br />
The [http://www.mmm.ucar.edu/wrf/users WRF users page] has many <br />
[https://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/contents.html documents] and <br />
[http://www2.mmm.ucar.edu/wrf/users/supports/tutorial.html tutorials] outlining this process. The purpose<br />
of this page is to provide a tutorial for using real data with WRF-SFIRE starting from scratch. We begin with a quick outline of the<br />
steps involved including links to the output of each step. The user can use these linked files to start from any step or to verify their own results. Due to platform and compiler differences your output might differ slightly from those provided.<br />
<br />
''This page refers to data sources for the USA only. For other countries, you will need to make appropriate modifications yourself.''<br />
<br />
=Outline=<br />
<br />
# [[How_to_get_WRF-SFIRE|Compile WRF-SFIRE source code]] with target em_real.<br />
# [[#Compiling WPS|Compile WPS]].<br />
# [[#Configuring_the_domain|Configure your domain]].<br />
# [[#Obtaining data for geogrid|Download geogrid datasets]].<br />
# [[#Converting fire data|Converting fire data]].<br />
# [[#Running geogrid|Run the geogrid executable]].<br />
# [[#Obtaining atmospheric data|Download atmospheric data]].<br />
# [[#Running ungrib|Run the ungrib executable]].<br />
# [[#Running metgrid|Run the metgrid executable]].<br />
# [[#Running wrf|Run real.exe and wrf.exe]].<br />
<br />
=Compiling WPS=<br />
<br />
After you have compiled WRF-SFIRE, <code>git clone https://github.com/openwfm/WPS</code> at the same directory level as WRF-SFIRE, change to <code>WPS</code> and run <br />
<code>./configure</code>. This will present you with a list of configuration options similar to those given by WRF.<br />
You will need to chose one with the same compiler that you used to compile WRF-SFIRE. Generally, it is unnecessary to compile WPS with parallel support.<br />
GRIB2 support is only necessary if your atmospheric data source requires it. Once you have chosen a configuration, you can compile with<br />
<pre>./compile >& compile.log</pre><br />
Make sure to check for errors in the log file generated.<br />
<br />
=Configuring the domain=<br />
<br />
The physical domain is configured in the geogrid section of <tt>namelist.wps</tt> in the WPS directory. In this section, you should define<br />
the geographic projection with <tt>map_proj</tt>, <tt>truelat1</tt>, <tt>truelat2</tt>, and <tt>stand_lon</tt>. Available projections<br />
include <tt>'lambert'</tt>, <tt>'polar'</tt>, <tt>'mercator'</tt>, and <tt>'lat-lon'</tt>. The center of the coarse domain is located at <tt>ref_lon</tt> longitude and <tt>ref_lat</tt> latitude. The computational grid is defined by <tt>e_we/e_sn</tt>, the number of (staggered) grid points in the west-east/south-north direction, and the grid resolution is defined by <tt>dx</tt> and <tt>dy</tt> in meters. <br />
We also specify a path to where we will put the static dataset that geogrid will read from, and we specify the highest resolution (.3 arc seconds) that this data is released in.<br />
<br />
<pre>&geogrid<br />
e_we = 97,<br />
e_sn = 97,<br />
geog_data_res = '.3s',<br />
dx = 100,<br />
dy = 100,<br />
map_proj = 'lambert',<br />
ref_lat = 39.728996<br />
ref_lon = -112.48999<br />
truelat1 = 39.5<br />
truelat2 = 39.9<br />
stand_lon = -112.8<br />
geog_data_path = '../WPS_GEOG'<br />
/</pre><br />
<br />
The share section of the WPS namelist defines the fire subgrid refinement in <tt>subgrid_ratio_x</tt> and <tt>subgrid_ratio_y</tt>. This means that the fire grid will be a 20 time refined grid at a resolution of 5 meters by 5 meters. The <tt>start_date</tt> and <tt>end_data</tt> parameters specify the time window that the simulation will be run in. Atmospheric data must be available at both temporal boundaries. The <tt>interval_seconds</tt> parameter tells WPS the number of seconds between each atmospheric dataset. For our example, we will be using the CFSR dataset which is released daily every six hours or 21,600 seconds.<br />
<br />
<pre>&share<br />
wrf_core = 'ARW',<br />
max_dom = 1,<br />
start_date = '2018-09-08_00:00:00',<br />
end_date = '2018-09-08_06:00:00',<br />
interval_seconds = 21600,<br />
io_form_geogrid = 2,<br />
subgrid_ratio_x = 20,<br />
subgrid_ratio_y = 20,<br />
/</pre><br />
The full namelist used can be found in [https://pastebin.com/6rV2Qg8Y pastebin] or [https://home.chpc.utah.edu/~u6015636/wiki/namelist.wps namelist.wps].<br />
<br />
=Obtaining data for geogrid=<br />
<br />
First, you must download and uncompress the standard [https://www2.mmm.ucar.edu/wrf/src/wps_files/geog_high_res_mandatory.tar.gz geogrid input data] as explained [https://www2.mmm.ucar.edu/wrf/users/download/get_sources_wps_geog.html here].<br />
This is a 2.6 GB compressed tarball that uncompresses to around 29 GB. It contains all of the static data that geogrid needs for a standard weather simulation; however, for a WRF-SFIRE simulation we need to fill in two additional fields that are too big to release in a single download for the whole globe. We first need to determine the approximate latitude and longitude bounds for our domain.<br />
<br />
We know the coordinates in the center from the <tt>ref_lon</tt> and <tt>ref_lat</tt> parameters of the namelist. We can estimate the<br />
coordinates of the lower-left corner and upper-right corner by the approximate ratio 9e-6 degrees per meter. So, the lower-left and upper-right corners of our domain are at approximately <br />
<pre>ref_lon ± (97-1)/2*100*9e-6<br />
ref_lat ± (97-1)/2*100*9e-6</pre><br />
Therefore for the purposes of downloading data, we will expand this region to the range -112.55 through -112.4 longitude and 39.65 through 39.8 latitude.<br />
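Working this arithmetic out explicitly (a minimal sketch using <tt>bc</tt>; any calculator will do): the half-width of the domain is (97-1)/2 * 100 m * 9e-6 degrees/m, about 0.0432 degrees, so the computed corners are roughly -112.533 to -112.447 longitude and 39.686 to 39.772 latitude, which the download range above rounds outward.<br />
<pre>echo "(97-1)/2*100*0.000009" | bc -l                    # half-width in degrees, ~0.0432<br />
echo "-112.48999-0.0432 ; -112.48999+0.0432" | bc -l    # approximate longitude range<br />
echo "39.728996-0.0432 ; 39.728996+0.0432" | bc -l      # approximate latitude range</pre><br />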
<br />
==Downloading fuel category data==<br />
<br />
For the United States, Anderson 13 fuel category data is available at the [https://landfire.cr.usgs.gov/viewer/viewer.html Landfire] website. Upon opening the national map, click on the <tt>Download Tool</tt> [1] and you will see a menu on the right of the screen. Click on the <tt>LF 2016 Remap (LF_200)</tt>, then <tt>Fuel</tt>, and <tt>us_200 13 Fire Behavior Fuel Models-Anderson</tt> [2]. Finally, click on the <tt>Define Download Area By Coordinates</tt> button [3].<br />
<br />
[[File:Landfire_new1.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
This will open a new window on the right with a form that lets you key in the longitude and latitude range of your selection. In this window, we will input the coordinates computed earlier [4], and below we will click the <tt>Download Area</tt> button [5].<br />
<br />
[[File:Landfire_new2.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
In the next window, click on the <tt>Modify</tt> button [6]. This will open a new window listing all of the available data products for the selected region. Make sure only the box next to <tt>US_200 13 Fire Behavior Fuel Models-Anderson</tt> is checked and change the data format from <tt>ArcGRID_with_attribs</tt> to <tt>GeoTIFF_with_attribs</tt>. At the bottom, make sure <tt>Maximum size (MB) per piece:</tt> is set to 250. Then go to the bottom of the page and click <tt>Save Changes & Return to Summary</tt>.<br />
[[File:Landfire_new3.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, click on the <tt>Download</tt> button [7]. The file will be a compressed archive containing, among others, a GeoTIFF file. The name of the file will be different for each request, but in this example we have [http://math.ucdenver.edu/~farguella/files/wiki/lf45409014_US_200FBFM13.zip lf45409014_US_200FBFM13.zip] containing the GeoTIFF file <tt>US_200FBFM13.tif</tt>, which can be found [[File:US_200FBFM13.tif]] or [http://math.ucdenver.edu/~farguella/files/wiki/US_200FBFM13.tif US_200FBFM13.tif].<br />
<br />
[[File:Landfire_new4.png|700px|center]]<br />
<br style="clear: both" /><br />
<br />
==Downloading high resolution elevation data==<br />
<br />
For the United States, elevation data is also available at the [https://landfire.cr.usgs.gov/viewer/viewer.html Landfire] website. Repeat the steps described above for downloading the fuel data, but selecting instead <tt>Topographic</tt> and <tt>us_Elevation</tt>. <br />
<br />
[[File:Landfire_new5.png|700px|center]]<br />
<br />
Again, we key in the coordinates determined before and click the <tt>Download Area</tt> button. <br />
<br style="clear: both" /><br />
In the next window, click <tt>Modify</tt> again, make sure only <tt>us_Elevation</tt> is selected, change the format to <tt>Geotiff</tt>, and click <tt>Save Changes & Return to Summary</tt>.<br />
<br />
[[File:Landfire_new6.png|700px|center]]<br />
<br />
<br style="clear: both" /><br />
In the next window, you should be able to click <tt>Download</tt> in order to download the GeoTIFF file containing topography. You will obtain the zip file [http://math.ucdenver.edu/~farguella/files/wiki/lf34682161_US_DEM2016.zip lf34682161_US_DEM2016.zip] containing a GeoTIFF file that can be downloaded from [[File:US_DEM2016.tif]] or [http://math.ucdenver.edu/~farguella/files/wiki/US_DEM2016.tif US_DEM2016.tif].<br />
<br />
=Converting fire data=<br />
<br />
This section describes converting data from geotiff to geogrid format. <br />
<br />
In order for geogrid to be able to read this data, we need to convert it into an intermediate format. We will be using a utility program included in the [https://github.com/openwfm/wrfxpy wrfxpy] repository. For information on how to obtain and use this tool, see [[How_to_convert_data_for_Geogrid|How to convert data for Geogrid]]. We will go to the wrfxpy installation obtained earlier and move the GeoTIFF files into that directory.<br />
<br />
To convert the fuel and elevation data, we will run <br />
./convert_geotiff.sh US_200FBFM13.tif geo_data NFUEL_CAT<br />
./convert_geotiff.sh US_DEM2016.tif geo_data ZSF<br />
<br />
The resulting <tt>geo_data/NFUEL_CAT/index</tt> file is created as follows.<br />
projection = albers_nad83<br />
dx = 30.0<br />
dy = -30.0<br />
truelat1 = 29.5<br />
truelat2 = 45.5<br />
stdlon = -96.0<br />
known_x = 258.0<br />
known_y = 313.0<br />
known_lon = -112.47513542444187<br />
known_lat = 39.725087912688274<br />
row_order = top_bottom<br />
description = "Anderson 13 fire behavior categories"<br />
units = "fuel category"<br />
type = categorical<br />
signed = yes<br />
category_min = 0<br />
category_max = 14<br />
scale_factor = 1.0<br />
wordsize = 2<br />
tile_x = 515<br />
tile_y = 625<br />
tile_z = 1<br />
endian = little<br />
<br />
The index above uses a word size of 2 bytes, which is more than enough for this purpose (even a single byte could represent 256 categories). Notice that the program has changed the number of categories to 14 and uses the last category to indicate that the source data was outside the range 1-13. For the fuel category data, this means that no fuel is present, for example because of a lake, river, or road.<br />
<br />
We can check that the projection information entered into the index file is correct by running the <tt>gdalinfo</tt> binary that is installed with GDAL. In this case, <tt>gdalinfo</tt> tells us that the source file contains the following projection parameters.<br />
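For reference, the check is simply run on the downloaded GeoTIFF (assuming GDAL and its command-line tools are installed and the file is in the current directory):<br />
 gdalinfo US_200FBFM13.tif<br />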
<br />
Driver: GTiff/GeoTIFF<br />
Files: US_200FBFM13.tif<br />
Size is 515, 625<br />
Coordinate System is:<br />
PROJCS["USA_Contiguous_Albers_Equal_Area_Conic_USGS_version",<br />
GEOGCS["NAD83",<br />
DATUM["North_American_Datum_1983",<br />
SPHEROID["GRS 1980",6378137,298.2572221010042,<br />
AUTHORITY["EPSG","7019"]],<br />
AUTHORITY["EPSG","6269"]],<br />
PRIMEM["Greenwich",0],<br />
UNIT["degree",0.0174532925199433],<br />
AUTHORITY["EPSG","4269"]],<br />
PROJECTION["Albers_Conic_Equal_Area"],<br />
PARAMETER["standard_parallel_1",29.5],<br />
PARAMETER["standard_parallel_2",45.5],<br />
PARAMETER["latitude_of_center",23],<br />
PARAMETER["longitude_of_center",-96],<br />
PARAMETER["false_easting",0],<br />
PARAMETER["false_northing",0],<br />
UNIT["metre",1,<br />
AUTHORITY["EPSG","9001"]]]<br />
Origin = (-1400235.000000000000000,1986555.000000000000000)<br />
Pixel Size = (30.000000000000000,-30.000000000000000)<br />
Metadata:<br />
AREA_OR_POINT=Area<br />
DataType=Thematic<br />
Image Structure Metadata:<br />
INTERLEAVE=BAND<br />
Corner Coordinates:<br />
Upper Left (-1400235.000, 1986555.000) (112d35' 1.88"W, 39d47'44.01"N)<br />
Lower Left (-1400235.000, 1967805.000) (112d32'44.10"W, 39d37'50.78"N)<br />
Upper Right (-1384785.000, 1986555.000) (112d24'16.21"W, 39d49' 9.72"N)<br />
Lower Right (-1384785.000, 1967805.000) (112d21'59.86"W, 39d39'16.30"N)<br />
Center (-1392510.000, 1977180.000) (112d28'30.49"W, 39d43'30.32"N)<br />
Band 1 Block=128x128 Type=Int16, ColorInterp=Gray<br />
NoData Value=-9999<br />
Metadata:<br />
RepresentationType=THEMATIC <br />
<br />
The resulting <tt>geo_data/ZSF/index</tt> file is created as follows.<br />
<br />
projection = albers_nad83<br />
dx = 30.0<br />
dy = -30.0<br />
truelat1 = 29.5<br />
truelat2 = 45.5<br />
stdlon = -96.0<br />
known_x = 258.0<br />
known_y = 313.0<br />
known_lon = -112.47513542444187<br />
known_lat = 39.725087912688274<br />
row_order = top_bottom<br />
description = "National Elevation Dataset 1/3 arcsecond resolution"<br />
units = "meters"<br />
type = continuous<br />
signed = yes<br />
scale_factor = 1.0<br />
wordsize = 2<br />
tile_x = 515<br />
tile_y = 625<br />
tile_z = 1<br />
endian = little<br />
<br />
Here we have used a word size of 2 bytes and a scale factor of 1.0, which can represent any elevation in the world with 1-meter accuracy, approximately the accuracy of the source data.<br />
<br />
Again, we compare the projection parameters in the index file with those reported by <tt>gdalinfo</tt> and find that the conversion was correct.<br />
<br />
Driver: GTiff/GeoTIFF<br />
Files: US_DEM2016.tif<br />
Size is 515, 625<br />
Coordinate System is:<br />
PROJCS["USA_Contiguous_Albers_Equal_Area_Conic_USGS_version",<br />
GEOGCS["NAD83",<br />
DATUM["North_American_Datum_1983",<br />
SPHEROID["GRS 1980",6378137,298.2572221010042,<br />
AUTHORITY["EPSG","7019"]],<br />
AUTHORITY["EPSG","6269"]],<br />
PRIMEM["Greenwich",0],<br />
UNIT["degree",0.0174532925199433],<br />
AUTHORITY["EPSG","4269"]],<br />
PROJECTION["Albers_Conic_Equal_Area"],<br />
PARAMETER["standard_parallel_1",29.5],<br />
PARAMETER["standard_parallel_2",45.5],<br />
PARAMETER["latitude_of_center",23],<br />
PARAMETER["longitude_of_center",-96],<br />
PARAMETER["false_easting",0],<br />
PARAMETER["false_northing",0],<br />
UNIT["metre",1,<br />
AUTHORITY["EPSG","9001"]]]<br />
Origin = (-1400235.000000000000000,1986555.000000000000000)<br />
Pixel Size = (30.000000000000000,-30.000000000000000)<br />
Metadata:<br />
AREA_OR_POINT=Area<br />
DataType=Thematic<br />
Image Structure Metadata:<br />
INTERLEAVE=BAND<br />
Corner Coordinates:<br />
Upper Left (-1400235.000, 1986555.000) (112d35' 1.88"W, 39d47'44.01"N)<br />
Lower Left (-1400235.000, 1967805.000) (112d32'44.10"W, 39d37'50.78"N)<br />
Upper Right (-1384785.000, 1986555.000) (112d24'16.21"W, 39d49' 9.72"N)<br />
Lower Right (-1384785.000, 1967805.000) (112d21'59.86"W, 39d39'16.30"N)<br />
Center (-1392510.000, 1977180.000) (112d28'30.49"W, 39d43'30.32"N)<br />
Band 1 Block=128x128 Type=Int16, ColorInterp=Gray<br />
NoData Value=-9999<br />
Metadata:<br />
RepresentationType=THEMATIC<br />
<br />
Finally, the converted data can be found here [http://math.ucdenver.edu/~farguella/files/wiki/geo_data.tar.gz geo_data.tar.gz].<br />
<br />
=Running geogrid=<br />
<br />
The geogrid binary will create a NetCDF file called <tt>geo_em.d01.nc</tt>. This file will contain all of the static data necessary to run your simulation. Before we can run the binary, however, we must tell geogrid what data needs to be in these files, where it can find them, and what kind of preprocessing we want to be done. This information is contained in a run-time configuration file called <tt>GEOGRID.TBL</tt>, which is located in the <tt>geogrid</tt> subdirectory. The file that is released with WPS contains reasonable defaults for the variables defined on the atmospheric grid, but we need to add two additional sections for the two fire grid data sets that we have just created. We will append the <tt>geo_data/GEOGRID.TBL</tt> sections to the file <tt>geogrid/GEOGRID.TBL</tt>.<br />
===============================<br />
name = NFUEL_CAT<br />
dest_type = categorical<br />
interp_option = default:nearest_neighbor+average_16pt+search<br />
abs_path = /absolute/path/to/geo_data/NFUEL_CAT<br />
priority = 1<br />
fill_missing = 14<br />
subgrid = yes<br />
dominant_only = NFUEL_CAT<br />
z_dim_name = fuel_cat<br />
halt_on_missing = no<br />
===============================<br />
name = ZSF<br />
dest_type = continuous<br />
interp_option = default:average_gcell(4.0)+four_pt+average_4pt<br />
abs_path = /absolute/path/to/geo_data/ZSF<br />
priority = 1<br />
fill_missing = 0<br />
smooth_option = smth-desmth_special; smooth_passes=1<br />
subgrid = yes<br />
df_dx = DZDXF<br />
df_dy = DZDYF<br />
halt_on_missing = no<br />
===============================<br />
<br />
For <tt>NFUEL_CAT</tt>, we will use simple nearest-neighbor interpolation, while for <tt>ZSF</tt>, we will use bilinear interpolation with smoothing. Other configurations are possible. See the [https://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/v4.2/users_guide_chap3.html#_Description_of_GEOGRID.TBL WPS users guide] for further information. The full table used can be found in [http://pastebin.com/kdymq5ff pastebin] or [http://math.ucdenver.edu/~farguella/files/wiki/GEOGRID.TBL GEOGRID.TBL].<br />
<br />
Once we make these changes to the <tt>GEOGRID.TBL</tt> file, and ensure that all of the directories are in the correct place (including the default geogrid dataset at <tt>../../WPS_GEOG</tt>), we can execute the geogrid binary.<br />
<pre>./geogrid.exe</pre><br />
This will create a file called <tt>geo_em.d01.nc</tt> in the current directory, which can be found here, [http://math.ucdenver.edu/~farguella/files/wiki/geogrid_output.tar.gz geogrid_output.tar.gz]. The contents of this file can be viewed using your favorite NetCDF viewer.<br />
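For a quick look without a graphical viewer, the file header can also be listed with <tt>ncdump</tt> (distributed with the NetCDF tools); for instance, to confirm that the fire grid fields were created:<br />
<pre>ncdump -h geo_em.d01.nc | grep -E 'NFUEL_CAT|ZSF|HGT_M'</pre><br />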
<br />
<center><br />
<gallery caption="geo_em.d01.nc" widths="250px" heights="250px" perrow="3" class="center"><br />
File:Nfuel_cat_new.png|The fuel category data interpolated to the model grid.<br />
File:Zsf_new.png|The high resolution elevation (1/3") data interpolated to the model grid.<br />
File:Hgt_m_new.png|The low resolution elevation (30") data interpolated to the atmospheric grid.<br />
</gallery><br />
</center><br />
Here, we have visualized the fire grid variables, <tt>NFUEL_CAT</tt> and <tt>ZSF</tt>, as well as the <br />
variable <tt>HGT_M</tt>, which is the elevation data used by the atmospheric model. We can compare<br />
<tt>ZSF</tt> and <tt>HGT_M</tt> to verify that our data conversion process worked. The colormaps of these<br />
two pictures have been aligned, so that we can make a quick visual check. As we see, the two images do<br />
have a similar structure and magnitude, but they do seem to suffer some misalignment. Given that <br />
the data came from two different sources, in two different projections, the error is relatively minor. <br />
Because WPS converts between projections in single precision by default, floating point error likely contributes significantly to this discrepancy. We may, in the future, consider making some changes so that this conversion is done in double precision.<br />
<br />
=Obtaining atmospheric data=<br />
<br />
There are a number of datasets available to initialize a WRF real run. The <br />
[https://www2.mmm.ucar.edu/wrf/users/download/free_data.html WRF users page] lists<br />
a few. One challenge in running a fire simulation is finding a dataset of <br />
sufficient resolution. One (relatively) high resolution data source is the<br />
Climate Forecast System (CFS). This is still only about 56 km resolution, so no small-scale weather patterns will appear in our simulation. In general, we will want to run a series of nested domains in order to capture some small-scale weather features; however, we will proceed with a single-domain example.<br />
<br />
The CFSR datasets are available at the following website, <br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis].<br />
We will browse to the [https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/ pressure] and [https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/ surface] directory<br />
containing the data for September 08, 2018. Our simulation runs from the hours 00-06 on this <br />
day, so we will download the pressure grib files for hours <br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/cdas1.t00z.pgrbh00.grib2 00] and<br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/cdas1.t06z.pgrbh00.grib2 06], and the surface grib files for hours [https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/cdas1.t00z.sfluxgrbf00.grib2 00] and<br />
[https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/cdas1.t06z.sfluxgrbf00.grib2 06].<br />
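As a sketch, the four files can also be fetched from the command line with <tt>wget</tt>, using the directory layout assumed by the <tt>link_grib.csh</tt> commands below:<br />
<pre>mkdir -p CFSR_20180908_00-06/pressure CFSR_20180908_00-06/surface<br />
cd CFSR_20180908_00-06/pressure<br />
wget https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/cdas1.t00z.pgrbh00.grib2<br />
wget https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-by-pressure/2018/201809/20180908/cdas1.t06z.pgrbh00.grib2<br />
cd ../surface<br />
wget https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/cdas1.t00z.sfluxgrbf00.grib2<br />
wget https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/6-hourly-flux/2018/201809/20180908/cdas1.t06z.sfluxgrbf00.grib2<br />
cd ../..</pre><br />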
<br />
You can get these files also from here, [http://math.ucdenver.edu/~farguella/files/wiki/CFSR_20180908_00-06.tar.gz CFSR_20180908_00-06.tar.gz].<br />
<br />
=Running ungrib=<br />
<br />
With the grib files downloaded, we need to process them separately for pressure and surface variables. We need to link the pressure GRIB files into the WPS directory using the script <tt>link_grib.csh</tt>. This script takes as arguments all of the grib files that are needed for the simulation. In this case, we can run the following command in the WPS directory.<br />
<pre>./link_grib.csh <path to>/CFSR_20180908_00-06/pressure/*.grib2</pre><br />
Substitute <path to> with the directory in which you have saved the grib files. This command<br />
creates a series of symbolic links with a predetermined naming sequence to all of the grib files<br />
you pass as arguments. You should now have two new soft links named <tt>GRIBFILE.AAA</tt> and <br />
<tt>GRIBFILE.AAB</tt>.<br />
<br />
With the proper links in place, we need to tell ungrib what they contain. This is done by copying a variable table into the main WPS directory. Several variable tables are distributed with WPS which describe common datasets. You can find these in the directory <tt>WPS/ungrib/Variable_Tables</tt>.<br />
In particular, the file which corresponds to the CFSR grib files is called <tt>Vtable.CFSR</tt>, so <br />
we issue the following command to copy it into the current directory.<br />
<pre>cp ungrib/Variable_Tables/Vtable.CFSR Vtable</pre><br />
We are now ready to run the ungrib executable.<br />
<pre>./ungrib.exe</pre><br />
This will create two files in the current directory named <tt>COLMET:2018-09-08_00</tt> and <tt>COLMET:2018-09-08_06</tt>. We need to rename them before processing the surface variables:<br />
<pre>mv COLMET:2018-09-08_00 COLMET_P:2018-09-08_00<br />
mv COLMET:2018-09-08_06 COLMET_P:2018-09-08_06</pre><br />
and remove the <tt>GRIBFILE.*</tt> files doing<br />
<pre>rm GRIBFILE.*</pre><br />
<br />
Now we repeat the process for the surface variables<br />
<pre>./link_grib.csh <path to>/CFSR_20180908_00-06/surface/*.grib2</pre><br />
Substitute <path to> with the directory in which you have saved the grib files. You should now have two new soft links named <tt>GRIBFILE.AAA</tt> and <tt>GRIBFILE.AAB</tt>.<br />
We are now ready to run the ungrib executable again.<br />
<pre>./ungrib.exe</pre><br />
This will create two files in the current directory named <tt>COLMET:2018-09-08_00</tt> and <tt>COLMET:2018-09-08_06</tt>. We need to rename them as well:<br />
<pre>mv COLMET:2018-09-08_00 COLMET_S:2018-09-08_00<br />
mv COLMET:2018-09-08_06 COLMET_S:2018-09-08_06</pre><br />
The four files <tt>COLMET_P:2018-09-08_00</tt>, <tt>COLMET_P:2018-09-08_06</tt>, <tt>COLMET_S:2018-09-08_00</tt>, and <tt>COLMET_S:2018-09-08_06</tt> are the resulting files which can be downloaded here, [http://math.ucdenver.edu/~farguella/files/wiki/ungrib_output.tar.gz ungrib_output.tar.gz].<br />
<br />
=Running metgrid=<br />
<br />
Metgrid will take the files created by ungrib and geogrid and combine them into a set of files. At this point, all we need to do is run it.<br />
<pre>./metgrid.exe</pre><br />
This creates two files named <tt>met_em.d01.2018-09-08_00:00:00.nc</tt> and <tt>met_em.d01.2018-09-08_06:00:00.nc</tt>, which you can download here, [http://math.ucdenver.edu/~farguella/files/wiki/metgrid_output.tar.gz metgrid_output.tar.gz].<br />
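As a quick sanity check (a minimal sketch; <tt>ncdump</tt> comes with the NetCDF tools), the number of metgrid levels in these files should match the <tt>num_metgrid_levels</tt> value used later in <tt>namelist.input</tt>, 38 for this CFSR data:<br />
<pre>ncdump -h met_em.d01.2018-09-08_00:00:00.nc | grep num_metgrid_levels</pre><br />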
<br />
=Running WRF-SFIRE=<br />
<br />
We are now finished with all steps involving WPS. All we need to do is copy the metgrid output<br />
files over to our WRF real run directory at <tt>WRF-SFIRE/test/em_real</tt> and configure our WRF namelist.<br />
We will need to be sure that the domain description in <tt>namelist.input</tt> matches that of <br />
the <tt>namelist.wps</tt> we created previously, otherwise WRF will refuse to run. Pay particular attention<br />
to the start/stop times and the grid sizes. The fire ignition parameters are configured<br />
in the same way as for the ideal case. Relevant portions of the namelist we will use are given below.<br />
<pre>&time_control<br />
run_days = 0<br />
run_hours = 6<br />
run_minutes = 0<br />
run_seconds = 0<br />
start_year = 2018<br />
start_month = 9<br />
start_day = 8<br />
start_hour = 0<br />
start_minute = 0<br />
start_second = 0<br />
end_year = 2018<br />
end_month = 9<br />
end_day = 8<br />
end_hour = 6<br />
end_minute = 0<br />
end_second = 0<br />
interval_seconds = 21600<br />
input_from_file = .true.<br />
history_interval = 30<br />
frames_per_outfile = 1000<br />
restart = .false.<br />
restart_interval = 180<br />
io_form_history = 2<br />
io_form_restart = 2<br />
io_form_input = 2<br />
io_form_boundary = 2<br />
debug_level = 1<br />
/<br />
<br />
&domains<br />
time_step = 0<br />
time_step_fract_num = 1<br />
time_step_fract_den = 2<br />
max_dom = 1<br />
s_we = 1<br />
e_we = 97<br />
s_sn = 1<br />
e_sn = 97<br />
s_vert = 1<br />
e_vert = 41<br />
num_metgrid_levels = 38<br />
num_metgrid_soil_levels = 4<br />
dx = 100<br />
dy = 100<br />
grid_id = 1<br />
parent_id = 1<br />
i_parent_start = 1<br />
j_parent_start = 1<br />
parent_grid_ratio = 1<br />
parent_time_step_ratio = 1<br />
feedback = 1<br />
smooth_option = 0<br />
sr_x = 20<br />
sr_y = 20<br />
sfcp_to_sfcp = .true.<br />
p_top_requested = 10000<br />
/<br />
<br />
&bdy_control<br />
spec_bdy_width = 5<br />
spec_zone = 1<br />
relax_zone = 4<br />
specified = .true.<br />
periodic_x = .false.<br />
symmetric_xs = .false.<br />
symmetric_xe = .false.<br />
open_xs = .false.<br />
open_xe = .false.<br />
periodic_y = .false.<br />
symmetric_ys = .false.<br />
symmetric_ye = .false.<br />
open_ys = .false.<br />
open_ye = .false.<br />
nested = .false.<br />
/</pre><br />
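Before running, it can help to cross-check the two namelists side by side; a minimal sketch, assuming WPS was cloned next to WRF-SFIRE so that <tt>namelist.wps</tt> sits at <tt>../../../WPS/namelist.wps</tt> relative to <tt>em_real</tt>:<br />
<pre>grep -E 'e_we|e_sn|dx|dy|start_date|end_date|interval_seconds|subgrid_ratio' ../../../WPS/namelist.wps<br />
grep -E 'e_we|e_sn|dx|dy|start_|end_|interval_seconds|sr_x|sr_y' namelist.input</pre><br />
In particular, <tt>sr_x</tt>/<tt>sr_y</tt> in <tt>namelist.input</tt> should match <tt>subgrid_ratio_x</tt>/<tt>subgrid_ratio_y</tt> in <tt>namelist.wps</tt> (20 in this example).<br />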
It is worth mentioning the different <tt>ifire</tt> options implemented:<br />
* <tt>ifire = 1</tt>: the current, up-to-date WRF-SFIRE code<br />
* <tt>ifire = 2</tt>: the fire code as of 2012 in WRF, with changes made at NCAR<br />
Visit [https://github.com/openwfm/WRF-SFIRE/blob/master/README-SFIRE.md README-SFIRE.md] for more details.<br />
<br />
The full namelist used can be found in [https://pastebin.com/V0kGcuS5 pastebin] or [http://math.ucdenver.edu/~farguella/files/wiki/namelist.input namelist.input].<br />
<br />
Once the namelist is properly configured, we run the WRF real preprocessor.<br />
<pre>./real.exe</pre><br />
This creates the initial and boundary files for the WRF simulation and fills all missing fields<br />
from the grib data with reasonable defaults. The files that it produces are <tt>wrfbdy_d01</tt><br />
and <tt>wrfinput_d01</tt>, which can be downloaded here, [http://math.ucdenver.edu/~farguella/files/wiki/wrf_real_output.tar.gz wrf_real_output.tar.gz].<br />
<br />
To prepare for running the fire model, copy its parameters here:<br />
<pre><br />
cp ../em_fire/hill/namelist.fire .<br />
cp ../em_fire/hill/namelist.fire_emissions .<br />
</pre><br />
Finally, we run the simulation.<br />
<pre>./wrf.exe</pre><br />
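For an MPI (dmpar) build, the progress of the run can be monitored in the <tt>rsl</tt> log files, for example:<br />
<pre>tail -f rsl.error.0000</pre><br />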
The history file for this example can be downloaded here, [http://math.ucdenver.edu/~farguella/files/wiki/wrf_real_history.tar.gz wrf_real_history.tar.gz].<br />
<br />
[[Category:WRF-Fire]]<br />
[[Category:Data]]<br />
[[Category:Howtos|Run WRF-SFIRE with real data]]</div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=WRF-SFIRE_Tutorial&diff=4521WRF-SFIRE Tutorial2023-09-06T18:19:28Z<p>Afarguell: Created blank page</p>
<hr />
<div></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Namelist.fire&diff=4520Namelist.fire2023-08-24T22:30:17Z<p>Afarguell: </p>
<hr />
<div>{{users guide}}<br />
<br />
<br />
This file serves to redefine the fuel categories if the user wishes to alter default fuel properties.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Variable name<br />
! Description<br />
|-<br />
|'''&fuel_scalars'''<br />
| Scalar fuel constants, common to all fuel categories.<br />
|-<br />
|cmbcnst<br />
| The energy released per unit fuel burned for cellulosic fuels (constant, 1.7433e7 J kg<sup>-1</sup>).<br />
|-<br />
|hfgl <br />
|The threshold heat flux from a surface fire at which a canopy fire is ignited above (in W m<sup>-2</sup>).<br />
|-<br />
|fuelmc_g <br />
|Surface fuel, fuel moisture content (kg/kg, between 0.00 and 1.00).<br />
|-<br />
|fuelmc_c <br />
|Canopy fuel, fuel moisture content (kg/kg, between 0.00 and 1.00).<br />
|-<br />
|nfuelcats<br />
| Number of fuel categories defined (default: 13)<br />
|-<br />
|no_fuel_cat<br />
| The first fuel category to be ignored and taken as ‘no fuel’ (default: 14)<br />
|-<br />
|no_fuel_cat2<br />
| The last fuel category to be ignored and taken as ‘no fuel’ (default: 14). That is, all fuel with categories between no_fuel_cat and no_fuel_cat2 is ignored. Fuel with category 0 is also always ignored.<br />
|-<br />
|'''&fuel_categories'''<br />
| One number per fuel category.<br />
|-<br />
|windrf<br />
| Wind reduction factor from 20ft to midflame height (1)<br />
|-<br />
|fgi <br />
|The initial mass loading of surface fuel (kg m<sup>-2</sup>) in each fuel category<br />
|-<br />
|fueldepthm <br />
|Fuel depth (m)<br />
|-<br />
|savr<br />
| Fuel surface-area-to-volume ratio (ft<sup>-1</sup>)<br />
|-<br />
|fuelmce<br />
| Fuel moisture content of extinction (kg/kg, from 0.00 – 1.00).<br />
|-<br />
|fueldens <br />
|Fuel particle density lb ft<sup>-3</sup> (32 if solid, 19 if rotten)<br />
|-<br />
|st<br />
| Fuel particle total mineral content. (kg minerals/kg wood)<br />
|-<br />
|se<br />
| Fuel particle effective mineral content. (kg minerals – kg silica)/kg wood<br />
|-<br />
|weight<br />
| Weighting parameter that determines the slope of the mass loss curve. This can range from about 5 (fast burnup) to 1000 (40% decrease in mass over 10 minutes).<br />
|-<br />
|fci_d <br />
|Initial dry mass loading of canopy fuel (in kg m<sup>-2</sup>)<br />
|-<br />
|fct<br />
| The burnout time of canopy fuel once ignited (s)<br />
|-<br />
|ichap<br />
| Is this a chaparral category to be treated differently, using an empirical rate of spread relationship that depends only on wind speed?<br />
|-<br />
|<br />
|1: yes, this is a chaparral category and should be treated differently<br />
|-<br />
|<br />
|0: no, this is not a chaparral category or should not be treated differently. <br />
|-<br />
|<br />
|Primarily used for Fuel Category 4.<br />
|-<br />
| fmc_gw01<br />
| The proportion of the fuel that is in moisture class 1 (between 0.0 and 1.0)<br />
|-<br />
| fmc_gw02<br />
| The proportion of the fuel that is in moisture class 2, etc. up to 5. The proportions should add up to 1.<br />
|-<br />
|'''&moisture'''<br />
| The [[fuel moisture model]] namelist.<br />
|-<br />
| moisture_classes<br />
| Number of moisture classes, at most 5. Each fuel consists of a mixture of moisture classes. The following entries take one number per class.<br />
|-<br />
| is_dead<br />
| Whether the fuel moisture class is dead or live (1 if dead, 0 if live)<br />
|-<br />
| drying_model <br />
| Number of the drying model used in each class. At the moment only model 1 is supported. Model parameters follow for each class.<br />
|-<br />
| drying_lag<br />
| Hours it takes to get 64% closer to the equilibrium moisture content<br />
|-<br />
| wetting_model<br />
| Number of the wetting model used in each class. At the moment only model 1 is supported. Model parameters follow for each class.<br />
|-<br />
| wetting_lag<br />
| Hours of very strong rain it takes to get 64% closer to saturation<br />
|-<br />
| saturation_moisture<br />
| The maximal fuel moisture content (in kg/kg, default 2.5)<br />
|-<br />
| saturation_rain<br />
| Rain intensity (mm/h) such that the rate of soaking is at 64% of wetting_lag<br />
|- <br />
| rain_treshold<br />
| Rain below this intensity (mm/h) does not count<br />
|-<br />
| fmc_gc_initialization<br />
| Moisture initialization: 0 = from WRF input (files '''wrfinput''' or '''wrfrst'''), 1 = from the scalar fuelmc_g in namelist.input, 2 = from equilibrium<br />
|}<br />
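For orientation, here is a minimal sketch of the <tt>&fuel_scalars</tt> group of <tt>namelist.fire</tt>; the values match the defaults described above and those distributed with the ideal examples (e.g. <tt>test/em_fire/hill/namelist.fire</tt>), and are shown for illustration only:<br />
 &fuel_scalars               ! scalar fuel constants<br />
 cmbcnst  = 17.433e+06,      ! J/kg, heat of combustion of dry fuel<br />
 hfgl     = 17.e4,           ! W/m^2, heat flux threshold to ignite the canopy<br />
 fuelmc_g = 0.08,            ! surface (ground) fuel moisture content<br />
 fuelmc_c = 1.00,            ! canopy fuel moisture content<br />
 nfuelcats = 13,             ! number of fuel categories used<br />
 no_fuel_cat = 14            ! extra category meaning no fuel<br />
 /<br />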
<br />
[[Category:WRF-Fire]]</div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4519Running WRF-SFIRE with real data in the WRFx system2023-08-08T23:42:23Z<p>Afarguell: /* GOES data */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system using the latest version of all the components, with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recommended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv HD5 /path/to/hdf5<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
HD5 is optional, but without it, configure will have you use uncompressed NetCDF files.<br />
<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
<br />
WRF expects $NETCDF and $HD5 to have subdirectories include, lib, and modules. <br />
To use system-installed netcdf and hdf5, <br />
on a system that uses standard /usr/lib (such as Ubuntu), you may be able to use simply<br />
setenv NETCDF /usr<br />
setenv HD5 /usr<br />
On a Linux that uses /usr/lib64 (such as Redhat and Centos), make a directory with the links<br />
include -> /usr/include<br />
lib -> /usr/lib64<br />
modules -> /usr/lib64/gfortran/modules<br />
and point NETCDF and HD5 to it.<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
To be able to run real problems, compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf, and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
export WRF_DIR=/path/to/WRF-SFIRE<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wps, and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains land use, elevation, soil type data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>https://demo.openwfm.org/web/wrfx/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive conda] distribution for your platform. <br />
We recommend an installation into the users' home directory. For example,<br />
wget <nowiki>https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh</nowiki><br />
chmod +x Miniconda3-latest-Linux-x86_64.sh<br />
./Miniconda3-latest-Linux-x86_64.sh<br />
The installation may instruct you to exit and log in again.<br />
<br />
On a shared system, you may have a system-wide Python distribution with conda already installed, perhaps as a module; try module avail.<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install prerequisites:<br />
wget https://demo.openwfm.org/web/wrfx/wrfx.yml<br />
 conda env create -n wrfx -f wrfx.yml<br />
Note: the versions listed in the yml file may not be available on platforms other than Linux x86-64 (the most common). In that case, you can let conda resolve the versions itself and instead do:<br />
 conda create -n wrfx python=3.8<br />
conda install -c conda-forge gdal<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml pytz pandas scipy<br />
conda install -c conda-forge basemap paramiko dill psutil flask<br />
pip install MesoPy python-cmr shapely==2<br />
<br />
===Set environment===<br />
Every time before using WRFx, make the packages available by<br />
conda activate wrfx<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json; here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
The file etc/qsub/speedy.sub should then contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
 %(exec_path)s the path to the wrf.exe that should be executed<br />
 %(cwd)s the job working directory<br />
 %(task_id)s a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy already includes configurations for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-synopticdata.com",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token on the corresponding page, then do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
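Since this file stores a password in plain text, it is good practice to restrict its permissions, for example:<br />
 chmod 600 $HOME/.netrc<br />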
<br />
====AWS acquisition====<br />
<br />
For getting GOES16 and GOES17 data, and as an optional acquisition method for GRIB files, the system uses the [https://aws.amazon.com/cli/ AWS Command Line Interface], so you need to have it installed. To check whether it is already installed, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files suitable for WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories you want to interpolate using nearest neighbors, that is, the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system step, where we select the cluster we configured (speedy in this example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This will generate the experiment in the path specified in the etc/conf.json file, under a workspace subdirectory created from the experiment name, submit the job to your batch scheduler, and postprocess the results and send them to your installation of wrfxweb. If you do not have wrfxweb, no worries: you can always get the files of the WRF-SFIRE run in the wrf subdirectory of your experiment directory. You can also inspect the generated files, modify them, and resubmit the job.<br />
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up with a "mesowest" token and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line. We do not run real.exe through mpirun because mpirun on the head node may not be allowed. In that case, one needs to provide an installation of WRF-SFIRE built in serial mode in order to run a serial real.exe, repeating the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE WRF-SFIRE-serial</nowiki><br />
cd WRF-SFIRE-serial<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf, and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication (without a passphrase)<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with the nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest: 2 cores and 4 GB of memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create the ~/.ssh directory (if you do not have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one) by doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are to be done in the desired installation of wrfxpy (set up in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar to this one<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides the options of starting a new fire using the ''Start a new fire'' button and browsing the existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will access the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or specifying the lat-lon coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes and the atmospheric boundary condition data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitoring page, where you will be able to track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): The process has not started and is waiting on other processes. All processes are initialized with this status.<br />
* Running (yellow): The process is in progress. A process switches from Waiting to Running when it starts running.<br />
* Success (green): The process finished successfully. A process switches from Running to Success when it finishes without errors.<br />
* Available (green): Part of the output is ready while the rest is still in progress. This status is only used by the Output process, because the visualization becomes available as soon as the process starts running.<br />
* Failed (red): The process finished with a failure. A process switches from Running to Failed when it fails.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which provides a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb will appear in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of the jobs that are running and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4518Running WRF-SFIRE with real data in the WRFx system2023-08-07T18:56:37Z<p>Afarguell: /* Configure WPS */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system using the latest version of all the components, with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recommended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv HD5 /path/to/hdf5<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
 setenv GEOTIFF /path/to/geotiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
HD5 is optional, but without it, configure will have you use uncompressed NetCDF files.<br />
<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
<br />
WRF expects $NETCDF and $HD5 to have subdirectories include, lib, and modules. <br />
To use system-installed netcdf and hdf5, <br />
on a system that uses standard /usr/lib (such as Ubuntu), you may be able to use simply<br />
setenv NETCDF /usr<br />
setenv HD5 /usr<br />
On a Linux that uses /usr/lib64 (such as Redhat and Centos), make a directory with the links<br />
include -> /usr/include<br />
lib -> /usr/lib64<br />
modules -> /usr/lib64/gfortran/modules<br />
and point NETCDF and HD5 to it.<br />
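For example, a minimal sketch of such a link directory (the directory name netcdf-links is just a placeholder; adjust the module path to your system):<br />
 mkdir -p $HOME/netcdf-links<br />
 cd $HOME/netcdf-links<br />
 ln -s /usr/include include<br />
 ln -s /usr/lib64 lib<br />
 ln -s /usr/lib64/gfortran/modules modules<br />
 setenv NETCDF $HOME/netcdf-links<br />
 setenv HD5 $HOME/netcdf-links<br />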
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Choose options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting), if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
To be able to run real problems, compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
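<br />
If both compilations succeed, the executables should appear in the main directory; a quick check (assuming the standard WRF build layout) is<br />
 ls -l main/*.exe<br />
which should list at least wrf.exe and real.exe.<br />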
<br />
If any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
In configure.wrf, add -nostdinc at the end of the CPP flag and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
export WRF_DIR=/path/to/WRF-SFIRE<br />
./configure<br />
<br />
Choose option 17 (Intel compiler, serial), if available.<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should list geogrid.exe, metgrid.exe, and ungrib.exe. If not:<br />
./clean -a<br />
./configure<br />
In configure.wps, add -nostdinc at the end of the CPP flag and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get the tar file with the static data and untar it. Keep in mind that this is the file that contains the land use, elevation, soil type, and other data for WRF (geogrid.exe, to be specific).<br />
cd ..<br />
wget <nowiki>https://demo.openwfm.org/web/wrfx/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive conda] distribution for your platform. <br />
We recommend an installation into the users' home directory. For example,<br />
wget <nowiki>https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh</nowiki><br />
chmod +x Miniconda3-latest-Linux-x86_64.sh<br />
./Miniconda3-latest-Linux-x86_64.sh<br />
The installation may instruct you to exit and log in again.<br />
<br />
On a shared system, you may have a system-wide Python distribution with conda already installed, perhaps as a module; try module avail.<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install prerequisites:<br />
wget https://demo.openwfm.org/web/wrfx/wrfx.yml<br />
conda create -n wrfx -f wrfx.yml<br />
Note: the versions listed in the yml file may not be available on platforms other than Linux x86-64 (the most common). In that case, you can let conda resolve the package versions itself and do instead:<br />
 conda create -n wrfx python=3.8<br />
conda install -c conda-forge gdal<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml pytz pandas scipy<br />
conda install -c conda-forge basemap paramiko dill psutil flask<br />
pip install MesoPy python-cmr shapely==2<br />
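<br />
As a quick sanity check that the environment resolved correctly, you can try importing a few of the key packages (optional):<br />
 conda activate wrfx<br />
 python -c "import netCDF4, pygrib, simplekml, flask; print('wrfx environment OK')"<br />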
<br />
===Set environment===<br />
Every time before using WRFx, make the packages available by<br />
conda activate wrfx<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths come from previous steps of this wiki except the wget path, which needs to be specified only to use a preferred version. To find the default wget,<br />
which wget<br />
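<br />
For reference, after editing, a complete set of these keys in etc/conf.json could look like the following sketch (the paths are placeholders for your own; keep any other keys already present in the template):<br />
 {<br />
   "qsys": "speedy",<br />
   "wps_install_path": "/home/user/WPS",<br />
   "wrf_install_path": "/home/user/WRF-SFIRE",<br />
   "sys_install_path": "/home/user/wrfxpy",<br />
   "wps_geog_path": "/home/user/WPS_GEOG",<br />
   "wget": "/usr/bin/wget"<br />
 }<br />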
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
Then the file etc/qsub/speedy.sub should contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
 %(nodes)d the number of nodes requested<br />
 %(ppn)d the number of processors per node requested<br />
 %(wall_time_hrs)d the number of hours requested<br />
 %(exec_path)s the path to the wrf.exe that should be executed<br />
 %(cwd)s the job working directory<br />
 %(task_id)s a task id that can be used to identify the job<br />
 %(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy already includes configurations for colibri, gross, kingspeak, and cheyenne.<br />
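<br />
If your cluster uses SLURM rather than SGE, a submission script template along the same lines might look like this (a sketch only, not an official wrfxpy template; the mpirun invocation is a placeholder to adapt to your system):<br />
 #!/bin/bash<br />
 #SBATCH --job-name=%(task_id)s<br />
 #SBATCH --chdir=%(cwd)s<br />
 #SBATCH --nodes=%(nodes)d<br />
 #SBATCH --ntasks-per-node=%(ppn)d<br />
 #SBATCH --time=%(wall_time_hrs)d:00:00<br />
 mpirun -np %(np)d %(exec_path)s<br />
The matching etc/clusters.json entry would then use sbatch, scancel, and squeue for the corresponding commands, with qsub_delimiter and qsub_job_num_index adjusted so that the job number is parsed correctly from the sbatch output.<br />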
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, some data needs to be accessed and downloaded using a token created for the user. For instance, to run the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these tokens can be specified by creating the file etc/tokens.json from the template etc/tokens.json.initial, which contains:<br />
<br />
{<br />
"mesowest" : "token-from-synopticdata.com",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token on the corresponding page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created at [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and entered in the etc/tokens.json file. The user can also specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created at [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and entered in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). Some data centers need to be accessed using the $HOME/.netrc file, so creating it is recommended as follows:<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
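<br />
Since this file stores your password in plain text, it is good practice (and some tools require it) to make it readable only by you:<br />
 chmod 600 $HOME/.netrc<br />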
<br />
====GOES data====<br />
<br />
For getting GOES-16 and GOES-17 data, the system uses the [https://aws.amazon.com/cli/ AWS Command Line Interface], so you need to have it installed. To check whether it is already installed, type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
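<br />
Once installed, a quick way to confirm that the CLI works and that the public GOES archives are reachable is to list one of the open buckets anonymously (just a sanity check, not a wrfxpy step):<br />
 aws s3 ls --no-sign-request s3://noaa-goes16/<br />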
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs high-resolution elevation and fuel category data. If you have GeoTIFF files for elevation and fuel, you can specify their locations in etc/vtables/geo_vars.json. To create it, do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files suitable for WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories you want to interpolate using nearest neighbors, that is, the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]], or you can simply use the GeoTIFF files included in the static dataset under WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at each step to accept the default values until you reach the queuing system prompt; there, select the cluster configured earlier (speedy in this example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This will generate the experiment in the path specified in the etc/conf.json file, under a workspace subdirectory created from the experiment name, submit the job to your batch scheduler, and postprocess the results and send them to your installation of wrfxweb. If you do not have wrfxweb, no worries: you can always get the files of the WRF-SFIRE run in the wrf subdirectory of your experiment directory. You can also inspect the generated files, modify them, and resubmit the job.<br />
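<br />
To follow the progress of the run, you can watch the log file, for example<br />
 tail -f logs/experiment.log<br />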
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up with a "mesowest" token and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather station observations and estimate the fuel moisture over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE compiled in serial mode in order to run a serial real.exe. To do that, repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE WRF-SFIRE-serial</nowiki><br />
cd WRF-SFIRE-serial<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to the etc/conf.json file the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication (that is, a key without a passphrase)<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with the nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest: 2 cores and 4 GB of memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create the ~/.ssh directory (if you do not have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one) by doing<br />
ssh-keygen<br />
and following all the steps (you can accept the defaults by pressing enter at each prompt). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
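<br />
For example, a minimal etc/conf.json could look like the following sketch (the values are placeholders for your own; keep any other keys already present in the template):<br />
 {<br />
   "url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>",<br />
   "organization": "Organization Name",<br />
   "flags": []<br />
 }<br />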
<br />
Also, create a new simulations folder by doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next keys are set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see the post-processed time steps of your simulation appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py by doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
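<br />
If you want wrfxctrl to keep running after you log out, you can start it in the background instead, for example (the log file name is arbitrary):<br />
 nohup python wrfxctrl.py >& wrfxctrl.log &<br />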
<br />
====Starting page====<br />
<br />
Now you can go to your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides the options of starting a new fire using the ''Start a new fire'' button and browsing the existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will access the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or by specifying the latitude-longitude coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes as well as the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically open the monitoring page, where you will be able to track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the top of the monitoring page, you will see a list of important information about the simulation (see figure below). Below this information, there is a list of steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): Indicates that the process has not started yet and is waiting for other processes to finish. All processes are initialized with this status.<br />
* Running (yellow): Indicates that the process is currently running. A process switches from Waiting to Running when it starts.<br />
* Success (green): Indicates that the process finished successfully. A process switches from Running to Success when it completes without errors.<br />
* Available (green): Indicates that part of the result is already available while the rest is still being produced. This status is used only by the Output process, because the visualization becomes available as soon as that process starts running.<br />
* Failed (red): Indicates that the process finished with a failure. A process switches from Running to Failed when it ends with an error.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the contents of the log file.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, the ''Visualization'' entry in the information section shows a link to the simulation on the web server generated using wrfxweb. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the overview page, which shows a list of current jobs and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4493Running WRF-SFIRE with real data in the WRFx system2023-06-15T20:42:05Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recommended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If there is any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
In configure.wrf, add -nostdinc at the end of the CPP flag and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
In configure.wps, add -nostdinc at the end of the CPP flag and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get the tar file with the static data and untar it. Keep in mind that this is the file that contains the land use, elevation, soil type, and other data for WRF (geogrid.exe, to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install prerequisites:<br />
conda create -n wrfx python=3.8<br />
conda install -c conda-forge gdal<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml pytz pandas scipy<br />
conda install -c conda-forge basemap paramiko dill psutil flask<br />
pip install MesoPy python-cmr shapely==2<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution. Also, the conda install command with netcdf4, h5py, and the other packages can take some time to resolve because of package compatibility checks.<br />
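<br />
As an optional sanity check (not part of the official instructions), you can verify that the key packages import correctly in the new environment:<br />
conda activate wrfx<br />
python -c "import netCDF4, h5py, pygrib, simplekml; from osgeo import gdal; print('ok')"<br />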
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that the PROJ_LIB environment variable points to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
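<br />
For example, in csh you can check the current value and, if needed, point it at the wrfx environment created above (this assumes Anaconda is installed in your home directory; adjust the path otherwise):<br />
echo $PROJ_LIB<br />
setenv PROJ_LIB "$HOME/anaconda3/envs/wrfx/share/proj"<br />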
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy",<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : "/path/to/wget"<br />
<br />
Note that all these paths come from previous steps of this wiki, except the wget path, which needs to be specified to use a preferred version. To find the default wget, run<br />
which wget<br />
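<br />
For illustration, a filled-in etc/conf.json might look like the sketch below; the paths are hypothetical and should be replaced with the locations from your own installation:<br />
{<br />
"qsys": "speedy",<br />
"wps_install_path": "/home/user/WPS",<br />
"wrf_install_path": "/home/user/WRF-SFIRE",<br />
"sys_install_path": "/home/user/wrfxpy",<br />
"wps_geog_path" : "/home/user/WPS_GEOG",<br />
"wget" : "/usr/bin/wget"<br />
}<br />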
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json; here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
The file etc/qsub/speedy.sub should then contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)s  the path to the wrf.exe that should be executed<br />
%(cwd)s  the job working directory<br />
%(task_id)s  a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy already includes configurations for colibri, gross, kingspeak, and cheyenne.<br />
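<br />
As another illustration, a minimal submission script template for a SLURM-based cluster might look like the sketch below. It uses the same template variables and assumes that mpirun is allowed inside the job; derive your own template from a submission script that is known to work on your cluster.<br />
#!/bin/bash<br />
#SBATCH --job-name=%(task_id)s<br />
#SBATCH --chdir=%(cwd)s<br />
#SBATCH --nodes=%(nodes)d<br />
#SBATCH --ntasks-per-node=%(ppn)d<br />
#SBATCH --time=%(wall_time_hrs)d:00:00<br />
cd %(cwd)s<br />
mpirun -np %(np)d %(exec_path)s<br />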
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token on the corresponding page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created at [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and placed in the etc/tokens.json file. The user can also specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created at [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]) can be acquired and placed in the etc/tokens.json file. Some data centers need to be accessed using the $HOME/.netrc file, so creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
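<br />
Since $HOME/.netrc stores your Earthdata password in plain text, it is a good idea to restrict its permissions:<br />
chmod 600 ~/.netrc<br />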
<br />
====GOES data====<br />
<br />
For getting GOES-16 and GOES-17 data, the system uses the [https://aws.amazon.com/cli/ AWS Command Line Interface], so you need to have it installed. To check whether it is already installed, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow the installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. On Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
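<br />
After the installation, make sure the chosen bin directory is on your PATH and verify the installation (csh syntax, using the path chosen above):<br />
setenv PATH /path/to/bin:$PATH<br />
aws --version<br />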
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have GeoTIFF files for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. To do so, run<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files suitable for WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories should be interpolated using nearest neighbors, typically those that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable, using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset under WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, static terrain data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at each step to accept the default values until asked about the queuing system, then select the cluster configured earlier (speedy in this example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any errors.<br />
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up with a "mesowest" token and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather station observations and estimate fuel moisture over the whole continental US.<br />
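<br />
You can follow the progress of the run with, for example:<br />
tail -f logs/rtma_cycler.log<br />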
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE compiled in serial mode in order to run a serial real.exe, so we repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to the etc/conf.json file the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh, using key authentication without a passphrase<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest, 2 cores and 4GB memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create the ~/.ssh directory (if you do not have one already)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one already) by running<br />
ssh-keygen<br />
and following all the prompts (you can accept the defaults by pressing enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
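<br />
You can verify the passwordless access with, for example:<br />
ssh user_id@demo.openwfm.org hostname<br />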
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder by doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next keys are set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processed simulations to the visualization server.<br />
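<br />
For example, with the conventions above, the shuttle keys in the wrfxpy etc/conf.json typically end up looking like the sketch below (the ssh key path and user ids are hypothetical):<br />
"shuttle_ssh_key": "/home/user/.ssh/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/home/user_id/wrfxweb/fdds/simulations"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />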
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]], but this time, when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* The "host", "port", and "root" entries above are only examples; for security reasons, you should choose your own values and make them as hard to guess as possible. <br />
* It is recommended to remove the "jobs_path", "logs_path", and "sims_path" entries; they default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py:<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> page. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides the options of starting a new fire using the ''Start a new fire'' button and of browsing the existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will access the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or specifying the lat-lon coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes as well as the atmospheric boundary condition data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitoring page, where you will be able to track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): The process has not started yet and is waiting for other processes. All processes are initialized with this status.<br />
* Running (yellow): The process is currently running, so it is in progress. Processes switch from Waiting to Running when they start.<br />
* Success (green): The process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): Part of the output is already available while the rest is still in progress. This status is only used by the Output process, because the visualization is available as soon as the process starts running.<br />
* Failed (red): The process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb will appear in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of jobs that are running and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda create -n wrfx python=3.8<br />
conda install -c conda-forge gdal<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml pytz pandas scipy<br />
conda install -c conda-forge basemap paramiko dill psutil flask<br />
pip install MesoPy python-cmr shapely==2<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run a serial real.exe. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication without a passkey<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest, 2 cores and 4GB memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs which shows a list of jobs that are running and it allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4491Running WRF-SFIRE with real data in the WRFx system2023-06-15T17:35:11Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda create -n wrfx python=3 <br />
conda activate wrfx<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml scipy pyproj gdal rasterio <br />
conda install -c conda-forge matplotlib basemap paramiko dill psutil flask pytz pandas<br />
pip install MesoPy python-cmr shapely==2<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files suitable for WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories you want to interpolate using nearest neighbors, i.e. the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable by assigning them category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Alternatively, you can use the GeoTIFF files included in the static dataset under WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
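<br />
Before running, you may want to check that the GeoTIFF files are readable and carry projection information, for example with the GDAL tools installed in the wrfx environment (the path below is a placeholder):<br />
<br />
 conda activate wrfx<br />
 gdalinfo /path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif<br />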
<br />
For running the fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to the wrfxpy directory and do:<br />
 wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
 tar xvfj static.tbz<br />
This will untar a static folder containing the static terrain data.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at each step to accept the default values until prompted for the queuing system, then select the cluster configured earlier (speedy in this example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any errors.<br />
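<br />
To follow the progress while the forecast is running, you can watch the log file, for example:<br />
<br />
 tail -f logs/experiment.log<br />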
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up with a "mesowest" token and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather station data and run the fuel moisture model over the whole continental US.<br />
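<br />
If you want the fuel moisture analysis refreshed periodically, one option is a cron entry along these lines (an illustration only, not part of wrfxpy; adjust the path and schedule to your needs):<br />
<br />
 0 * * * * cd /path/to/wrfxpy && ./rtma_cycler.sh anything > logs/rtma_cycler.log 2>&1<br />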
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE built in serial mode in order to run a serial real.exe, repeating the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
 cd ../wrfxpy<br />
and add the following key to etc/conf.json:<br />
 "wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh, using key authentication with a key that has no passphrase<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with the nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest: 2 cores and 4 GB of memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create the ~/.ssh directory (if you do not have one already)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one already) by doing<br />
ssh-keygen<br />
and following all the steps (you can accept the defaults by pressing enter at each prompt). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder by doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are performed in the desired wrfxpy installation (set up in the previous section). <br />
<br />
Configure the following keys in etc/conf.json of any wrfxpy installation that will send simulations to this server:<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]], but when simple forecast asks<br />
 Send variables to visualization server? [default=no]<br />
answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py:<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
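<br />
Flask's built-in server stops when your session ends; to keep wrfxctrl running after you log out, one simple option is to start it in the background (for production use, a proper WSGI server is preferable, as the warning above notes):<br />
<br />
 conda activate wrfx<br />
 nohup python wrfxctrl.py >& wrfxctrl.log &<br />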
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides options to start a new fire using the ''Start a new fire'' button and to browse existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will access the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or by specifying the latitude-longitude coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have defined all the simulation options, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitoring page, where you can track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see the figure below). After the information, there is a list of steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): The process has not started yet and is waiting for other processes. All processes are initialized with this status.<br />
* Running (yellow): The process is currently running, i.e. in progress. Processes switch from Waiting to Running when they start.<br />
* Success (green): The process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): Part of the output is ready while the rest is still in progress. This status is only used by the Output process, because the visualization is available as soon as the process starts running.<br />
* Failed (red): The process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb appears in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of jobs and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4490Running WRF-SFIRE with real data in the WRFx system2022-12-13T15:43:18Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda create -n wrfx python=3 <br />
conda activate wrfx<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml scipy pyproj gdal rasterio <br />
conda install -c conda-forge matplotlib basemap paramiko dill psutil flask pytz pandas<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run a serial real.exe. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication without a passkey<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest, 2 cores and 4GB memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs which shows a list of jobs that are running and it allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4489Running WRF-SFIRE with real data in the WRFx system2022-12-13T01:52:42Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda create -n wrfx python=3 <br />
conda activate wrfx<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml scipy pyproj gdal rasterio matplotlib basemap paramiko dill psutil flask pytz pandas<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run a serial real.exe. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication without a passkey<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest, 2 cores and 4GB memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
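Once the account exists, you can verify passwordless access with a quick remote command, for example<br />
 ssh user_id@demo.openwfm.org hostname<br />
which should print the server's host name without prompting for a password.<br />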
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change to the wrfxweb directory and copy the template to create a new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
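For example, a minimal etc/conf.json might look like the sketch below; the short_user_id and organization name are placeholders, and the template may contain additional keys that can be left at their defaults:<br />
 {<br />
   "url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>",<br />
   "organization": "Organization Name",<br />
   "flags": []<br />
 }<br />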
<br />
Also, create a new simulations folder<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are done in the wrfxpy installation set up in the previous section. <br />
<br />
Configure the following keys in etc/conf.json of the wrfxpy installation that will send simulations to this server<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]], but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
answer yes.<br />
<br />
Then, you should see the post-processed time steps of your simulation appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change to the wrfxctrl directory and copy the template to create a new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
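Note that this runs the Flask development server in the foreground. One generic way (not specific to wrfxctrl) to keep it running after you log out is, for example,<br />
 nohup python wrfxctrl.py >& wrfxctrl_console.log &<br />
where the log file name is arbitrary.<br />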
<br />
====Starting page====<br />
<br />
Now you can open your web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides the options to start a new fire using the ''Start a new fire'' button and to browse the existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will access the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or by entering lat-lon coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes as well as the atmospheric boundary condition data. Once you have all the simulation options defined, scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically open the monitoring page, where you can track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a summary of important information about the simulation (see figure below). After this information, there is a list of steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): The process has not started yet and is waiting for other processes. All processes are initialized with this status.<br />
* Running (yellow): The process is currently running. Processes switch from Waiting to Running when they start.<br />
* Success (green): The process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): Part of the output is ready while the rest is still in progress. This status is only used by the Output process, because the visualization is available as soon as the process starts running.<br />
* Failed (red): The process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb appears in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of the jobs that are running and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4488Running WRF-SFIRE with real data in the WRFx system2022-12-13T01:42:06Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda create -n wrfx python=3 <br />
conda activate wrfx<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml scipy pyproj gdal rasterio proj4 matplotlib basemap paramiko dill psutil flask pytz pandas<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run a serial real.exe. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication without a passkey<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest, 2 cores and 4GB memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs which shows a list of jobs that are running and it allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4487Running WRF-SFIRE with real data in the WRFx system2022-12-13T01:03:57Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 <br />
conda activate wrfx<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml scipy pyproj gdal rasterio proj4 matplotlib basemap paramiko dill psutil flask pytz pandas<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run a serial real.exe. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication without a passkey<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest, 2 cores and 4GB memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
The top of the monitoring page lists important information about the simulation (see figure below). Below that, there is a list of processing steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): the process has not started yet and is waiting for a previous process to finish. All processes are initialized with this status.<br />
* Running (yellow): the process is currently in progress. A process switches from Waiting to Running when it starts.<br />
* Success (green): the process finished successfully. A process switches from Running to Success when it completes without errors.<br />
* Available (green): part of the output is ready while the rest is still being produced. This status is used only by the Output process, because the visualization is available as soon as that process starts running.<br />
* Failed (red): the process finished with a failure. A process switches from Running to Failed when it fails.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the contents of the log file.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, the ''Visualization'' element of the information section will show a link to the simulation on the web server generated using wrfxweb. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the overview of current jobs, which lists the jobs that are running and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4486Running WRF-SFIRE with real data in the WRFx system2022-12-13T01:03:40Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system, using the latest version of all the components, with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a Python automation system for HPC [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recommended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set environment variables pointing to the specific libraries installed: <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
 setenv GEOTIFF /path/to/geotiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure CHEM (optional)===<br />
setenv WRF_CHEM 1<br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available.<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If there are no compilation errors, you can also compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous steps fails: <br />
 ./clean -a<br />
 ./configure<br />
Then add -nostdinc at the end of the CPP flag in configure.wrf and repeat the compilation. If this does not fix the build, look for issues in your environment.<br />
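For reference, after this change the CPP line in configure.wrf would look something like the sketch below (the exact preprocessor command and other flags vary with the compiler option chosen):<br />
 CPP             = /lib/cpp -P -nostdinc<br />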
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Then add -nostdinc at the end of the CPP flag in configure.wps and repeat the compilation. If this does not fix the build, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get the tar file with the static data and untar it. Keep in mind that this is the file that contains the land use, elevation, soil type, and other static data for WRF (for geogrid.exe, to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend creating a dedicated conda environment. Install the prerequisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 <br />
conda activate wrfx<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml scipy pyproj gdal rasterio proj4 matplotlib basemap paramiko dill psutil flask pytz<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
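One way to check which proj data directory your Python environment actually uses is to ask pyproj directly (assuming the wrfx environment is active and a recent pyproj version):<br />
 python -c "import pyproj; print(pyproj.datadir.get_data_dir())"<br />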
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths come from previous steps of this guide, except the wget path, which lets you point to a preferred version of wget. To find the default wget:<br />
which wget<br />
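Putting these together, a filled-in set of keys in etc/conf.json might look like the following (the paths below are placeholders for a hypothetical installation under /home/user):<br />
 "qsys": "speedy",<br />
 "wps_install_path": "/home/user/WPS",<br />
 "wrf_install_path": "/home/user/WRF-SFIRE",<br />
 "sys_install_path": "/home/user/wrfxpy",<br />
 "wps_geog_path" : "/home/user/WPS_GEOG",<br />
 "wget" : "/usr/bin/wget"<br />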
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
Then the file etc/qsub/speedy.sub should contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
 %(exec_path)s the path to the wrf.exe that should be executed<br />
 %(cwd)s the job working directory<br />
 %(task_id)s a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy already ships with configurations for colibri, gross, kingspeak, and cheyenne.<br />
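As a further illustration, on a cluster that uses SLURM rather than SGE, an analogous template might look like the sketch below. This is only an assumption about a generic SLURM site, not a template shipped with wrfxpy; adjust the partition, module loads, and MPI launcher to your system:<br />
 #!/bin/bash<br />
 #SBATCH --job-name=%(task_id)s<br />
 #SBATCH --chdir=%(cwd)s<br />
 #SBATCH --nodes=%(nodes)d<br />
 #SBATCH --ntasks-per-node=%(ppn)d<br />
 #SBATCH --time=%(wall_time_hrs)d:00:00<br />
 cd %(cwd)s<br />
 mpirun -np %(np)d %(exec_path)s<br />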
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
If any of these capabilities is required, create a token on the corresponding page, then do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
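Since the .netrc file stores a password in plain text, it is good practice (and some tools require it) to make the file readable only by you:<br />
 chmod 600 $HOME/.netrc<br />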
<br />
====GOES data====<br />
<br />
GOES-16 and GOES-17 data are retrieved using the [https://aws.amazon.com/cli/ AWS Command Line Interface], so it needs to be installed. To check whether it is already installed, type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
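To verify that the AWS CLI works and that the public GOES archives are reachable, you can list the NOAA open-data bucket anonymously (assuming the standard noaa-goes16 bucket name; no AWS credentials are needed):<br />
 aws s3 ls --no-sign-request s3://noaa-goes16/<br />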
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs high-resolution elevation and fuel category data. If you have GeoTIFF files for elevation and fuel, you can specify their locations in etc/vtables/geo_vars.json. To do so, run<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files suitable for WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories should be interpolated using nearest neighbors, i.e., the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable, using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]], or you can simply use the GeoTIFF files included in the static dataset under WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, static terrain data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to the wrfxpy directory and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at each step to accept the default values, until the queuing system prompt; there, select the cluster configured earlier (speedy in this example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process, step by step, without any errors.<br />
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up with a valid "mesowest" token and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download observations from all the necessary weather stations and estimate fuel moisture over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy may fail when it tries to execute ./real.exe. This happens on systems that do not allow running an MPI binary directly from the command line. real.exe is not launched through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide a serial installation of WRF-SFIRE so that a serial real.exe can be run. To do so, repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
 ./clean -a<br />
 ./configure<br />
Then add -nostdinc at the end of the CPP flag in configure.wrf and repeat the compilation. If this does not fix the build, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then we need to add this path to the etc/conf.json file in wrfxpy, so<br />
 cd ../wrfxpy<br />
and add to the etc/conf.json file the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine over ssh with key authentication, without being prompted for a password or passphrase<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with the nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest; 2 cores and 4 GB of memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create the ~/.ssh directory (if you do not have one already)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one already) by running<br />
 ssh-keygen<br />
and following all the steps (you can accept the defaults by pressing enter at each prompt). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
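Once the account is created, you can confirm that key-based authentication works by running a remote command non-interactively, for example:<br />
 ssh user_id@demo.openwfm.org hostname<br />
If this prints the server's hostname without prompting for a password, the shuttle transfers from wrfxpy should work as well.<br />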
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next keys are set in the desired installation of wrfxpy (generated in the previous section), not in wrfxweb. <br />
<br />
Configure the following keys in etc/conf.json of that wrfxpy installation:<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]], but this time, when simple_forecast.sh asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then you should see the post-processed time steps of your simulation appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py:<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki>. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
The starting page shows general information about the cluster and provides the options of starting a new fire using the ''Start a new fire'' button and browsing the existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, selecting ''Start a new fire'' takes you to the submission page. Here you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or by entering latitude-longitude coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains, their resolutions and sizes, and the source of the atmospheric boundary conditions. Once all the simulation options are defined, scroll down to the end (next figure) and select the ''Ignite'' button. This automatically opens the monitoring page, where you can track the progress of the simulation. The images below show an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
The top of the monitoring page lists important information about the simulation (see figure below). Below that, there is a list of processing steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): the process has not started yet and is waiting for a previous process to finish. All processes are initialized with this status.<br />
* Running (yellow): the process is currently in progress. A process switches from Waiting to Running when it starts.<br />
* Success (green): the process finished successfully. A process switches from Running to Success when it completes without errors.<br />
* Available (green): part of the output is ready while the rest is still being produced. This status is used only by the Output process, because the visualization is available as soon as that process starts running.<br />
* Failed (red): the process finished with a failure. A process switches from Running to Failed when it fails.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the contents of the log file.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, the ''Visualization'' element of the information section will show a link to the simulation on the web server generated using wrfxweb. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the overview of current jobs, which lists the jobs that are running and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4485Running WRF-SFIRE with real data in the WRFx system2022-12-13T00:49:15Z<p>Afarguell: /* Clone github repository */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend creating a dedicated conda environment. Install the prerequisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 <br />
conda activate wrfx<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml scipy pyproj gdal rasterio proj4 matplotlib=3.2.2 basemap paramiko dill psutil flask pytz<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
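To check that the key Python packages import correctly, you can run a quick test inside the activated wrfx environment, for example:<br />
 python -c "import netCDF4, pygrib, pyproj, rasterio, simplekml; print('packages OK')"<br />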
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that the PROJ_LIB environment variable points to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
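Either way, you can verify that the PROJ data files are actually present at the chosen location, for example (for PROJ 6 and later the data directory contains the file proj.db):<br />
 echo $PROJ_LIB<br />
 ls $PROJ_LIB/proj.db<br />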
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and check out the angel branch<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
 "qsys": "key from clusters.json",<br />
 "wps_install_path": "/path/to/WPS",<br />
 "wrf_install_path": "/path/to/WRF",<br />
 "sys_install_path": "/path/to/wrfxpy",<br />
 "wps_geog_path" : "/path/to/WPS_GEOG",<br />
 "wget" : "/path/to/wget"<br />
<br />
Note that all of these paths come from previous steps of this guide, except the wget path, which needs to be specified so that a preferred version is used. To find the default wget,<br />
which wget<br />
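For illustration, these keys filled in might look like the following (all paths here are hypothetical examples; leave the rest of the template as it is):<br />
 {<br />
   "qsys": "speedy",<br />
   "wps_install_path": "/home/user/WPS",<br />
   "wrf_install_path": "/home/user/WRF-SFIRE",<br />
   "sys_install_path": "/home/user/wrfxpy",<br />
   "wps_geog_path" : "/home/user/WPS_GEOG",<br />
   "wget" : "/usr/bin/wget"<br />
 }<br />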
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
Then the file etc/qsub/speedy.sub should contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
 %(exec_path)s the path to the wrf.exe that should be executed<br />
 %(cwd)s the job working directory<br />
 %(task_id)s a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy already includes configurations for colibri, gross, kingspeak, and cheyenne.<br />
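If your cluster uses a different scheduler, the same mechanism applies. For instance, a hypothetical SLURM-based entry and template could look roughly like the following sketch (the delimiter and job-number index depend on how your scheduler prints job IDs, so verify them against a working submission):<br />
 {<br />
   "myslurm" : {<br />
     "qsub_cmd" : "sbatch",<br />
     "qdel_cmd" : "scancel",<br />
     "qstat_cmd" : "squeue",<br />
     "qstat_arg" : "",<br />
     "qsub_delimiter" : " ",<br />
     "qsub_job_num_index" : 3,<br />
     "qsub_script" : "etc/qsub/myslurm.sub"<br />
   }<br />
 }<br />
with etc/qsub/myslurm.sub containing, for example:<br />
 #!/bin/bash<br />
 #SBATCH --job-name=%(task_id)s<br />
 #SBATCH --chdir=%(cwd)s<br />
 #SBATCH --time=%(wall_time_hrs)d:00:00<br />
 #SBATCH --nodes=%(nodes)d<br />
 #SBATCH --ntasks-per-node=%(ppn)d<br />
 mpirun -np %(np)d %(exec_path)s<br />
Here the index 3 assumes sbatch reports "Submitted batch job NNNN", so the job number is the fourth space-separated word.<br />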
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token on the corresponding page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
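For example, a filled-in etc/tokens.json using a list of MesoWest tokens might look like this (the token strings below are placeholders):<br />
 {<br />
   "mesowest" : ["first-mesowest-token", "second-mesowest-token"],<br />
   "ladds" : "token-from-laads",<br />
   "nrt" : "token-from-lance"<br />
 }<br />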
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
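Since this file stores your Earthdata password in plain text, it is good practice to make it readable only by you:<br />
 chmod 600 ~/.netrc<br />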
<br />
====GOES data====<br />
<br />
GOES-16 and GOES-17 data are retrieved using the [https://aws.amazon.com/cli/ AWS Command Line Interface], so it needs to be installed. To check whether it is already installed, type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
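If you install into a custom prefix as above, make sure the bin directory you chose is on your PATH so the aws command can be found, and then verify the installation (csh syntax, using the path chosen above):<br />
 setenv PATH /path/to/bin:$PATH<br />
 aws --version<br />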
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files suitable for WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories should be interpolated using nearest neighbors, i.e. the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be treated as non-burnable by mapping them to category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Alternatively, you can use the GeoTIFF files included in the static dataset (WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire) by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, static terrain data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at each prompt to accept the default values, until the queuing system prompt; there, select the cluster configured above (speedy in this example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the workspace path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any errors.<br />
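While the forecast runs, you can follow its progress by tailing the log file, for example:<br />
 tail -f logs/experiment.log<br />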
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up with a "mesowest" token and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download data from all the necessary weather stations and estimate fuel moisture over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line on the head node (we do not run real.exe through mpirun because mpirun on the head node may not be allowed either). In that case, one needs to provide an installation of WRF-SFIRE compiled in serial mode in order to run a serial real.exe. To do so, repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf and repeat the compilation. If this does not fix the build, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
 cd ../wrfxpy<br />
and add to etc/conf.json the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh, using key authentication with a key that has no passphrase<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with the nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest: 2 cores and 4 GB of memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create the ~/.ssh directory (if you do not have one already)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one already) by running<br />
 ssh-keygen<br />
and following the prompts (you can accept the defaults, so just keep pressing enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next keys are set in the wrfxpy installation configured in the previous section. <br />
<br />
Configure the following keys in etc/conf.json of your wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". With this, everything should be ready to send post-processed simulations to the visualization server.<br />
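For illustration, a filled-in set of these shuttle keys might look like this (the key path, user_id, and short_user_id are placeholders for your own values):<br />
 "shuttle_ssh_key": "/home/user/.ssh/id_rsa",<br />
 "shuttle_remote_user": "user_id",<br />
 "shuttle_remote_host": "demo.openwfm.org",<br />
 "shuttle_remote_root": "/home/user_id/wrfxweb/fdds/simulations",<br />
 "shuttle_lock_path": "/tmp/short_user_id"<br />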
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation's post-processed time steps appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples; for security reasons, choose your own values and make them as random as possible (see the example below). <br />
* It is recommended to remove the entries "jobs_path", "logs_path", and "sims_path"; they then default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
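One simple way to generate reasonably random values for the "root" string and the port number is, for example:<br />
 python -c "import secrets; print('/' + secrets.token_urlsafe(8) + '/')"<br />
 python -c "import random; print(random.randint(1024, 49151))"<br />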
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py by doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides the option of starting a new fire using the ''Start a new fire'' button and of browsing existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will access the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or by specifying the latitude-longitude coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically open the monitoring page, where you will be able to track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Indicates that the process has not started yet and is waiting for another process. All processes are initialized with this status.<br />
* Running (yellow): Indicates that the process is currently running (in progress). Processes switch from Waiting to Running when they start.<br />
* Success (green): Indicates that the process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): Indicates that part of the output is already done while the rest is still in progress. This status is only used by the Output process, because the visualization becomes available as soon as the process starts running.<br />
* Failed (red): Indicates that the process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb will appear in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of jobs and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Namelist.fire&diff=4481Namelist.fire2022-12-02T19:47:23Z<p>Afarguell: </p>
<hr />
<div>{{users guide}}<br />
<br />
<br />
This file serves to redefine the fuel categories if the user wishes to alter default fuel properties.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Variable name<br />
! Description<br />
|-<br />
|'''&fuel_scalars'''<br />
| Scalar fuel constants, common to all fuel categories.<br />
|-<br />
|cmbcnst<br />
| The energy released per unit fuel burned for cellulosic fuels (constant, 1.7433e7 J kg<sup>-1</sup>).<br />
|-<br />
|hfgl <br />
|The threshold heat flux from a surface fire at which point a canopy fire is ignited above (in W m<sup>-2</sup>).<br />
|-<br />
|fuelmc_g <br />
|Surface fuel, fuel moisture content (kg/kg, between 0.00 and 1.00).<br />
|-<br />
|fuelmc_c <br />
|Canopy fuel, fuel moisture content (kg/kg, between 0.00 and 1.00).<br />
|-<br />
|nfuelcats<br />
| Number of fuel categories defined (default: 13)<br />
|-<br />
|no_fuel_cat<br />
| The first fuel category to be ignored and taken as ‘no fuel’ (default: 14)<br />
|-<br />
|no_fuel_cat2<br />
| The last fuel category to be ignored and taken as ‘no fuel’ (default: 14). That is, all fuel with categories between no_fuel_cat and no_fuel_cat2 is ignored. Fuel with category 0 is also always ignored.<br />
|-<br />
|'''&fuel_categories'''<br />
| One number per fuel category.<br />
|-<br />
|windrf<br />
| Wind reduction factor from 20 ft to midflame height (dimensionless)<br />
|-<br />
|fgi <br />
|The initial mass loading of surface fuel (kg m<sup>-2</sup>) in each fuel category<br />
|-<br />
|fueldepthm <br />
|Fuel depth (m)<br />
|-<br />
|savr<br />
| Fuel surface-area-to-volume ratio (ft<sup>-1</sup>)<br />
|-<br />
|fuelmce<br />
| Fuel moisture content of extinction (kg/kg, from 0.00 – 1.00).<br />
|-<br />
|fueldens <br />
|Fuel particle density in lb ft<sup>-3</sup> (32 if solid, 19 if rotten)<br />
|-<br />
|st<br />
| Fuel particle total mineral content. (kg minerals/kg wood)<br />
|-<br />
|se<br />
| Fuel particle effective mineral content. (kg minerals – kg silica)/kg wood<br />
|-<br />
|weight<br />
| Weighting parameter that determines the slope of the mass loss curve. This can range from about 5 (fast burnup) to 1000 (about a 40% decrease in mass over 10 minutes).<br />
|-<br />
|fci_d <br />
|Initial dry mass loading of canopy fuel (in kg m<sup>-2</sup>)<br />
|-<br />
|fct<br />
| The burnout time of canopy fuel once ignited (s)<br />
|-<br />
|ichap<br />
| Whether this is a chaparral category to be treated differently, using an empirical rate-of-spread relationship that depends only on wind speed:<br />
|-<br />
|<br />
|1: yes, this is a chaparral category and should be treated differently<br />
|-<br />
|<br />
|0: no, this is not a chaparral category or should not be treated differently. <br />
|-<br />
|<br />
|Primarily used for Fuel Category 4.<br />
|-<br />
| fmc_gw01<br />
| The proportion of the fuel that is in moisture class 1 (between 0.0 and 1.0)<br />
|-<br />
| fmc_gw02<br />
| The proportion of the fuel that is in moisture class 2, etc. up to 5. The proportions should add up to 1.<br />
|-<br />
|'''&moisture'''<br />
| The [[fuel moisture model]] namelist.<br />
|-<br />
| moisture_classes<br />
| Number of moisture classes, at most 5. Each fuel consists of a mixture of moisture classes. The following entries take one number per class.<br />
|-<br />
| drying_model <br />
| number of the drying model used in each class. At the moment only model 1 is supported. Model parameters follow for each class.<br />
|-<br />
| drying_lag<br />
| hours it takes to get 64% closer to the equilibrium moisture content<br />
|-<br />
| wetting_model<br />
| number of the wetting model used in each class. At the moment only model 1 is supported. Model parameters follow for each class.<br />
|-<br />
| wetting_lag<br />
| hours of very strong rain it takes to get 64% closer to saturation<br />
|-<br />
| saturation_moisture<br />
| the maximum fuel moisture content (in kg/kg, default 2.5)<br />
|-<br />
| saturation_rain<br />
| rain intensity (mm/h) such that the rate of soaking is at 64% of wetting_lag<br />
|- <br />
| rain_treshold<br />
| rain under this intensity (mm/h) does not count<br />
|-<br />
| fmc_gc_initialization<br />
| moisture initialization: 0 = from WRF input (files '''wrfinput''' or '''wrfrst'''), 1 = from the scalar fuelmc_g in namelist.input, 2 = from equilibrium<br />
|}<br />
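As an illustration, a minimal namelist.fire fragment overriding a few of the scalar values above might look like the following (the numbers are examples only, not recommended settings):<br />
 &fuel_scalars<br />
   cmbcnst  = 1.7433e+07,<br />
   hfgl     = 17.e4,<br />
   fuelmc_g = 0.08,<br />
   fuelmc_c = 1.00,<br />
   nfuelcats = 13,<br />
   no_fuel_cat = 14,<br />
   no_fuel_cat2 = 14,<br />
 /<br />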
<br />
[[Category:WRF-Fire]]</div>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 <br />
conda activate wrfx<br />
conda install -c conda-forge netcdf4 h5py pyhdf pygrib f90nml lxml simplekml scipy pyproj gdal rasterio proj4 matplotlib=3.2.2 basemap paramiko dill psutil flask<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in wrxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run a serial real.exe. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication without a passkey<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest, 2 cores and 4GB memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs which shows a list of jobs that are running and it allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4456Running WRF-SFIRE with real data in the WRFx system2022-03-18T17:06:50Z<p>Afarguell: /* Install Anaconda distribution */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2021.11-Linux-x86_64.sh<br />
./Anaconda3-2021.11-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests lxml<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap cartopy rasterio<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in wrxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths come from previous steps of this wiki, except the wget path, which needs to be specified if you want to use a particular wget binary. To find the default wget, run<br />
which wget<br />
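<br />
For illustration, a minimal hypothetical etc/conf.json for the speedy example, assuming everything was installed under /home/user, could look like the following (your installation may require additional keys from the template):<br />
<br />
 {<br />
   "qsys": "speedy",<br />
   "wps_install_path": "/home/user/WPS",<br />
   "wrf_install_path": "/home/user/WRF-SFIRE",<br />
   "sys_install_path": "/home/user/wrfxpy",<br />
   "wps_geog_path": "/home/user/WPS_GEOG",<br />
   "wget": "/usr/bin/wget"<br />
 }<br />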
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json; here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
The file etc/qsub/speedy.sub should then contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)s the path to the wrf.exe that should be executed<br />
%(cwd)s the job working directory<br />
%(task_id)s a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all of these variables need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
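<br />
For comparison, a hypothetical template for a cluster that uses SLURM instead of SGE might look like the sketch below (the directives and the MPI launcher are assumptions and must be adapted to your site, starting from a submission script that is known to work there):<br />
<br />
 #!/bin/bash<br />
 #SBATCH --job-name=%(task_id)s<br />
 #SBATCH --chdir=%(cwd)s<br />
 #SBATCH --nodes=%(nodes)d<br />
 #SBATCH --ntasks-per-node=%(ppn)d<br />
 #SBATCH --time=%(wall_time_hrs)d:00:00<br />
 mpirun -np %(np)d %(exec_path)s<br />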
<br />
Note: wrfxpy already ships with configurations for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token on the corresponding page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created at [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and entered in the etc/tokens.json file. The user can also specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created at [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and entered in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). Some data centers need to be accessed using the $HOME/.netrc file, so creating $HOME/.netrc is recommended, as follows:<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
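<br />
Since this file contains credentials, it is good practice to make it readable only by your user:<br />
<br />
 chmod 600 ~/.netrc<br />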
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system uses the [https://aws.amazon.com/cli/ AWS Command Line Interface], so you need to have it installed. To check whether it is already installed, type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow the installation instructions [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. On Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
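<br />
To verify that the AWS CLI works for anonymous downloads, you can try listing the public GOES-16 archive (this assumes the NOAA Open Data bucket name noaa-goes16; no AWS account is needed):<br />
<br />
 aws s3 ls --no-sign-request s3://noaa-goes16/<br />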
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have GeoTIFF files for elevation and fuel, you can specify their locations in etc/vtables/geo_vars.json. To do so, run<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files suitable for WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories you want to interpolate using nearest neighbors, i.e. the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Alternatively, you can use the GeoTIFF files included in the static dataset under WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, static terrain data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder containing the static terrain data.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press Enter at each prompt to accept the default values until the queuing system prompt; there, select the cluster configured earlier (speedy in this example).<br />
<br />
This will generate a job file under jobs/experiment.json (or named after the experiment name we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any errors.<br />
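<br />
While the forecast runs in the background, you can follow its progress with, for example,<br />
 tail -f logs/experiment.log<br />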
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up with a "mesowest" token and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather station data and run the fuel moisture estimate over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary directly from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE compiled in serial mode in order to run a serial real.exe, repeating the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf, and repeat the compilation. If this does not fix the build, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by ssh with key authentication, without being prompted for a password or passphrase<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with the nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest: 2 cores and 4 GB of memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create the ~/.ssh directory (if you do not already have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not already have one) by doing<br />
ssh-keygen<br />
and following all the steps (you can accept the defaults, so just press Enter at each prompt). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder by doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are performed in the wrfxpy installation set up in the previous section. <br />
<br />
Configure the following keys in etc/conf.json of the wrfxpy installation you are using:<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]], but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
answer yes this time.<br />
<br />
Then, you should see the post-processed time steps of your simulation appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py:<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides options for starting a new fire using the ''Start a new fire'' button and for browsing existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or specifying latitude-longitude coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes as well as the atmospheric boundary condition data. Once all the simulation options are defined, scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically open the monitoring page, where you can track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): The process has not started yet and is waiting for another process. All processes are initialized with this status.<br />
* Running (yellow): The process is currently running, i.e. in progress. A process switches from Waiting to Running when it starts running.<br />
* Success (green): The process finished successfully. A process switches from Running to Success when it finishes without errors.<br />
* Available (green): Part of the output is done and the rest is still in progress. This status is only used by the Output process, because the visualization becomes available as soon as the process starts running.<br />
* Failed (red): The process finished with a failure. A process switches from Running to Failed when it fails.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb appears in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of jobs that are running and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4452Running WRF-SFIRE with real data in the WRFx system2021-11-22T17:20:29Z<p>Afarguell: /* Simple forecast */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests lxml<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap cartopy rasterio<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in wrxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run a serial real.exe. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication without a passkey<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest, 2 cores and 4GB memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs which shows a list of jobs that are running and it allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4451Running WRF-SFIRE with real data in the WRFx system2021-11-22T17:14:13Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests lxml<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap cartopy rasterio<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in wrxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, static terrain data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to the wrfxpy directory and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will extract a static folder containing the static terrain data.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at each prompt to accept the default values until you reach the queuing system prompt, then select the cluster you configured (speedy in this example).<br />
<br />
This will generate a job file under jobs/experiment.json (or a file named after the experiment name you chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any errors.<br />
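<br />
The run can take a while; to follow its progress you can watch the log file as it is written, for example:<br />
<br />
tail -f logs/experiment.log<br />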
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up, the "mesowest" token is provided, and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather station data and estimate fuel moisture over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy can fail when it tries to execute ./real.exe. This happens on systems that do not allow executing MPI binaries from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE in serial mode in order to run a serial real.exe, repeating the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf, and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine over ssh with key authentication, without a password or passphrase<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with the nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest: 2 cores and 4 GB of memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create the ~/.ssh directory (if you do not have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one) by running<br />
ssh-keygen<br />
and following the prompts (you can accept the defaults by pressing enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
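<br />
For example, a minimal etc/conf.json without flags might look like this; all values are placeholders:<br />
<br />
{<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>",<br />
"organization": "Organization Name",<br />
"flags": []<br />
}<br />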
<br />
Also, create a new simulations folder by doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are to be configured in the desired installation of wrfxpy (set up in the previous section). <br />
<br />
Configure the following keys in etc/conf.json of that wrfxpy installation:<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]], but this time, when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py:<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
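<br />
If you want wrfxctrl to keep running after you log out, one option (a suggestion, not part of the original instructions) is to start it in the background, for example:<br />
<br />
nohup python wrfxctrl.py >& wrfxctrl.log &<br />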
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> page. This will show a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides options to start a new fire using the ''Start a new fire'' button and to browse existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, selecting ''Start a new fire'' takes you to the submission page. On this page, you can specify: 1) a short description of the simulation; 2) the ignition location, by clicking on an interactive map or specifying the latitude-longitude coordinates in degrees; 3) the ignition time and the forecast length; and 4) the simulation profile, which defines the number of domains, their resolutions and sizes, and the atmospheric boundary condition data. Once all the simulation options are defined, scroll down to the end (next figure) and select the ''Ignite'' button. This automatically opens the monitoring page, where you can track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the top of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): The process has not started yet and is waiting for other processes. All processes are initialized with this status.<br />
* Running (yellow): The process is currently running. Processes switch from Waiting to Running when they start.<br />
* Success (green): The process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): Part of the work is done and the rest is still in progress. This status is only used by the Output process, because the visualization becomes available as soon as that process starts running.<br />
* Failed (red): The process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb appears in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of jobs and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4408Running WRF-SFIRE with real data in the WRFx system2021-08-20T14:59:47Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests lxml<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr shapely<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in wrxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfxpy<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run a serial real.exe. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
wrfxweb is a web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb requirements===<br />
<br />
wrfxweb runs in a regular user account on a Linux server equipped with a web server. You need to be able to <br />
* transfer files and execute remote commands on the machine by passwordless ssh with key authentication without a passkey<br />
* access the directory wrfxweb/fdds in your account from the web<br />
<br />
===wrfxweb: server setup===<br />
<br />
You can set up your own server. We are using Ubuntu Linux with nginx web server, but other software should work too. Configuring the web server to use https is recommended. The resource requirements are modest, 2 cores and 4GB memory are more than sufficient. Simulations can be large, easily several GB each, so provision sufficient disk space. <br />
<br />
We can provide a limited amount of resources on our demo server to collaborators. To use our server, first make an ssh key on the machine where you run wrfxpy:<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Then send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
If your request is approved, you will be able to ssh to the demo server without any password.<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. They default to wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs which shows a list of jobs that are running and it allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4346Running WRF-SFIRE with real data in the WRFx system2021-06-11T03:02:08Z<p>Afarguell: /* Tokens configuration */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be psecific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in wrxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. Also, the user can specify a list of tokens to use.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system is using [https://aws.amazon.com/cli/ AWS Command Line Interface]. So, you would need to have it installed. To look if you have already installed it, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfxpy<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run real.exe in serial. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh the demo server without any password (only the passcode from the id_rsa key if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* The entries "host", "port", and "root" are only examples; for security reasons, you should choose your own values and make them as random as possible (see the sketch below). <br />
* It is recommended to remove the entries "jobs_path", "logs_path", and "sims_path". By default, they point to the wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
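<br />
A minimal sketch of etc/conf.json following these notes (the host, port, and root values are made up for illustration; choose your own, and keep the default jobs/logs/sims locations by omitting those keys):<br />
 {<br />
   "host" : "192.0.2.17",<br />
   "port" : "8871",<br />
   "root" : "/my_random_root/",<br />
   "wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
   "wrfxpy_path" : "/path/to/wrfxpy"<br />
 }<br />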
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
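<br />
Note that this is a foreground Flask development server; if you want it to keep running after you log out, one option (a sketch only; screen or tmux work equally well) is:<br />
 nohup python wrfxctrl.py >& wrfxctrl.log &<br />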
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> page. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides options for starting a new fire using the ''Start a new fire'' button and for browsing existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, selecting ''Start a new fire'' takes you to the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or by specifying the lat-lon coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes, and the atmospheric boundary conditions data. Finally, once all the simulation options are defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitoring page, where you will be able to track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Indicates that the process has not started yet and is waiting for other processes. All processes are initialized with this status.<br />
* Running (yellow): Indicates that the process is currently running. Processes switch from Waiting to Running when they start.<br />
* Success (green): Indicates that the process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): Indicates that part of the output is already available while the rest is still in progress. This status is only used by the Output process, because the visualization becomes available as soon as the process starts running.<br />
* Failed (red): Indicates that the process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which provides a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb will appear in the ''Visualization'' element of the information section. On this page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of running jobs and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4345Running WRF-SFIRE with real data in the WRFx system2021-06-11T03:01:04Z<p>Afarguell: /* Tokens configuration */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system as it stands right now, using the latest versions of all the components, with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recommended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If there is any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf, and repeat the compilation. If this does not solve the compilation, look for issues in your environment.<br />
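<br />
For example, the CPP line in configure.wrf might change roughly as follows (a sketch only; the exact flags vary by platform and compiler choice):<br />
 # before (platform dependent)<br />
 CPP = /lib/cpp -P<br />
 # after appending -nostdinc<br />
 CPP = /lib/cpp -P -nostdinc<br />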
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wps, and repeat the compilation. If this does not solve the compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get the tar file with the static data and untar it. Keep in mind that this is the file that contains the landuse, elevation, soil type data, etc. for WRF (geogrid.exe, to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
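<br />
A quick way to check (a sketch, assuming the wrfx environment is active and a reasonably recent pyproj):<br />
 conda activate wrfx<br />
 echo $PROJ_LIB<br />
 python -c "import pyproj; print(pyproj.datadir.get_data_dir())"<br />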
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in the wrfxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
 "qsys": "key from clusters.json",<br />
 "wps_install_path": "/path/to/WPS",<br />
 "wrf_install_path": "/path/to/WRF",<br />
 "sys_install_path": "/path/to/wrfxpy",<br />
 "wps_geog_path" : "/path/to/WPS_GEOG",<br />
 "wget" : "/path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
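<br />
Putting it together, a hypothetical filled-in fragment of etc/conf.json might look like this (all values are examples for illustration only):<br />
 {<br />
   "qsys": "speedy",<br />
   "wps_install_path": "/home/user/WPS",<br />
   "wrf_install_path": "/home/user/WRF-SFIRE",<br />
   "sys_install_path": "/home/user/wrfxpy",<br />
   "wps_geog_path": "/home/user/WPS_GEOG",<br />
   "wget": "/usr/bin/wget"<br />
 }<br />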
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy already has configurations for colibri, gross, kingspeak, and cheyenne.<br />
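<br />
For reference, a hypothetical entry for a SLURM-based cluster might look like the sketch below; the commands, the delimiter/index used to parse the "Submitted batch job NNN" output, and the template are assumptions to adapt to your site and to how your wrfxpy version parses the submission output:<br />
 "myslurm" : {<br />
   "qsub_cmd" : "sbatch",<br />
   "qdel_cmd" : "scancel",<br />
   "qstat_cmd" : "squeue",<br />
   "qstat_arg" : "",<br />
   "qsub_delimiter" : " ",<br />
   "qsub_job_num_index" : 3,<br />
   "qsub_script" : "etc/qsub/myslurm.sub"<br />
 }<br />
with a matching etc/qsub/myslurm.sub template along the lines of:<br />
 #!/bin/bash<br />
 #SBATCH --job-name=%(task_id)s<br />
 #SBATCH --chdir=%(cwd)s<br />
 #SBATCH --time=%(wall_time_hrs)d:00:00<br />
 #SBATCH --nodes=%(nodes)d<br />
 #SBATCH --ntasks-per-node=%(ppn)d<br />
 mpirun -np %(np)d %(exec_path)s<br />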
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token for some [https://earthdata.nasa.gov/ Earthdata] data centers. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"ladds" : "token-from-laads",<br />
"nrt" : "token-from-lance"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://developers.synopticdata.com/ MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the tokens from the respective data centers can be acquired and replaced in the etc/tokens.json file ([https://ladsweb.modaps.eosdis.nasa.gov/profile/#generate-token LAADS] and [https://nrt3.modaps.eosdis.nasa.gov/profile/app-keys LANCE]). There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
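<br />
Since this file stores a password in plain text, it is a good idea to make it readable only by you:<br />
 chmod 600 $HOME/.netrc<br />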
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system uses the [https://aws.amazon.com/cli/ AWS Command Line Interface], so you need to have it installed. To check if it is already installed, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
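<br />
You can then verify the installation and, for example, that the public GOES buckets are reachable (the bucket names below are the standard public NOAA buckets, accessed anonymously):<br />
 aws --version<br />
 aws s3 ls s3://noaa-goes16/ --no-sign-request<br />
 aws s3 ls s3://noaa-goes17/ --no-sign-request<br />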
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files that fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories you want to interpolate using nearest neighbors, i.e. the ones that you cannot map to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset (WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire) by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
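<br />
If you want to sanity-check a GeoTIFF before the geogrid processing, gdalinfo (installed with the gdal package above) prints its projection, extent, and resolution, for example:<br />
 gdalinfo /path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif<br />
 gdalinfo /path/to/WPS_GEOG/topo_fire/ned_data.tif<br />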
<br />
For running the fuel moisture model, static terrain data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
 conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system step; there, select the cluster we configured (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any error.<br />
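<br />
Since the forecast runs in the background, you can follow its progress by tailing the log, for example:<br />
 tail -f logs/experiment.log<br />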
<br />
====Fuel moisture model====<br />
If tokens.json is set, the "mesowest" token is provided, and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download data from all the necessary weather stations and run the fuel moisture model over the whole continental US.<br />
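<br />
Because rtma_cycler is typically rerun periodically, one hypothetical option is a cron entry similar to the sketch below (the path and the hourly schedule are assumptions, and the wrfx conda environment must be available inside the cron job, for example by activating it within the script):<br />
 0 * * * * cd /home/user/wrfxpy && ./rtma_cycler.sh anything >> logs/rtma_cycler.log 2>&1<br />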
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE in serial mode in order to run real.exe serially. To do so, repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf, and repeat the compilation. If this does not solve the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to the etc/conf.json file the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create the ~/.ssh directory (if you do not have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one) by doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh into the demo server without any password (only the passphrase of the id_rsa key, if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The following keys are set in the desired wrfxpy installation (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". With this, everything should be ready to send post-processed simulations to the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* The entries "host", "port", and "root" are only examples; for security reasons, you should choose your own values and make them as random as possible. <br />
* It is recommended to remove the entries "jobs_path", "logs_path", and "sims_path". By default, they point to the wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> page. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides options for starting a new fire using the ''Start a new fire'' button and for browsing existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, selecting ''Start a new fire'' takes you to the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or by specifying the lat-lon coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes, and the atmospheric boundary conditions data. Finally, once all the simulation options are defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitoring page, where you will be able to track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Indicates that the process has not started yet and is waiting for other processes. All processes are initialized with this status.<br />
* Running (yellow): Indicates that the process is currently running. Processes switch from Waiting to Running when they start.<br />
* Success (green): Indicates that the process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): Indicates that part of the output is already available while the rest is still in progress. This status is only used by the Output process, because the visualization becomes available as soon as the process starts running.<br />
* Failed (red): Indicates that the process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which provides a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb will appear in the ''Visualization'' element of the information section. On this page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of running jobs and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4344Running WRF-SFIRE with real data in the WRFx system2021-05-24T17:21:25Z<p>Afarguell: /* Tokens configuration */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system as it stands right now, using the latest versions of all the components, with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recommended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If there is any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf, and repeat the compilation. If this does not solve the compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wps, and repeat the compilation. If this does not solve the compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get the tar file with the static data and untar it. Keep in mind that this is the file that contains the landuse, elevation, soil type data, etc. for WRF (geogrid.exe, to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in the wrfxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
 "qsys": "key from clusters.json",<br />
 "wps_install_path": "/path/to/WPS",<br />
 "wrf_install_path": "/path/to/WRF",<br />
 "sys_install_path": "/path/to/wrfxpy",<br />
 "wps_geog_path" : "/path/to/WPS_GEOG",<br />
 "wget" : "/path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy already has configurations for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu/ MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created in [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====GOES data====<br />
<br />
For getting GOES16 and GOES17 data, the system uses the [https://aws.amazon.com/cli/ AWS Command Line Interface], so you need to have it installed. To check if it is already installed, you can just type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files that fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories you want to interpolate using nearest neighbors, i.e. the ones that you cannot map to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset (WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire) by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, static terrain data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
 conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system step; there, select the cluster we configured (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, the "mesowest" token is provided, and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download data from all the necessary weather stations and run the fuel moisture model over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE in serial mode in order to run real.exe serially. To do so, repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf, and repeat the compilation. If this does not solve the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to the etc/conf.json file the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create the ~/.ssh directory (if you do not have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one) by doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh into the demo server without any password (only the passphrase of the id_rsa key, if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The following keys are set in the desired wrfxpy installation (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". With this, everything should be ready to send post-processed simulations to the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* The entries "host", "port", and "root" are only examples; for security reasons, you should choose your own values and make them as random as possible. <br />
* It is recommended to remove the entries "jobs_path", "logs_path", and "sims_path". By default, they point to the wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> page. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides options for starting a new fire using the ''Start a new fire'' button and for browsing existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, selecting ''Start a new fire'' takes you to the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or by specifying the lat-lon coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes, and the atmospheric boundary conditions data. Finally, once all the simulation options are defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitoring page, where you will be able to track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Indicates that the process has not started yet and is waiting for other processes. All processes are initialized with this status.<br />
* Running (yellow): Indicates that the process is currently running. Processes switch from Waiting to Running when they start.<br />
* Success (green): Indicates that the process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): Indicates that part of the output is already available while the rest is still in progress. This status is only used by the Output process, because the visualization becomes available as soon as the process starts running.<br />
* Failed (red): Indicates that the process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which provides a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb will appear in the ''Visualization'' element of the information section. On this page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of running jobs and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4343Running WRF-SFIRE with real data in the WRFx system2021-05-18T16:51:35Z<p>Afarguell: /* WRFx: wrfxpy */</p>
<hr />
<div>[[Category:WRF-Fire|User's guide]]<br />
[[Category:WRF-SFIRE users guide]]<br />
[[Category:Howtos|Set up WRFx]]<br />
{{users guide}}<br />
<br />
Instructions to set up the whole WRFx system as it stands right now, using the latest versions of all the components, with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recommended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/geotiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If there are no compilation errors, compile em_fire as well<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
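<br />
If both compilations succeed, the executables should appear in the main directory; a quick check (assuming the standard WRF build layout, where em_real produces real.exe and wrf.exe and em_fire produces ideal.exe) is<br />
 ls -l main/*.exe<br />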
<br />
If any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
In configure.wrf, add -nostdinc at the end of the CPP flag, and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
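<br />
As an illustration only (the exact CPP line differs between configure options), the flag can be appended with a one-liner such as:<br />
 sed -i 's/^CPP *=.*/& -nostdinc/' configure.wrf<br />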
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
In configure.wps, add -nostdinc at the end of the CPP flag, and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get the tar file with the static data and untar it. Keep in mind that this is the file that contains the land use, elevation, soil type data, etc. for WRF (geogrid.exe, to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
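<br />
A quick way to verify that the wrfx environment is usable is to try importing a few of the key packages installed above (this is only a sanity check, not part of the official setup):<br />
 conda activate wrfx<br />
 python -c "from osgeo import gdal; import netCDF4, pyproj, pygrib, simplekml; print('wrfx environment OK')"<br />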
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and, in the wrfxpy folder, check out the angel branch<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths come from previous steps of this wiki, except the wget path, which needs to be specified if you want to use a preferred version. To find the default wget, run<br />
which wget<br />
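<br />
For concreteness, a filled-in etc/conf.json might look like the following; all paths here are hypothetical examples and should be replaced with the locations from the previous steps (any other keys already present in the template can be left at their defaults):<br />
 {<br />
   "qsys": "speedy",<br />
   "wps_install_path": "/home/user/WPS",<br />
   "wrf_install_path": "/home/user/WRF-SFIRE",<br />
   "sys_install_path": "/home/user/wrfxpy",<br />
   "wps_geog_path": "/home/user/WPS_GEOG",<br />
   "wget": "/usr/bin/wget"<br />
 }<br />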
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json; here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
The file etc/qsub/speedy.sub should then contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)s the path to the wrf.exe that should be executed<br />
%(cwd)s the job working directory<br />
%(task_id)s a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
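<br />
If your cluster runs SLURM rather than SGE, an analogous template might look like the sketch below. This is an illustration only: the directives and the mpirun invocation must be adapted to your site, and the corresponding clusters.json entry would point "qsub_cmd" to sbatch, "qdel_cmd" to scancel, and "qstat_cmd" to squeue, with the remaining keys adjusted to how sbatch reports job numbers.<br />
 #!/bin/bash<br />
 #SBATCH --job-name=%(task_id)s<br />
 #SBATCH --chdir=%(cwd)s<br />
 #SBATCH --nodes=%(nodes)d<br />
 #SBATCH --ntasks-per-node=%(ppn)d<br />
 #SBATCH --time=%(wall_time_hrs)d:00:00<br />
 mpirun -np %(np)d %(exec_path)s<br />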
<br />
Note: wrfxpy already has configurations for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, some data needs to be accessed and downloaded using a specific token created for the user. For instance, to run the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified by creating the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of the previous capabilities is required, create a token on the corresponding page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running the fuel moisture model, a new MesoWest user can be created at [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and placed in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created at [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and placed in the etc/tokens.json file. Some data centers need to be accessed using the $HOME/.netrc file, so creating the $HOME/.netrc file is recommended, as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
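<br />
Since the .netrc file stores a password in plain text, it should be readable only by you (many tools will refuse to use it otherwise):<br />
 chmod 600 $HOME/.netrc<br />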
<br />
====GOES data====<br />
<br />
To get GOES16 and GOES17 data, the system uses the [https://aws.amazon.com/cli/ AWS Command Line Interface], so you need to have it installed. To check whether it is already installed, you can type<br />
<br />
aws help<br />
<br />
If the command is not found, you can follow installation instructions from [https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html here]. If you are using Linux, you can do:<br />
<br />
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />
unzip awscliv2.zip<br />
./aws/install -i /path/to/lib -b /path/to/bin<br />
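<br />
Once the AWS CLI is installed, you can verify that it can reach the public GOES archives. Assuming the NOAA open-data buckets noaa-goes16 and noaa-goes17, the listing below needs no credentials:<br />
 aws s3 ls s3://noaa-goes16/ --no-sign-request<br />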
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine automatically processes these files and converts them into geogrid files suitable for WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories you want to interpolate using nearest neighbors, i.e. the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Alternatively, you can use the GeoTIFF files included in the static dataset (WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire) by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, static terrain data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the steps to accept the default values until the queuing system prompt, where we select the cluster we configured (speedy in this example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any errors.<br />
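<br />
While the forecast runs, you can follow its progress and check for problems with standard tools, for example:<br />
 tail -f logs/experiment.log<br />
 grep -i error logs/experiment.log<br />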
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather station observations and estimate fuel moisture over the whole continental US.<br />
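<br />
Since rtma_cycler is meant to be run repeatedly, one way to automate it is a cron entry such as the sketch below; the path and the hourly schedule are hypothetical and only for illustration:<br />
 0 * * * * cd /path/to/wrfxpy && ./rtma_cycler.sh anything > logs/rtma_cycler.log 2>&1<br />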
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line (we do not run real.exe through mpirun because mpirun may not be allowed on the head node). In that case, one needs to provide an installation of WRF-SFIRE built in serial mode so that real.exe can be run serially. To do so, repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag, and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to the etc/conf.json file the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create the ~/.ssh directory (if you do not have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one) by doing<br />
ssh-keygen<br />
and following all the steps (you can accept the defaults by pressing enter at each prompt). <br />
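<br />
For example, one possible invocation and a check of the resulting public key (the exact options are up to you):<br />
 ssh-keygen -t rsa -b 4096<br />
 cat ~/.ssh/id_rsa.pub<br />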
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh into the demo server without a password (only the passphrase of the id_rsa key, if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
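<br />
As an illustration, a minimal filled-in etc/conf.json could look like this (the organization name is a placeholder and the flags list is left empty):<br />
 {<br />
   "url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>",<br />
   "organization": "Example University",<br />
   "flags": []<br />
 }<br />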
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are performed in the desired installation of wrfxpy (set up in the previous section). <br />
<br />
Configure the following keys in etc/conf.json of that wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see the post-processed time steps of your simulation appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. By default they are defined to be in wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py by doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
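<br />
Note that this runs the Flask development server in the foreground; if you want it to keep running after you log out, one option is to start it with nohup (or inside a screen or tmux session):<br />
 nohup python wrfxctrl.py > wrfxctrl.log 2>&1 &<br />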
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides the options of starting a new fire using the ''Start a new fire'' button and browsing the existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, selecting ''Start a new fire'' takes you to the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or specifying the lat-lon coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once all the simulation options are defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitoring page, where you will be able to track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see important information about the simulation (see the figure below). Below that, there is a list of steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): Indicates that the process has not started yet and is waiting for other processes. All processes are initialized with this status.<br />
* Running (yellow): Indicates that the process is still in progress. All processes switch from Waiting to Running when they start running.<br />
* Success (green): Indicates that the process finished successfully. All processes switch from Running to Success when they finish without errors.<br />
* Available (green): Indicates that part of the work is done while the rest is still in progress. This status is used only by the Output process, because the visualization becomes available as soon as the process starts running.<br />
* Failed (red): Indicates that the process finished with a failure. All processes switch from Running to Failed when they exit with an error.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb appears in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs overview, which shows a list of running jobs and allows the user to cancel or delete any simulation that has run or is still running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4331Running WRF-SFIRE with real data in the WRFx system2020-12-21T23:29:36Z<p>Afarguell: /* Configuration */</p>
<hr />
<div>Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be psecific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in wrxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://simplekml.readthedocs.org/en/latest MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running fuel moisture model, a new MesoWest user can be created in [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfxpy<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run real.exe in serial. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh the demo server without any password (only the passcode from the id_rsa key if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
If no flags are required, one can specify an empty list or remove the key.<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. By default they are defined to be in wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs which shows a list of jobs that are running and it allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4330Running WRF-SFIRE with real data in the WRFx system2020-12-21T23:28:36Z<p>Afarguell: /* Configuration */</p>
<hr />
<div>Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recommended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/geotiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If there are no compilation errors, compile em_fire as well<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get the tar file with the static data and untar it. Keep in mind that this is the file that contains the land use, elevation, soil type data, etc. for WRF (geogrid.exe, to be specific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and, in the wrfxpy folder, check out the angel branch<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, some data needs to be accessed and downloaded using a specific token created for the user. For instance, to run the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified by creating the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running fuel moisture model, a new MesoWest user can be created in [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine automatically processes these files and converts them into geogrid files suitable for WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories you want to interpolate using nearest neighbors, i.e. the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]], or you can simply use the GeoTIFF files included in the static dataset under WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire by specifying in etc/vtables/geo_vars.json (a quick sanity check of these files is sketched after the example)<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
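<br />
A quick way to confirm that the GeoTIFF files open correctly and carry a coordinate reference system is to read their metadata with rasterio (installed earlier in the wrfx environment). The paths below are the same placeholder paths assumed above:<br />
<br />
 # Illustrative sanity check of the GeoTIFF inputs with rasterio.<br />
 import rasterio<br />
 files = {<br />
     "NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
     "ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif",<br />
 }<br />
 for name, path in files.items():<br />
     with rasterio.open(path) as src:<br />
         print(name, src.crs, src.res, src.width, "x", src.height)<br />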
<br />
Running the fuel moisture model requires static terrain data. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to the wrfxpy directory and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder containing the static terrain data.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfxpy<br />
./simple_forecast.sh<br />
Press enter at each step to accept the default values until the queuing system prompt, where you select the cluster you configured (speedy in this example).<br />
<br />
This will generate a job file under jobs/experiment.json (or under the name of the experiment that you chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, in a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any errors.<br />
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up with the "mesowest" token and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather station data and estimate the fuel moisture over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy may fail when it tries to execute ./real.exe. This happens on systems that do not allow running an MPI binary from the command line (real.exe is not run through mpirun because mpirun may not be allowed on the head node). In that case, one needs to provide a serial installation of WRF-SFIRE so that real.exe can be run in serial mode. Repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE:<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf, and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create the ~/.ssh directory (if you do not already have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not already have one) by doing<br />
ssh-keygen<br />
and following all the steps (you can accept the defaults, so just press enter at each prompt). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and will be able to ssh into the demo server without a password (only with the passphrase of your id_rsa key, if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
"organization": "Organization Name"<br />
"flags": ["Flag 1", "Flag 2", ...]<br />
<br />
Also, create a new simulations folder:<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next settings are made in the etc/conf.json of the desired wrfxpy installation (the one set up in the previous section). <br />
<br />
Configure the following keys in the etc/conf.json of that wrfxpy installation:<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". With that, everything should be ready to send post-processed simulations to the visualization server; a small connectivity check is sketched below.<br />
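<br />
Before sending a full simulation, it may be worth confirming that these shuttle settings really allow a passwordless connection from the wrfxpy machine. The sketch below uses paramiko (installed in the wrfx environment); the host name, user, and paths are the placeholders from the keys above:<br />
<br />
 # Illustrative connectivity check for the shuttle settings (not part of wrfxpy).<br />
 import paramiko<br />
 client = paramiko.SSHClient()<br />
 client.set_missing_host_key_policy(paramiko.AutoAddPolicy())<br />
 client.connect("demo.openwfm.org", username="user_id",<br />
                key_filename="/path/to/id_rsa")<br />
 sftp = client.open_sftp()<br />
 print(sftp.listdir("/home/user_id/wrfxweb/fdds/simulations"))<br />
 sftp.close()<br />
 client.close()<br />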
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
answer yes.<br />
<br />
Then, you should see the post-processed time steps of your simulation appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json:<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* The entries "host", "port", and "root" above are only examples; for security reasons, you should choose your own values and make them as random as possible (see the sketch below). <br />
* It is recommended to remove the entries "jobs_path", "logs_path", and "sims_path"; by default they point to the wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
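<br />
One simple way to generate reasonably random values for "port" and "root" is Python's secrets module; the snippet below is only an illustration of how such values could be produced:<br />
<br />
 # Illustrative generation of a hard-to-guess port and URL root.<br />
 import secrets<br />
 port = secrets.randbelow(20000) + 20000      # port in the range 20000-39999<br />
 root = "/" + secrets.token_urlsafe(8) + "/"  # e.g. "/q2Xkz81PrQ/"<br />
 print('"port" : "%d",' % port)<br />
 print('"root" : "%s",' % root)<br />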
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py:<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can open your web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> page. This will show a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
The starting page shows general information about the cluster and provides the options of starting a new fire simulation using the ''Start a new fire'' button and of browsing existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, selecting ''Start a new fire'' takes you to the submission page. On this page you can specify: 1) a short description of the simulation; 2) the ignition location, by clicking on an interactive map or by entering the latitude-longitude coordinates in degrees; 3) the ignition time and the forecast length; and 4) the simulation profile, which defines the number of domains with their resolutions and sizes as well as the atmospheric boundary conditions data. Once all the simulation options are defined, scroll down to the end (next figure) and select the ''Ignite'' button. This automatically opens the monitoring page, where you can track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the top of the monitoring page, you will see a list of important information about the simulation (see figure below). Below that information, there is a list of steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): The process has not started yet and is waiting on other processes. All processes are initialized with this status.<br />
* Running (yellow): The process is currently running. A process switches from Waiting to Running when it starts.<br />
* Success (green): The process finished successfully. A process switches from Running to Success when it completes without errors.<br />
* Available (green): Part of the output is ready while the rest is still in progress. This status is only used by the Output process, because the visualization becomes available as soon as that process starts running.<br />
* Failed (red): The process finished with a failure. A process switches from Running to Failed when it ends with an error.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, the ''Visualization'' entry of the information section shows a link to the simulation on the web server generated using wrfxweb. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the overview of current jobs, which shows the list of submitted jobs and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguell
<hr />
<div>Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be psecific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas requests<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in wrxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://simplekml.readthedocs.org/en/latest MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running fuel moisture model, a new MesoWest user can be created in [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfxpy<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run real.exe in serial. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh the demo server without any password (only the passcode from the id_rsa key if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following key in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. By default they are defined to be in wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs which shows a list of jobs that are running and it allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4328Running WRF-SFIRE with real data in the WRFx system2020-11-11T20:57:08Z<p>Afarguell: /* WRFx: wrfxweb */</p>
<hr />
<div>Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it. Keep in mind that this is the file that contains landuse, elevation, soiltype data, etc for WRF (geogrid.exe to be psecific).<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib=3.2.2 flask pandas<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
Change to the directory where the wrfxpy repository has been created<br />
cd wrfxpy<br />
and in wrxpy folder<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://simplekml.readthedocs.org/en/latest MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running fuel moisture model, a new MesoWest user can be created in [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. '''This is a separate file from the static data downloaded for WRF.''' To get the static data for the fuel moisture model, go to wrfxpy and do:<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
This dataset is needed for the fuel moisture data assimilation system. The fuel moisture model run as a part of WRF-SFIRE doesn't need this dataset and uses data processed by WPS.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfxpy<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run real.exe in serial. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh the demo server without any password (only the passcode from the id_rsa key if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following key in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
<br />
Also, create a new simulations folder doing<br />
mkdir wrfxweb/fdds/simulations<br />
<br />
The next steps are going to be set in the desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. By default they are defined to be in wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the top of the monitoring page, you will see important information about the simulation (see figure below). Below this information, there is a list of steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): The process has not started yet and is waiting for other processes to finish. All processes are initialized with this status.<br />
* Running (yellow): The process is currently in progress. A process switches from Waiting to Running when it starts.<br />
* Success (green): The process finished successfully. A process switches from Running to Success when it completes without errors.<br />
* Available (green): Part of the output is ready while the rest is still being produced. This status is used only by the Output process, because the visualization becomes available as soon as that process starts running.<br />
* Failed (red): The process finished with a failure. A process switches from Running to Failed when it ends with an error.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the contents of the log file.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, the ''Visualization'' entry of the information section shows a link to the simulation on the web server generated using wrfxweb. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs overview, which shows a list of the jobs that are running and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4327Running WRF-SFIRE with real data in the WRFx system2020-10-22T22:38:47Z<p>Afarguell: /* Install necessary packages */</p>
<hr />
<div>GIS outputs of [https://github.com/openwfm/WRF-SFIRE WRF-SFIRE] and products of the [https://github.com/openwfm/wrfxpy WRFx] system provided to the [https://maps.disasters.nasa.gov NASA Disasters Portal], which is an Esri GIS server.<br />
<br />
* '''Guatemala Fires:''' <br />
** Provided: fire detections as pixel rectangles in a KML file and SVM fire perimeters in a KML file.<br />
** Products: [https://maps.disasters.nasa.gov/arcgis/home/webmap/viewer.html?webmap=c09e241b2c8d448a98735bad521aff8e Test App], [https://maps.disasters.nasa.gov/arcgis/home/item.html?id=c09e241b2c8d448a98735bad521aff8e Summary], and [https://maps.disasters.nasa.gov/arcgis/apps/webappviewer/index.html?id=48093253d2294a75bbf2ab0b1afc5cd3 Final App].<br />
<br />
[[File:Guatemala.png|500px|center]]<br />
<br />
* '''Alaska Fires:'''<br />
** Provided: fire detections as pixel rectangles in a KML file and an image of the pixels on Google Earth.<br />
** Products: [https://maps.disasters.nasa.gov/arcgis/home/item.html?id=6058f83a646f4cc5b97fba5db0f7eae5 Thumbnail Image].<br />
<br />
[[File:Alaska.png|500px|center]]<br />
<br />
* '''Pioneer Fire:'''<br />
** Provided: Prognostic variables from WRF-SFIRE as GeoTIFF files: PLUME_HEIGHT, PM25_INT, SMOKE_INT, WINDSPD, WINDVEC, FIRE_AREA, WINDSPD1000FT, and WINDVEC1000FT.<br />
** Products: [https://maps.disasters.nasa.gov/arcgis/home/webmap/viewer.html?webmap=f408c2eb059347418dd44897e018e9d3 Test App] and [https://maps.disasters.nasa.gov/arcgis/home/item.html?id=f408c2eb059347418dd44897e018e9d3 Summary].<br />
<br />
[[File:Pioneer.png|500px|center]]<br />
<br />
* '''Paraguay Fires:'''<br />
** Provided: fire detections as pixel rectangles in a KML file and SVM fire perimeters in a KML file.<br />
<br />
* '''California Fires 2020:'''<br />
** Provided: HTTPS access to a wrfxweb webpage containing only the California Fires 2020 products. <br />
** Products: [https://maps.disasters.nasa.gov/arcgis/apps/MapSeries/index.html?appid=ed73bea7b38a499280ff9cb597f54cb6 HTTPS successfully integrated into the NASA Disasters Portal].<br />
<br />
[[File:Cali_20.png|500px|center]]</div>
<hr />
<div>Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib flask<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone the wrfxpy repository and check out the angel branch<br />
 git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
 cd wrfxpy<br />
 git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
 cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
 "qsys": "key from clusters.json",<br />
 "wps_install_path": "/path/to/WPS",<br />
 "wrf_install_path": "/path/to/WRF",<br />
 "sys_install_path": "/path/to/wrfxpy",<br />
 "wps_geog_path" : "/path/to/WPS_GEOG",<br />
 "wget" : "/path/to/wget"<br />
<br />
Note that all these paths come from previous steps of this wiki except the wget path, which needs to be specified so that wrfxpy uses your preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json; here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
Then the file etc/qsub/speedy.sub should contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
 %(exec_path)s the path to the wrf.exe that should be executed<br />
 %(cwd)s the job working directory<br />
 %(task_id)s a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
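For example, on a cluster that uses SLURM instead of SGE, an analogous template might look roughly like the following. This is only an illustrative sketch, not a template shipped with wrfxpy, so adapt it to a submission script that is known to work on your site:<br />
 #!/bin/bash<br />
 #SBATCH --job-name=%(task_id)s<br />
 #SBATCH --chdir=%(cwd)s<br />
 #SBATCH --time=%(wall_time_hrs)d:00:00<br />
 #SBATCH --nodes=%(nodes)d<br />
 #SBATCH --ntasks-per-node=%(ppn)d<br />
 mpirun -np %(np)d %(exec_path)s<br />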
<br />
Note: wrfxpy already includes configurations for colibri, gross, kingspeak, and cheyenne.<br />
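Once your cluster entry exists, point wrfxpy to it through the "qsys" key in etc/conf.json; for the speedy example above, this would be<br />
 "qsys": "speedy",<br />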
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, some data needs to be accessed and downloaded using a token created for the user. For instance, to run the Fuel Moisture Model, one needs a token from a valid [https://mesowest.utah.edu MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these tokens are specified by creating the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of these capabilities is required, create a token on the corresponding page, then do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
To run the fuel moisture model, a new MesoWest user can be created at [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and entered in the etc/tokens.json file.<br />
<br />
To acquire satellite data, a new Earthdata user can be created at [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and entered in the etc/tokens.json file. Some data centers need to be accessed using the $HOME/.netrc file, so creating $HOME/.netrc is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
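Since this file contains your password in plain text, it is a good idea to restrict its permissions, for example<br />
 chmod 600 ~/.netrc<br />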
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have GeoTIFF files for elevation and fuel, you can specify their locations in etc/vtables/geo_vars.json. To do so,<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine automatically processes these files and converts them into geogrid files suitable for WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories should be interpolated using nearest neighbors, i.e., the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]], or you can simply use the GeoTIFF files included in the static dataset under WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running the fuel moisture model, static terrain data is needed. So, inside the wrfxpy directory do<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder with the static terrain data in it.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
 conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at all the prompts to accept the default values until the queuing system prompt, where you select the cluster you configured (speedy in this example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any errors.<br />
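While the forecast is running, you can follow its progress in the log, for example with<br />
 tail -f logs/experiment.log<br />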
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up with a valid "mesowest" token and the static data has been downloaded, you can run<br />
 ./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather station observations and estimate fuel moisture over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE built in serial mode in order to run real.exe serially, repeating the [[Setting_up_current_WRFx_system#Installation|previous steps]] but with the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails:<br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag, and repeat the compilation. If this does not fix the compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create a ~/.ssh directory (if you do not have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not have one) by doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
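The public key that you will need to provide in the next step is stored in ~/.ssh/id_rsa.pub; you can display it, for example, with<br />
 cat ~/.ssh/id_rsa.pub<br />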
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing:<br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh into the demo server without any password (only the passphrase of the id_rsa key, if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following key in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
<br />
The next keys are set in the etc/conf.json of your wrfxpy installation (generated in the previous section), not on the demo server.<br />
 <br />
Configure the following keys in etc/conf.json of that wrfxpy installation<br />
 "shuttle_ssh_key": "/path/to/id_rsa",<br />
 "shuttle_remote_user": "user_id",<br />
 "shuttle_remote_host": "demo.openwfm.org",<br />
 "shuttle_remote_root": "/path/to/remote/storage/directory",<br />
 "shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". With this, everything should be ready to send post-processed simulations to the visualization server.<br />
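For example, for the demo server account described above, a filled-in configuration might look like the following; all values are placeholders, so replace user_id, short_user_id, and the key path with your own<br />
 "shuttle_ssh_key": "/path/to/your/.ssh/id_rsa",<br />
 "shuttle_remote_user": "user_id",<br />
 "shuttle_remote_host": "demo.openwfm.org",<br />
 "shuttle_remote_root": "/home/user_id/wrfxweb/fdds/simulations",<br />
 "shuttle_lock_path": "/tmp/short_user_id"<br />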
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]], but this time, when simple_forecast.sh asks<br />
 Send variables to visualization server? [default=no]<br />
answer yes.<br />
<br />
Then, you should see the post-processed time steps of your simulation appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes:<br />
* The values of "host", "port", and "root" above are only examples; for security reasons, choose your own values and make them as random as possible (see the example after these notes).<br />
* It is recommended to remove the entries "jobs_path", "logs_path", and "sims_path"; by default they point to the wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
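For instance, one simple way to pick a reasonably random port number (here between 1024 and 65535) is<br />
 shuf -i 1024-65535 -n 1<br />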
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py by doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
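If you want wrfxctrl to keep running after you log out, you could start it in the background instead, for example<br />
 nohup python wrfxctrl.py >& wrfxctrl.log &<br />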
<br />
====Starting page====<br />
<br />
Now you can open your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar to the one below<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides the options of starting a new fire using the ''Start a new fire'' button and of browsing the existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will access the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or by specifying the latitude-longitude coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes as well as the atmospheric boundary condition data. Finally, once all the simulation options are defined, scroll down to the end (next figure) and select the ''Ignite'' button. This automatically opens the monitoring page, where you can track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): the process has not started yet and is waiting for other processes to finish. All processes are initialized with this status.<br />
* Running (yellow): the process is currently in progress. Processes switch from Waiting to Running when they start running.<br />
* Success (green): the process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): part of the output is ready while the rest is still being produced. This status is only used by the Output process, because the visualization is available as soon as the process starts running.<br />
* Failed (red): the process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb appears in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs page, which shows a list of the jobs that are running and allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguell
<hr />
<div>Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib flask<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
and<br />
git checkout angel<br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the queuing system, system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"qsys": "key from clusters.json",<br />
"wps_install_path": "/path/to/WPS",<br />
"wrf_install_path": "/path/to/WRF",<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG",<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki except the wget path, which needs to be specified to use a preferred version. To find the default wget,<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://simplekml.readthedocs.org/en/latest MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running fuel moisture model, a new MesoWest user can be created in [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. So, inside wrfxpy do<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfxpy<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run real.exe in serial. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh the demo server without any password (only the passcode from the id_rsa key if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following key in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
<br />
The next steps are going to be set in a desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host", "port", "root" are only examples but, for security reasons, you should choose different ones of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. By default they are defined to be in wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
====Overview page====<br />
<br />
From most of the previous pages, you can navigate to the current jobs which shows a list of the jobs that are running and it allows the user to cancel or delete any simulation that has run or is running.<br />
<br />
<br />
[[File:Overview.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=File:Overview.png&diff=4314File:Overview.png2020-08-17T20:33:48Z<p>Afarguell: </p>
<hr />
<div></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4308Running WRF-SFIRE with real data in the WRFx system2020-08-13T18:35:22Z<p>Afarguell: /* Submission page */</p>
<hr />
<div>Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib flask<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"wps_install_path": "/path/to/WPS"<br />
"wrf_install_path": "/path/to/WRF"<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG"<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki unless wget path which needs to be specified. If not version of wget is prefered<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://simplekml.readthedocs.org/en/latest MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running fuel moisture model, a new MesoWest user can be created in [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. So, inside wrfxpy do<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfxpy<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run real.exe in serial. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh the demo server without any password (only the passcode from the id_rsa key if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following key in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
<br />
The next steps are going to be set in a desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* The "host" and "port" entries above are only examples; for security reasons, you should choose your own values and make them as hard to guess as possible. <br />
* It is recommended to remove the "jobs_path", "logs_path", and "sims_path" entries; by default they point to the wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations. A minimal example configuration is sketched below.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py by doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar to the one below<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides the options of starting a new fire using the ''Start a new fire'' button and browsing the existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, selecting ''Start a new fire'' takes you to the submission page. On this page, you can specify: 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or by entering the lat-lon coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes as well as the atmospheric boundary conditions data. Once all the simulation options are defined, scroll down to the end (next figure) and select the ''Ignite'' button. This automatically opens the monitoring page, where you can track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|402px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a summary of important information about the simulation (see figure below). After that, there is a list of steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): Indicates that the process has not started yet and is waiting for other processes to complete. All processes are initialized with this status.<br />
* Running (yellow): Indicates that the process is currently in progress. Processes switch from Waiting to Running when they start running.<br />
* Success (green): Indicates that the process finished successfully. Processes switch from Running to Success when they finish without errors.<br />
* Available (green): Indicates that part of the output is already available while the rest is still being produced. This status is only used by the Output process, because the visualization becomes available as soon as the process starts running.<br />
* Failed (red): Indicates that the process finished with a failure. Processes switch from Running to Failed when they terminate with an error.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb appears in the ''Visualization'' element of the information section. On this page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4307Running WRF-SFIRE with real data in the WRFx system2020-08-13T18:35:05Z<p>Afarguell: /* Submission page */</p>
<hr />
<div>Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib flask<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"wps_install_path": "/path/to/WPS"<br />
"wrf_install_path": "/path/to/WRF"<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG"<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki unless wget path which needs to be specified. If not version of wget is prefered<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://simplekml.readthedocs.org/en/latest MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running fuel moisture model, a new MesoWest user can be created in [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. So, inside wrfxpy do<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfxpy<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run real.exe in serial. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh the demo server without any password (only the passcode from the id_rsa key if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following key in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
<br />
The next steps are going to be set in a desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host" and "port" are only examples but, for security reasons, you should choose a different one of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. By default they are defined to be in wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|401px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=File:Submit3.png&diff=4306File:Submit3.png2020-08-13T18:34:26Z<p>Afarguell: Afarguell uploaded a new version of File:Submit3.png</p>
<hr />
<div></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=File:Submit3.png&diff=4305File:Submit3.png2020-08-13T18:33:02Z<p>Afarguell: Afarguell uploaded a new version of File:Submit3.png</p>
<hr />
<div></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=File:Submit3.png&diff=4304File:Submit3.png2020-08-13T18:27:40Z<p>Afarguell: Afarguell uploaded a new version of File:Submit3.png</p>
<hr />
<div></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4303Running WRF-SFIRE with real data in the WRFx system2020-08-13T18:20:25Z<p>Afarguell: /* Monitoring page */</p>
<hr />
<div>Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib flask<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"wps_install_path": "/path/to/WPS"<br />
"wrf_install_path": "/path/to/WRF"<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG"<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki unless wget path which needs to be specified. If not version of wget is prefered<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
And then the file etc/qsub/speedy.sub should contain a submission script template, that makes use of the following variables supplied by wrfxpy based on job configuration:<br />
<br />
%(nodes)d the number of nodes requested<br />
%(ppn)d the number of processors per node requested<br />
%(wall_time_hrs)d the number of hours requested<br />
%(exec_path)d the path to the wrf.exe that should be executed<br />
%(cwd)d the job working directory<br />
%(task_id)d a task id that can be used to identify the job<br />
%(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
<br />
Note: wrfxpy has already configuration for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, sometimes the data needs to be accessed and downloaded using a specific token created for the user. For instance, in the case of running the Fuel Moisture Model, one needs a token from a valid [https://simplekml.readthedocs.org/en/latest MesoWest] user to download data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified with the creation of the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of the previous capabilities are required, create a token from the specific page, do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
For running fuel moisture model, a new MesoWest user can be created in [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and replaced in the etc/tokens.json file.<br />
<br />
For acquiring satellite data, a new Earthdata user can be created in [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and replaced in the etc/tokens.json file. There are some data centers that need to be accessed using the $HOME/.netrc file. Therefore, creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs to use high-resolution elevation and fuel category data. If you have a GeoTIFF file for elevation and fuel, you can specify the location of these files using etc/vtables/geo_vars.json. So, you can do<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute path to your GeoTIFF files. The routine is going to automatically process these files and convert them into geogrid files to fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom on file src/geo/var_wisdom.py to specify the mapping. By default, the categories form the LANDFIRE dataset are going to be mapped according to 13 Rothermel categories. You can also specify what categories you want to interpolate using nearest neighbors. Therefore, the ones that you cannot map to 13 Rothermel categories. Finally, you can specify what categories should be no burnable using category 14.<br />
<br />
To get GeoTIFF files from CONUS, you can use the LANDFIRE dataset following the steps on [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]]. Or you can just use the GeoTIFF files included in the static dataset WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
<br />
For running fuel moisture model, terrain static data is needed. So, inside wrfxpy do<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
this will untar a static folder with the static terrain on it.<br />
<br />
===wrxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
conda activate wrfxpy<br />
./simple_forecast.sh<br />
Press enter at all the steps to set everything to the default values until the queuing system, then we select the cluster we configure (speedy in the example).<br />
<br />
This will generate a job under jobs/experiment.json (or the name of the experiment that we chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
Show generate the experiment in the path specified in the etc/conf.json file and under a folder using the experiment name. The file logs/experiment.log should show the whole process step by step without any error.<br />
<br />
====Fuel moisture model====<br />
If tokens.json is set, "mesowest" token is provided, and static data is gotten, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download all the necessary weather stations and estimate the fuel moisture model in the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when tries to execute ./real.exe. This happens on systems that do not allow executing MPI binary from the command line. We do not run real.exe by mpirun because mpirun on head node may not be allowed. Then, one needs to provide an installation of WRF-SFIRE in serial mode in order to run real.exe in serial. In that case, we want to repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] but using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Options 13 (INTEL ifort/icc serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc in CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
Note: This time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs parallel for any reason, you can proceed to compile em_fire the same way.<br />
<br />
Then, we need to add this path in etc/conf.json file in wrfxpy, so<br />
cd ../wrfxpy<br />
and add to etc/conf.json file key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem, if not check log files from previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create ~/.ssh directory (if you have not one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you have not one) doing<br />
ssh-keygen<br />
and following all the steps (you can select defaults, so always press enter). <br />
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account in demo server providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh the demo server without any password (only the passcode from the id_rsa key if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxweb repository in the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following key in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
<br />
The next steps are going to be set in a desired installation of wrfxpy (generated in the previous section). <br />
<br />
Configure the following keys in etc/conf.json in any wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]] but when simple forecast asks<br />
Send variables to visualization server? [default=no]<br />
you will answer yes.<br />
<br />
Then, you should see your simulation post-processed time steps appearing in real-time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can be also run and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone wrfxctrl repository in your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy template to create new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* Entries "host" and "port" are only examples but, for security reasons, you should choose a different one of your own and as random as possible. <br />
* Entries "jobs_path", "logs_path", and "sims_path" are recommended to be removed. By default they are defined to be in wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate conda environment and run wrfxctrl.py doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
<br />
====Starting page====<br />
<br />
Now you can go to your favorite internet browser and navigate to <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show you a screen similar than that<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information of the cluster and provides an option of starting a new fire using ''Start a new fire'' button and browsing the existent jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, if you select ''Start a new fire'', you will be able to access the submission page. In this page, you can specify 1) a short description of the simulation, 2) the ignition location clicking in an interactive map or specifying the degree lat-lon coordinates, 3) the ignition time and the forecast length, 4) the simulation profile which defines the number of domains with their resolutions and sizes and the atmospheric boundary conditions data. Finally, once you have all the simulation options defined, you can scroll down to the end (next figure) and select the ''Ignite'' button. This will automatically show the monitor page where you will be able to track the progress of the simulation. See the image below to see an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the beginning of the monitoring page, you will see a list of important information about the simulation (see figure below). After the information, there is a list of steps with their current status. The different possible statuses are: <br />
<br />
* Waiting (grey): Represent that the process has not started and needs to wait for the other process. All the processes are initialized with this status.<br />
* Running (yellow): Represent that the process is still running so in progress. All processes switch their status from Waiting to Running when they start running.<br />
* Success (green): Represent that the process finished well. All processes switch their status from Running to Success when they finish running successfully.<br />
* Available (green): Represent that some part is done and some other is still in progress. This status is only used by the Output process because the visualization is available once the process starts running.<br />
* Failed (red): Represent that the process finished with a failure. All processes switch their status from Running to Failed when they finish running with a failure.<br />
<br />
In the monitor page, the log file can be also retrieved clicking the ''Retrieve log'' button at the end of the page, which provides a scroll down window with the log file information.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, in the ''Visualization'' element of the information section will appear a link to the simulation in the web server generated using wrfxweb. In this page, one can interactively plot the results in real-time while the simulation is still running<br />
<br />
<br />
[[File:Visualization.png|500px|center]]<br />
<br style="clear: both" /></div>Afarguellhttp://wiki.openwfm.org/mediawiki/index.php?title=Running_WRF-SFIRE_with_real_data_in_the_WRFx_system&diff=4302Running WRF-SFIRE with real data in the WRFx system2020-08-13T18:20:01Z<p>Afarguell: /* Monitoring page */</p>
<hr />
<div>Instructions to set up the whole WRFx system right now using the last version of all the components with a couple of working examples. WRFx consists of a Fortran coupled atmosphere-fire model [https://github.com/openwfm/wrf-fire WRF-SFIRE], a python automatic HPC system [https://github.com/openwfm/wrfxpy wrfxpy], a visualization web interface [https://github.com/openwfm/wrfxweb wrfxweb], and a simulation web interface [https://github.com/openwfm/wrfxctrl wrfxctrl].<br />
<br />
=WRF-SFIRE model=<br />
<br />
A coupled weather-fire forecasting model built on top of Weather Research and Forecasting (WRF).<br />
<br />
==WRF-SFIRE: Requirements and environment==<br />
<br />
===Install required libraries===<br />
* General requirements:<br />
** C-shell<br />
** Traditional UNIX utilities: zip, tar, make, etc.<br />
* WRF-SFIRE requirements:<br />
** Fortran and C compilers (Intel recomended)<br />
** MPI (compiled using the same compiler, usually comes with the system)<br />
** NetCDF libraries (compiled using the same compiler)<br />
* WPS requirements:<br />
** zlib compression library (zlib)<br />
** PNG reference library (libpng)<br />
** JasPer compression library<br />
** libtiff and geotiff libraries <br />
<br />
See https://www2.mmm.ucar.edu/wrf/users/prepare_for_compilation.html for the required versions of the libraries.<br />
<br />
===Set environment===<br />
Set specific libraries installed <br />
setenv NETCDF /path/to/netcdf<br />
setenv JASPERLIB /path/to/jasper/lib<br />
setenv JASPERINC /path/to/jasper/include<br />
setenv LIBTIFF /path/to/libtiff<br />
setenv GEOTIFF /path/to/libtiff<br />
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1<br />
Should your executables fail on unresolved libraries, also add all the library folders into your LD_LIBRARY_PATH:<br />
setenv LD_LIBRARY_PATH /path/to/netcdf/lib:/path/to/jasper/lib:/path/to/libtiff/lib:/path/to/geotiff/lib:$LD_LIBRARY_PATH<br />
<br />
==WRF-SFIRE: Installation==<br />
===Clone github repositories===<br />
Clone WRF-SFIRE and WPS github repositories<br />
git clone <nowiki>https://github.com/openwfm/WRF-SFIRE</nowiki><br />
git clone <nowiki>https://github.com/openwfm/WPS</nowiki><br />
<br />
===Configure WRF-SFIRE===<br />
cd WRF-SFIRE<br />
./configure<br />
<br />
Options 15 (INTEL ifort/icc dmpar) and 1 (simple nesting) if available<br />
<br />
===Compile WRF-SFIRE===<br />
Compile em_real<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
<br />
If any compilation error, compile em_fire<br />
./compile em_fire >& compile_em_fire.log & <br />
grep Error compile_em_fire.log<br />
<br />
If any of the previous step fails: <br />
./clean -a<br />
./configure<br />
Add to configure.wrf -nostdinc at the end of the CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Configure WPS===<br />
cd ../WPS<br />
./configure<br />
<br />
Option 17 (Intel compiler (serial)) if available<br />
<br />
===Compile WPS===<br />
./compile >& compile_wps.log &<br />
grep Error compile_wps.log<br />
<br />
and<br />
ls -l *.exe<br />
<br />
should contain geogrid.exe, metgrid.exe, and ungrib.exe. If not<br />
./clean -a<br />
./configure<br />
Add to configure.wps -nostdinc at the end of CPP flag, and repeat compilation. If this does not solve compilation, look for issues in your environment.<br />
<br />
===Get static data===<br />
Get tar file with the static data and untar it<br />
cd ..<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/WPS_GEOG.tbz</nowiki><br />
tar xvfj WPS_GEOG.tbz<br />
<br />
=WRFx system=<br />
<br />
==WRFx: Requirements and environment==<br />
<br />
===Install Anaconda distribution===<br />
Download and install the Python 3 [https://repo.continuum.io/archive Anaconda Python] distribution for your platform. We recommend an installation into the users' home directory.<br />
wget <nowiki>https://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh</nowiki><br />
chmod +x Anaconda3-2020.02-Linux-x86_64.sh<br />
./Anaconda3-2020.02-Linux-x86_64.sh<br />
<br />
===Install necessary packages===<br />
We recommend the creation of an environment. Install pre-requisites:<br />
conda update -n base -c defaults conda<br />
conda create -n wrfx python=3 gdal netcdf4 pyproj paramiko dill h5py psutil proj4 pytz scipy matplotlib flask<br />
conda activate wrfx<br />
conda install -c conda-forge simplekml pygrib f90nml pyhdf xmltodict basemap rasterio<br />
pip install MesoPy python-cmr<br />
<br />
Note that conda and pip are package managers available in the Anaconda Python distribution.<br />
<br />
===Set environment===<br />
If you created the wrfx environment as shown above, check that PROJ_LIB path is pointing to<br />
$HOME/anaconda3/envs/wrfx/share/proj<br />
If not, you can try setting it to<br />
setenv PROJ_LIB "$HOME/anaconda3/share/proj"<br />
<br />
==WRFx: wrfxpy==<br />
<br />
WRF-SFIRE forecasting and data assimilation in python using an HPC environment.<br />
<br />
===wrfxpy: Installation===<br />
====Clone github repository====<br />
Clone wrfxpy repository<br />
git clone <nowiki>https://github.com/openwfm/wrfxpy</nowiki><br />
<br />
====General configuration====<br />
An etc/conf.json file must be created with the keys discussed below. A template file etc/conf.json.initial is provided as a starting point.<br />
<br />
cd wrfxpy<br />
cp etc/conf.json.initial etc/conf.json<br />
<br />
Configure the system directories, WPS/WRF-SFIRE locations, and workspace locations by editing the following keys in etc/conf.json:<br />
"wps_install_path": "/path/to/WPS"<br />
"wrf_install_path": "/path/to/WRF"<br />
"sys_install_path": "/path/to/wrfxpy"<br />
"wps_geog_path" : "/path/to/WPS_GEOG"<br />
"wget" : /path/to/wget"<br />
<br />
Note that all these paths are created from previous steps of this wiki unless wget path which needs to be specified. If not version of wget is prefered<br />
which wget<br />
<br />
====Cluster configuration====<br />
<br />
Next, wrfxpy needs to know how jobs are submitted on your cluster. Create an entry for your cluster in etc/clusters.json, here we use speedy as an example:<br />
<br />
{<br />
"speedy" : {<br />
"qsub_cmd" : "qsub",<br />
"qdel_cmd" : "qdel",<br />
"qstat_cmd" : "qstat",<br />
"qstat_arg" : "",<br />
"qsub_delimiter" : ".",<br />
"qsub_job_num_index" : 0,<br />
"qsub_script" : "etc/qsub/speedy.sub"<br />
}<br />
}<br />
<br />
The file etc/qsub/speedy.sub should then contain a submission script template that makes use of the following variables, supplied by wrfxpy based on the job configuration:<br />
<br />
 %(nodes)d the number of nodes requested<br />
 %(ppn)d the number of processors per node requested<br />
 %(wall_time_hrs)d the number of hours requested<br />
 %(exec_path)s the path to the wrf.exe that should be executed<br />
 %(cwd)s the job working directory<br />
 %(task_id)s a task id that can be used to identify the job<br />
 %(np)d the total number of processes requested, equals nodes x ppn<br />
<br />
Note that not all keys need to be used, as shown in the speedy example:<br />
<br />
#$ -S /bin/bash<br />
#$ -N %(task_id)s<br />
#$ -wd %(cwd)s<br />
#$ -l h_rt=%(wall_time_hrs)d:00:00<br />
#$ -pe mpich %(np)d<br />
mpirun_rsh -np %(np)d -hostfile $TMPDIR/machines %(exec_path)s<br />
<br />
The script template should be derived from a working submission script.<br />
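As a further illustration, a hypothetical template for a SLURM-based cluster might look like the sketch below; in that case the commands in etc/clusters.json would point to sbatch, scancel, and squeue, and the template must still be adapted from a submission script known to work on your system (partition, module loads, MPI launcher, etc.):<br />
 #!/bin/bash<br />
 #SBATCH --job-name=%(task_id)s<br />
 #SBATCH --chdir=%(cwd)s<br />
 #SBATCH --nodes=%(nodes)d<br />
 #SBATCH --ntasks-per-node=%(ppn)d<br />
 #SBATCH --time=%(wall_time_hrs)d:00:00<br />
 mpirun -np %(np)d %(exec_path)s<br />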
<br />
Note: wrfxpy already includes configurations for colibri, gross, kingspeak, and cheyenne.<br />
<br />
====Tokens configuration====<br />
<br />
When running wrfxpy, some data needs to be accessed and downloaded using a token created for the user. For instance, to run the fuel moisture model, one needs a token from a valid [https://mesowest.utah.edu MesoWest] user to download station data automatically. Also, when downloading satellite data, one needs a token from an [https://earthdata.nasa.gov/ Earthdata] user. All of these can be specified by creating the file etc/tokens.json from the template etc/tokens.json.initial containing:<br />
<br />
{<br />
"mesowest" : "token-from-mesowest",<br />
"appkey" : "token-from-earthdata"<br />
}<br />
<br />
So, if any of these capabilities is required, create a token on the corresponding page, then do<br />
<br />
cp etc/tokens.json.initial etc/tokens.json<br />
<br />
and edit the file to include your previously created token.<br />
<br />
To run the fuel moisture model, a new MesoWest user can be created at [https://mesowest.utah.edu/cgi-bin/droman/my_join.cgi?came_from=http://mesowest.utah.edu MesoWest New User]. Then, the token can be acquired and entered in the etc/tokens.json file.<br />
<br />
To acquire satellite data, a new Earthdata user can be created at [https://urs.earthdata.nasa.gov/users/new Earthdata New User]. Then, the token can be acquired and entered in the etc/tokens.json file. Some data centers need to be accessed using the $HOME/.netrc file, so creating the $HOME/.netrc file is recommended as follows<br />
<br />
machine urs.earthdata.nasa.gov<br />
login your_earthdata_id<br />
password your_earthdata_password<br />
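Since the .netrc file contains your password in plain text, it is good practice to make it readable only by you (some tools refuse a world-readable .netrc):<br />
 chmod 600 $HOME/.netrc<br />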
<br />
====Get static data====<br />
<br />
When running WRF-SFIRE simulations, one needs high-resolution elevation and fuel category data. If you have a GeoTIFF file for the elevation and one for the fuel categories, you can specify the location of these files using etc/vtables/geo_vars.json. To do so, run<br />
<br />
cp etc/vtables/geo_vars.json.initial etc/vtables/geo_vars.json<br />
<br />
and add the absolute paths to your GeoTIFF files. The routine will automatically process these files and convert them into geogrid files that fit WPS. If you need to map the categories from the GeoTIFF files to the 13 Rothermel categories, you can modify the dictionary _var_wisdom in the file src/geo/var_wisdom.py to specify the mapping. By default, the categories from the LANDFIRE dataset are mapped to the 13 Rothermel categories. You can also specify which categories should be interpolated using nearest neighbors, i.e. the ones that cannot be mapped to the 13 Rothermel categories. Finally, you can specify which categories should be non-burnable using category 14.<br />
<br />
To get GeoTIFF files for CONUS, you can use the LANDFIRE dataset following the steps in [[How_to_run_WRF-SFIRE_with_real_data#Obtaining_data_for_geogrid]], or you can use the GeoTIFF files included in the static dataset (WPS_GEOG/fuel_cat_fire and WPS_GEOG/topo_fire) by specifying in etc/vtables/geo_vars.json<br />
<br />
{<br />
"NFUEL_CAT": "/path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif",<br />
"ZSF": "/path/to/WPS_GEOG/topo_fire/ned_data.tif"<br />
}<br />
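Before running a simulation, you can check that the GeoTIFF files are readable and carry georeferencing information with gdalinfo, which comes with the gdal conda package installed above (the paths below are the example paths from the snippet above):<br />
 gdalinfo /path/to/WPS_GEOG/fuel_cat_fire/lf_data.tif<br />
 gdalinfo /path/to/WPS_GEOG/topo_fire/ned_data.tif<br />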
<br />
To run the fuel moisture model, static terrain data is also needed. Inside wrfxpy, do<br />
wget <nowiki>http://math.ucdenver.edu/~farguella/tmp/static.tbz</nowiki><br />
tar xvfj static.tbz<br />
This will untar a static folder containing the static terrain data.<br />
<br />
===wrfxpy: Testing===<br />
====Simple forecast====<br />
At this point, one should be able to run wrfxpy with a simple example:<br />
 conda activate wrfx<br />
./simple_forecast.sh<br />
Press enter at each step to accept the default values until the queuing system prompt, where you select the cluster you configured (speedy in this example).<br />
<br />
This will generate a job file under jobs/experiment.json (or the name of the experiment that you chose).<br />
<br />
Then, we can run our first forecast by<br />
./forecast.sh jobs/experiment.json >& logs/experiment.log &<br />
<br />
This should generate the experiment in the path specified in the etc/conf.json file, under a folder named after the experiment. The file logs/experiment.log should show the whole process step by step without any errors.<br />
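While the forecast is running, you can follow its progress with, for example,<br />
 tail -f logs/experiment.log<br />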
<br />
====Fuel moisture model====<br />
If etc/tokens.json is set up, the "mesowest" token is provided, and the static data has been downloaded, you can run<br />
./rtma_cycler.sh anything >& logs/rtma_cycler.log &<br />
which will download data from all the necessary weather stations and run the fuel moisture model over the whole continental US.<br />
<br />
===wrfxpy: Possible errors===<br />
====real.exe fails====<br />
<br />
Depending on the cluster, wrfxpy could fail when it tries to execute ./real.exe. This happens on systems that do not allow executing an MPI binary from the command line. We do not run real.exe through mpirun because mpirun may not be allowed on the head node. In that case, one needs to provide an installation of WRF-SFIRE compiled in serial mode so that real.exe can be run serially. Repeat the [[Setting_up_current_WRFx_system#Installation|previous steps]] using the serial version of WRF-SFIRE<br />
cd ..<br />
git clone <nowiki>https://github.com/openwfm/wrf-fire wrf-fire-serial</nowiki><br />
cd wrf-fire-serial/wrfv2_fire<br />
./configure<br />
Choose options 13 (INTEL ifort/icc, serial) and 0 (no nesting)<br />
<br />
./compile em_real >& compile_em_real.log & <br />
grep Error compile_em_real.log<br />
Again, if any of the previous steps fails: <br />
./clean -a<br />
./configure<br />
Add -nostdinc at the end of the CPP flag in configure.wrf and repeat the compilation. If this does not resolve the compilation errors, look for issues in your environment.<br />
<br />
Note: this time, we only need to compile em_real because we only need real.exe. However, if you want to test serial vs. parallel for any reason, you can also compile em_fire in the same way.<br />
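To confirm that the serial build produced the executable needed, you can check the main directory, where WRF places the compiled binaries:<br />
 ls -l main/real.exe<br />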
<br />
Then, we need to add this path to the etc/conf.json file in wrfxpy, so<br />
 cd ../wrfxpy<br />
and add to the etc/conf.json file the key<br />
"wrf_serial_install_path": "/path/to/WRF/serial"<br />
<br />
This should solve the problem; if not, check the log files from the previous compilations.<br />
<br />
==WRFx: wrfxweb==<br />
<br />
Web-based visualization system for imagery generated by wrfxpy.<br />
<br />
===wrfxweb: Account creation===<br />
<br />
Create the ~/.ssh directory (if you do not already have one)<br />
mkdir ~/.ssh<br />
cd ~/.ssh<br />
<br />
Create an id_rsa key (if you do not already have one) by doing<br />
ssh-keygen<br />
and following all the steps (you can accept the defaults, so just press enter). <br />
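The public key that you will need to send in the next step can be displayed (for copying into the email) with<br />
 cat ~/.ssh/id_rsa.pub<br />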
<br />
Send an email to Jan Mandel (jan.mandel@gmail.com) asking for the creation of an account on the demo server, providing: <br />
* Purpose of your request (including information about you)<br />
* User id you would like (user_id)<br />
* Short user id you would like (short_user_id)<br />
* Public key (~/.ssh/id_rsa.pub file previously created)<br />
<br />
After that, you will receive an answer from Jan and you will be able to ssh to the demo server without any password (only the passphrase from the id_rsa key, if you set one).<br />
<br />
===wrfxweb: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxweb repository on the demo server<br />
ssh user_id@demo.openwfm.org<br />
git clone <nowiki>https://github.com/openwfm/wrfxweb.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxweb<br />
cp etc/conf.json.template etc/conf.json<br />
<br />
Configure the following key in etc/conf.json:<br />
"url_root": "<nowiki>http://demo.openwfm.org/short_user_id</nowiki>"<br />
<br />
The next keys are set in a desired wrfxpy installation (generated in the previous section), from which post-processed simulations will be uploaded to the visualization server.<br />
<br />
Configure the following keys in etc/conf.json of that wrfxpy installation<br />
"shuttle_ssh_key": "/path/to/id_rsa"<br />
"shuttle_remote_user": "user_id"<br />
"shuttle_remote_host": "demo.openwfm.org"<br />
"shuttle_remote_root": "/path/to/remote/storage/directory"<br />
"shuttle_lock_path": "/tmp/short_user_id"<br />
<br />
The "shuttle_remote_root" key is usually defined as "/home/user_id/wrfxweb/fdds/simulations". So, everything should be ready to send post-processing simulations into the visualization server.<br />
<br />
===wrfxweb: Testing===<br />
====Simple forecast====<br />
Finally, one can repeat the previous [[Setting_up_current_WRFx_system#Simple_forecast|simple forecast test]], but this time, when simple forecast asks<br />
 Send variables to visualization server? [default=no]<br />
answer yes.<br />
<br />
Then, you should see your simulation's post-processed time steps appearing in real time on http://demo.openwfm.org under your short_user_id.<br />
<br />
====Fuel moisture model====<br />
The [[Setting_up_current_WRFx_system#Fuel_moisture_model|fuel moisture model test]] can also be run, and a special visualization will appear on http://demo.openwfm.org under your short_user_id.<br />
<br />
==WRFx: wrfxctrl==<br />
<br />
A website that enables users to submit jobs to the wrfxpy framework for fire simulation.<br />
<br />
===wrfxctrl: Installation===<br />
<br />
====Clone github repository====<br />
Clone the wrfxctrl repository on your cluster<br />
git clone <nowiki>https://github.com/openwfm/wrfxctrl.git</nowiki><br />
<br />
====Configuration====<br />
<br />
Change directory and copy the template to create a new etc/conf.json<br />
cd wrfxctrl<br />
cp etc/conf-template.json etc/conf.json<br />
<br />
Configure the following keys in etc/conf.json<br />
"host" : "127.1.2.3",<br />
"port" : "5050",<br />
"root" : "/short_user_id/",<br />
"wrfxweb_url" : "<nowiki>http://demo.openwfm.org/short_user_id/</nowiki>",<br />
"wrfxpy_path" : "/path/to/wrfxpy",<br />
"jobs_path" : "/path/to/jobs",<br />
"logs_path" : "/path/to/logs",<br />
"sims_path" : "/path/to/sims"<br />
Notes: <br />
* The entries "host" and "port" above are only examples; for security reasons, you should choose your own values, as random as possible (see below for one way to pick a random port). <br />
* It is recommended to remove the entries "jobs_path", "logs_path", and "sims_path"; by default they point to the wrfxctrl directories wrfxctrl/jobs, wrfxctrl/logs, and wrfxctrl/simulations.<br />
<br />
===wrfxctrl: Testing===<br />
<br />
====Running wrfxctrl====<br />
<br />
Activate the conda environment and run wrfxctrl.py by doing<br />
<br />
conda activate wrfx<br />
python wrfxctrl.py <br />
<br />
This will show a message similar to <br />
<br />
Welcome page is <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki><br />
* Serving Flask app "wrfxctrl" (lazy loading)<br />
* Environment: production<br />
WARNING: This is a development server. Do not use it in a production deployment.<br />
Use a production WSGI server instead.<br />
* Debug mode: off<br />
INFO:werkzeug: * Running on <nowiki>http://127.1.2.3:5050/</nowiki> (Press CTRL+C to quit)<br />
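The command above runs the Flask development server in the foreground and stops when you log out. If you want it to keep running, one common (not wrfxctrl-specific) option is to start it with nohup, for example<br />
 nohup python wrfxctrl.py >& wrfxctrl.log &<br />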
<br />
====Starting page====<br />
<br />
Now you can go to your favorite web browser and navigate to the <nowiki>http://127.1.2.3:5050/short_user_id/start</nowiki> webpage. This will show a screen similar to the one below.<br />
<br />
[[File:Start.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
This starting page shows general information about the cluster and provides options to start a new fire using the ''Start a new fire'' button and to browse existing jobs using the ''Show current jobs'' button.<br />
<br />
====Submission page====<br />
<br />
From the previous page, selecting ''Start a new fire'' takes you to the submission page. On this page, you can specify 1) a short description of the simulation, 2) the ignition location, by clicking on an interactive map or entering the latitude-longitude coordinates in degrees, 3) the ignition time and the forecast length, and 4) the simulation profile, which defines the number of domains with their resolutions and sizes and the atmospheric boundary condition data. Once all the simulation options are defined, scroll down to the end (next figure) and select the ''Ignite'' button. This automatically shows the monitoring page, where you can track the progress of the simulation. See the images below for an example of a simulation submission.<br />
<br />
<br />
[[File:Submit1.png|400px|center]][[File:Submit2.png|400px|center]][[File:Submit3.png|400px|center]]<br />
<br style="clear: both" /><br />
<br />
====Monitoring page====<br />
<br />
At the top of the monitoring page, you will see a list of important information about the simulation (see figure below). Below this information, there is a list of steps with their current status. The possible statuses are: <br />
<br />
* Waiting (grey): the process has not started yet and is waiting for other processes. All processes are initialized with this status.<br />
* Running (yellow): the process is currently running, i.e. in progress. Processes switch from Waiting to Running when they start.<br />
* Success (green): the process finished successfully. Processes switch from Running to Success when they complete without errors.<br />
* Available (green): part of the result is ready while the rest is still in progress. This status is only used by the Output process, because the visualization becomes available as soon as that process starts running.<br />
* Failed (red): the process finished with a failure. Processes switch from Running to Failed when they fail.<br />
<br />
On the monitoring page, the log file can also be retrieved by clicking the ''Retrieve log'' button at the end of the page, which opens a scrollable window with the log file contents.<br />
<br />
<br />
[[File:Monitor1.png|502px|center]][[File:Monitor2.png|500px|center]][[File:Monitor3.png|500px|center]]<br />
<br style="clear: both" /><br />
<br />
Finally, once the Output process becomes Available, a link to the simulation on the web server generated using wrfxweb will appear in the ''Visualization'' element of the information section. On that page, one can interactively plot the results in real time while the simulation is still running.<br />
<br />
<br />
[[File:Visualization|500px|center]]<br />
<br style="clear: both" /></div>Afarguell