How to create a fire portal compute server
The fire portal compute server is the backend of [1] and is responsible for the computational tasks that generate the fire outputs. This page describes how to install all the necessary software on a fresh install of Ubuntu Server 12.04. Installing on a different operating system is also possible, but you will need to determine the equivalent package names for that system's package manager. In general, the following software is necessary to run the compute server (a quick sanity-check sketch follows the list):
- Tools to build WRF
  - Fortran compiler
  - make
  - m4
  - tcsh
  - perl
  - netCDF development libraries
  - GeoTIFF development libraries
  - JasPer development libraries
  - PNG development libraries
- Tools for running the compute server
  - Python version 2.6 or 2.7 with development libraries
  - git
  - OpenSSH server and client
  - GDAL with Python support
- Python modules (most can be found in the package manager or installed with easy_install/pip)
  - netCDF4
  - GitPython
  - percache
  - mechanize
  - BeautifulSoup
  - simplekml
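If you want a quick check that the main command-line prerequisites are on your PATH before continuing, a minimal sketch along the following lines can be used (assuming a bash shell; the tool list here is only illustrative). The apt-get command in the next section installs the full set.
# Quick sanity check: verify the main build tools are available on PATH.
for tool in gfortran make m4 tcsh perl git python; do
    command -v "$tool" >/dev/null 2>&1 && echo "$tool: found" || echo "$tool: NOT FOUND"
done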
OS and software installation
For the remainder of this guide, it is assumed that you have installed Ubuntu Server 12.04 with the user name wildfire. Once you are logged in to a terminal, it is recommended that you do a system update and restart.
sudo apt-get update
sudo apt-get dist-upgrade
sudo reboot
Now you need to install required software from the apt repository.
sudo apt-get install python-pip gfortran libnetcdf-dev netcdf-bin \
    libhdf5-serial-dev python-dev python-matplotlib \
    python-matplotlib-data tcsh git build-essential m4 \
    libcloog-ppl10 openssh-server libjasper-dev libpng-dev \
    libgeotiff-dev gdal-bin python-gdal
Next, install python modules available in pip.
sudo pip install netCDF4 GitPython percache mechanize BeautifulSoup simplekml
Set the ssh server to start by default.
sudo update-rc.d ssh defaults
Add environment variables to the shell rc file.
echo 'export NETCDF=/usr' >> ~/.bashrc
echo 'export LIBTIFF=/usr' >> ~/.bashrc
echo 'export GEOTIFF=/usr' >> ~/.bashrc
Finally, reboot the computer.
sudo reboot
WRF and server code checkout and configuration
Check out the software from our git repository.
git clone repo.openwfm.org:/home/git/wrf-fire.git
git clone repo.openwfm.org:/home/git/WRF-GoogleEarth.git
Now, you need to compile WRF and WPS to generate a template run directory that the compute server will use to execute the simulation.
cd wrf-fire/wrfv2_fire
echo '9 1' | ./configure
./compile em_real
cd ../WPS
echo 6 | ./configure
./compile
cd ..
Now, generate the template tarball firesim.tar in ~/wrf-fire.
~/WRF-GoogleEarth/computeserver/makeTemplate.sh
Next, you will need to create a file in the home directory called .globalConfig.txt. This file contains basic information about the computer setup that the server will use to execute simulations. A template of this file can be found in ~/WRF-GoogleEarth/computeserver/globalConfig.txt. For this setup, you can paste the following into this file:
[server]
mainDir=/home/wildfire/run
repoDir=/home/wildfire/wrf-fire
wrfConfig=%(repoDir)s/wrfv2_fire/configure.wrf
wpsConfig=%(repoDir)s/WPS/configure.wps
netCDFpath=/usr
wpsData=/home/wildfire/geog
ldPath=
templateDir=/home/wildfire/wrf-fire/firesim.tar

[cache]
cachefile=/home/wildfire/cache/cache.db
livesync=True
filestore=/home/wildfire/cache
Finally, create directories that the server will use for execution.
mkdir ~/cache
mkdir ~/log
mkdir ~/run
Testing the installation
To make sure everything is working correctly, create a test parameter file and run the simulation code.
mkdir results
mkdir test
cd test
echo 'centerLat=40.07160929510595
centerLon=-105.9405756939366
ign_radius=10
dx=100
dy=100
nx=40
ny=41
ignLat=40.07160929510595
ignLon=-105.9405756939366
ignTime=2011-10-05_00:01:00
runTime=1
spinupTime=.001
map_proj=lambert
uploadpath=localhost:results' > params.txt
python ~/WRF-GoogleEarth/computeserver/fireSim.py testsession params.txt
This will run a simulation and output kmz files to ~/results. If any errors occur, check the logs in ~/log and the run directory in ~/run/testsession.
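For example, assuming the paths configured in .globalConfig.txt above, the outputs and logs can be inspected with commands like:
ls ~/results                  # generated kmz files
tail -n 50 ~/log/*            # most recent log output
ls ~/run/testsession          # run directory left by the test session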
Connecting to a web server
At this point, the compute server is fully configured. All that remains is to set up passwordless SSH in both directions: the compute server must be able to ssh into the web server without a password (using the ~/.ssh/authorized_keys mechanism), and the web server must likewise be able to log in to the compute server without a password. See the web server configuration guide for information on how to connect the portal to this compute server.
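As a rough sketch of the key exchange (the host name webportal.example.org and the remote user name are placeholders; substitute the actual web server details), passwordless login from the compute server to the web server can be set up as follows; the same steps are then repeated on the web server toward the compute server.
# On the compute server (as wildfire): create a key pair if one does not
# already exist, then copy the public key into the web server's
# ~/.ssh/authorized_keys. The host name below is a placeholder.
ssh-keygen -t rsa
ssh-copy-id wildfire@webportal.example.org
ssh wildfire@webportal.example.org    # should now log in without a password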