WRF-SFIRE and WRFx on Alderaan
Initial setup
Following the Atipa User Guide Phoenix.
SSH
There was no .ssh directory in my account, and passwordless ssh to the compute nodes does not work, contrary to the guide (page 8). Consequently, commands over the compute nodes such as fornodes -s “ps aux | grep user” do not work either.
Setting up ssh:
<pre>
ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</pre>
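If key-based login still fails, the usual culprit is permissions; a standard fix (my own note, not from the guide) is:
<pre>
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
</pre>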
Passwordless ssh to the head node works fine. Passwordless ssh to the compute nodes was intentionally disabled for regular users.
Compilers
<pre>
[jmandel@math-alderaan ~]$ gcc --version
gcc (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5)
[jmandel@math-alderaan ~]$ ls -l /shared
drwxr-xr-x. 3 root root 17 Mar  5 20:21 aocl-linux-gcc-2.2-5
drwxr-xr-x. 3 root root 23 Mar  5 20:20 jemalloc-5.2.1
drwxr-xr-x  7 root root 80 Mar 19 06:26 modulefiles
drwxr-xr-x. 3 root root 23 Mar  5 20:27 openmpi-4.1.0
drwxr-xr-x  3 root root 23 Mar 19 06:22 openmpi-4.1.0-cuda
ls /shared/openmpi-4.1.0/
gcc-9.2.1
ls /shared/openmpi-4.1.0/gcc-9.2.1/bin
mpiCC   mpicxx   mpif90   ompi-clean   opal_wrapper  orte-server  orterun  oshcc    oshmem_info  shmemc++  shmemfort
mpic++  mpiexec  mpifort  ompi-server  orte-clean    ortecc       oshCC    oshcxx   oshrun       shmemcc   shmemrun
mpicc   mpif77   mpirun   ompi_info    orte-info     orted        oshc++   oshfort  shmemCC      shmemcxx
</pre>
So gcc is at 8.3.1 and there is no other compiler on the system. OpenMPI seems to be compiled with gcc 9.2.1, though. Created .bash_profile with the line:
<pre>
PATH="/shared/openmpi-4.1.0/gcc-9.2.1/bin:$PATH"
</pre>
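A quick sanity check that the MPI wrappers are now picked up from the new PATH (my own check, not from the guide):
<pre>
which mpicc
mpicc --version
</pre>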
Copied example files
<pre>
mkdir test
cp -a /opt/phoenix/doc/examples test
</pre>
Fixed the missing int before main in mpi-example.c (called mpihello.c in the guide).
The guide says "Intel Math Kernel Library is installed on all Atipa clusters in /opt/intel/cmkl" but no such directory exists.
Modules
But modules are there:
<pre>
[jmandel@math-alderaan examples]$ module avail
------------------------------------------------ /shared/modulefiles ------------------------------------------------
aocl/2.2-5  gcc/9.2.1  jemalloc/5.2.1/gcc/9.2.1  openmpi-cuda/4.1.0/gcc/9.2.1  openmpi/4.1.0/gcc/9.2.1
</pre>
Changed .bash_profile to:
<pre>
module load gcc/9.2.1 openmpi/4.1.0/gcc/9.2.1
</pre>
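After logging in again (or sourcing .bash_profile), the toolchain can be verified; if the module sets up the environment as expected, gcc should now report 9.2.1:
<pre>
module list
which gcc mpicc
gcc --version
</pre>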
MPI and scheduler
Copied slurm_submit.sh from the guide and made minor changes:
<pre>
[jmandel@math-alderaan examples]$ mpicc mpi-example.c
[jmandel@math-alderaan examples]$ cat slurm_submit.sh
#!/bin/bash
### Sets the job's name.
#SBATCH --job-name=mpihello
### Sets the job's output file and path.
#SBATCH --output=mpihello.out.%j
### Sets the job's error output file and path.
#SBATCH --error=mpihello.err.%j
### Requested number of nodes for this job. Can be a single number or a range.
#SBATCH -N 4
### Requested partition (group of nodes, i.e. compute, fat, gpu, etc.) for the resource allocation.
#SBATCH -p compute
### Requested number of tasks to be invoked on each node.
#SBATCH --ntasks-per-node=4
### Limit on the total run time of the job allocation.
#SBATCH --time=10:00
### Amount of real memory required per allocated CPU (default units are MB).
#SBATCH --mem-per-cpu=100
module list
mpirun ./a.out
[jmandel@math-alderaan examples]$ sbatch slurm_submit.sh
</pre>
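The job can then be monitored and its output inspected with standard Slurm commands, e.g.:
<pre>
squeue -u $USER
cat mpihello.out.*
</pre>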
Building WRF-SFIRE
Following Running WRF-SFIRE with real data in the WRFx system
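A rough sketch of the usual WRF-SFIRE build sequence under the toolchain above, assuming NetCDF is available somewhere on the system (the path below is a placeholder); the linked page is the authoritative source for the exact steps:
<pre>
git clone https://github.com/openwfm/WRF-SFIRE.git
cd WRF-SFIRE
export NETCDF=/path/to/netcdf   # assumption: NetCDF install prefix on Alderaan
./configure                     # pick the dmpar option for GNU (gfortran/gcc)
./compile em_real >& compile.log &
</pre>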