We’re running WRF with MPI on 16 processors, which is the count we arrived at by following your guidance on choosing a processor count for our domain size. Each Cheyenne node has 36 cores, so we should be able to run two WRF executables on each node, correct? Right now, our PBS submit script has the line:
#PBS -l select=1:ncpus=16:mpiprocs=16
And then:
mpiexec_mpt dplace -s 1 ./wrf.exe >& wrf.log
for the exe line.
Is there a quick and easy way to change our submit process so we can cut our core-hour charges in half by putting two of our WRF executables on the same node, each using 16 cores for MPI? Would we use a command file pointing at two different wrf.exe runs in different directories, and then increase ncpus and mpiprocs to 32?
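For reference, here is a sketch of what we imagine the combined script might look like, two backgrounded 16-rank launches in separate run directories, with a `wait` so the job doesn’t exit early. The walltime, queue, and project code are placeholders, and we aren’t sure whether two concurrent mpiexec_mpt launches place their ranks cleanly on one node:

```shell
#!/bin/bash
#PBS -N wrf_pair
#PBS -l select=1:ncpus=32:mpiprocs=32
#PBS -l walltime=12:00:00
#PBS -q regular
#PBS -A PROJECT_CODE

# Launch two independent 16-rank WRF runs from separate directories,
# then wait for both to finish before the job ends.
cd run1
mpiexec_mpt -np 16 dplace -s 1 ./wrf.exe >& wrf.log &
cd ../run2
mpiexec_mpt -np 16 dplace -s 1 ./wrf.exe >& wrf.log &
wait
```

If this isn’t the right pattern for MPT on Cheyenne, please let us know the recommended approach.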
Thanks,
Pat