Installing WRF on Derecho

Shuai Li

Currently, I have copied (cp) the pre-compiled WRF V4.5 and WPS V4.5 to my path. What else do I need to do to use the WRF model successfully?
I ask because I was not able to run the tutorial cases successfully.
 
Hi,
From what directory did you copy the pre-compiled code? The new Derecho-specific compiled code is in /glade/u/home/wrfhelp/derecho_pre_compiled_code. If that is where you got it from, but you are getting an error with ungrib, please set the following environment variable. For example, in a bash environment, set
Code:
export LD_LIBRARY_PATH=/glade/u/home/wrfhelp/UNGRIB_LIBRARIES/lib:$LD_LIBRARY_PATH
and then try to re-run ungrib to see if that helps. If it does, you may want to add that line to your .bashrc file (or whichever startup file you use in your home directory).
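As a minimal sketch of what that could look like (assuming a bash shell and that you run ungrib from your WPS directory; adjust paths and file names for your setup):
Code:
# Append the library path to ~/.bashrc so it is set in future sessions
echo 'export LD_LIBRARY_PATH=/glade/u/home/wrfhelp/UNGRIB_LIBRARIES/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
# Pick up the change in the current session, then re-run ungrib
source ~/.bashrc
./ungrib.exe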

If you're still having issues, can you be specific about what the problem is? Thanks!
 
Thanks! I'm able to run WPS using the Derecho pre-compiled code.
But now the problem is that when I type the following command, the system reports "No host list provided".
What is the reason for this, and how can I solve it?
Thanks again : )
 
Did you mean to include the command you typed that gives that error?
I am using the Derecho HPC and am able to run WPS.
But now the problem is that when I type "mpirun -np 4 ./real.exe" to run my case with 4 processors, the system reports "No host list provided".
I also encountered an error when trying to use a PBS script, as follows:
"qsub: Invalid account for CPU usage, available accounts:
Project, Status, Active
XXXXXX, Normal, True"
The PBS script I used is as follows:
Code:
#!/bin/csh
### Project name
#PBS -A Proj#XXXXXX
### Job name
#PBS -N geogrid
### Wallclock time
#PBS -l walltime=00:30:00
### Queue
#PBS -q economy
### Merge output and error files
#PBS -j oe
### Select 2 nodes with 36 CPUs, for 72 MPI processes
#PBS -l select=2:ncpus=36:mpiprocs=36
mpiexec_mpt ./geogrid.exe
What is causing this problem?
Is it because I am only a sub-account of the project and do not have permission to call mpirun?
 
Okay, thank you for the clarification. I would advise reaching out to the CISL support group to see if they are able to assist, because this is probably related more to the system and your environment than to the WRF model.
 
In case others find this thread, the issue here appears to be the PBS job script specifications on Derecho. Namely, the queue should now be `main`. See below (a sketch of an adapted script follows the list):

  • PBS Job Submission: PBS is used to launch jobs on Derecho, similar to Cheyenne. Job submission and monitoring via qsub, qstat, qdel, etc. are similar between the systems.
    • Queues: On Derecho the default PBS queue main takes the place of the three queues regular, premium, and economy on Cheyenne.
    • Job Priority: On Derecho users request a specific job priority via the #PBS -l job_priority=<regular|premium|economy> resource directive, instead of through distinct queues.
    • select statements for CPU jobs: Derecho CPU nodes have 128 cores. A typical PBS resource selection statement is #PBS -l select=10:ncpus=128:mpiprocs=128:ompthreads=1.
    • Memory: Each CPU node has a maximum of 235GB of RAM available for user jobs. Derecho CPU nodes are all identical - there is no notion of largemem or smallmem CPU nodes. Users requiring more memory per core than the 235GB/128 default configuration allows will need to under-subscribe CPU nodes, that is, leave some cores idle in order to increase the effective memory-per-utilized-core.
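As an illustration, here is a minimal sketch of how the earlier geogrid script might be adapted to Derecho based on the points above. The project code, walltime, node count, and MPI launcher shown here are assumptions; confirm your project code and the launcher against CISL's Derecho documentation.
Code:
#!/bin/csh
### Project code (replace XXXXXX with your own project code)
#PBS -A XXXXXX
### Job name
#PBS -N geogrid
### Wallclock time
#PBS -l walltime=00:30:00
### Derecho has a single queue; priority is requested separately
#PBS -q main
#PBS -l job_priority=economy
### Merge output and error files
#PBS -j oe
### One Derecho node: 128 cores, 128 MPI ranks, 1 thread per rank
#PBS -l select=1:ncpus=128:mpiprocs=128:ompthreads=1

### Launch with the system MPI launcher (assumed to be mpiexec here)
mpiexec ./geogrid.exe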
 