
Ndown running for days

gfarache

New member
Hello. I'm running WRF with ndown for the following 3 nested meshes:
First mesh: 5.55km resolution - 411x391 nodes
Second mesh: 1.11km resolution - 1496x1411 nodes
Third mesh: 370m resolution - 3829x3451 nodes
The third mesh is a subregion of the second one and the second a subregion of the first one.

I chose to run with ndown because the outermost domain has a small number of grid points, which would limit the number of parallel processes available to the other domains if they were all run as one nested simulation.
The first domain (5.55 km) was simulated with WRF without any problem. Running ndown to get the boundary conditions for the second domain (1.11 km) also worked, and I could then run WRF for the second domain without issues. But when I try to run ndown for the third domain (370 m), it never finishes. Ndown has been running to generate the boundary conditions for the third domain for 3 days in a row and still hasn't ended. What could it be?
I'm running ndown in parallel with 256 cores.
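For context, the d02-to-d03 ndown step uses a two-domain namelist in which d02 plays the role of the parent. Below is a simplified sketch of the kind of &domains settings involved, not my actual file; the values are inferred from the mesh sizes listed above, and anything else in a real namelist is omitted:

```
&domains
 max_dom           = 2,
 e_we              = 1496, 3829,   ! d02 acts as the "d01" for this ndown step
 e_sn              = 1411, 3451,
 dx                = 1110, 370,    ! metres: 1.11 km parent, 370 m nest
 dy                = 1110, 370,
 parent_grid_ratio = 1, 3,         ! 1110 m / 370 m = 3
/
```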
 
Can anyone help me? I cancelled the execution and tried running ndown in serial mode, without MPI, to check whether it was a parallel problem. The serial run has now been going for a day.
I don't know whether this is a namelist misconfiguration or whether, given the size of the inner mesh, it really is this expensive a process.
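Some rough arithmetic on the mesh sizes from my first post shows why the d02-to-d03 step is much heavier than d01-to-d02 (a quick sketch; it only counts horizontal grid points, ignoring vertical levels and time steps):

```python
# Horizontal grid points per domain (node counts from the mesh list above)
d01 = 411 * 391      # 5.55 km domain
d02 = 1496 * 1411    # 1.11 km domain
d03 = 3829 * 3451    # 370 m domain

print(d01, d02, d03)
# d03 has roughly 6.3x the points of d02, so each interpolated
# boundary-condition time is correspondingly more expensive
print(f"d03/d02 points ratio: {d03 / d02:.1f}")
```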
 
Hi,
Can you attach the namelist.input files you used for the 3 different domain simulations (i.e., the initial coarse-resolution run, when running from d01 to d02, and then the one causing the problem, d02 to d03)? Can you also attach your rsl.* files for the d02 to d03 simulation - from when you run it with distributed memory? Assuming there are multiple files, please package them into a single *.tar file and just attach that. Thanks!
 