Hello,
I have run WRF 4.4.1 (compiled with dm+sm) successfully for a 9-3-1 km nested domain. I am now trying to use the hourly output from the inner 1 km domain together with ndown to downscale a small part of that domain to 333 m resolution. I have successfully created met_em files as well as a wrfinput file for the new 333 m domain using real.exe. I renamed that wrfinput file to wrfndi_d02 and checked that the fields look reasonable. ndown ran successfully, and I renamed the wrfinput and wrfbdy files it produced so that they end in _d01 before running srun ./wrf.exe in a Slurm batch script for the 333 m domain only. After some time (a few minutes to a few hours, depending on how many processors I choose) the job stops, and only the first output file is created. I cannot find any error in the rsl.error files.
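In case the exact sequence matters, this is roughly what I did after real.exe (the file names are the ones real.exe and ndown produce for the 1 km / 333 m pair in my setup; exactly how ndown.exe is launched should not matter, I show srun for both runs, and the Slurm batch script essentially just wraps the srun call):

  mv wrfinput_d02 wrfndi_d02      # fine-grid input from real.exe becomes the ndown input
  srun ./ndown.exe                # writes wrfinput_d02 and wrfbdy_d02
  mv wrfinput_d02 wrfinput_d01    # rename ndown output for the single-domain run
  mv wrfbdy_d02 wrfbdy_d01
  srun ./wrf.exe                  # 333 m domain only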
The original run used sst_update = 1, but I don't know how to create new wrflowinp files with ndown. As I don't need updated SSTs for this new run, I changed it to sst_update = 0; I assume this is not part of the problem.
My original 9-3-1 km nested run used time_step = 30 with parent_time_step_ratio = 1, 3, 3. For the ndown run with the 1 km and 333 m domains, I kept time_step = 30 but changed parent_time_step_ratio to 9, 3, which I think keeps the 1 km domain consistent with the original run (3 x 3 = 9, i.e. an effective time step of 30/9, about 3.3 s).
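For reference, the time-step entries in the &domains section for the 1 km / 333 m pair look like this (only the lines discussed above are shown; d01 is the old 1 km domain, d02 the new 333 m domain, and everything else is as in the attached namelists):

  &domains
   time_step              = 30,
   parent_time_step_ratio = 9, 3,
  /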
Do you have any idea what might be wrong?
Attached are the namelists (for ndown and for the subsequent wrf run) together with the first rsl.error file.
P.S.: My domain contains complex terrain, so I used 3 smoothing passes with the 1-2-1 smoothing option. In an earlier simulation, I successfully ran WRF with a 3.6-1.2-0.4 km nest over almost the same region without any CFL errors. As that 0.4 km domain is very similar to my new 333 m domain, I expect it should be possible to run the 333 m domain stably with ndown.
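For completeness, the terrain smoothing mentioned above is set in GEOGRID.TBL for the terrain height field (the HGT_M entry in my table); only the two smoothing lines are shown here:

  name=HGT_M
        smooth_option=1-2-1
        smooth_passes=3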