
CFL error when running last step of ndown case

This post was from a previous version of the WRF&MPAS-A Support Forum. New replies have been disabled and if you have follow up questions related to this post, then please start a new thread from the forum home page.


I have been trying to run an ndown case, but when I execute wrf.exe with only domain 2, a CFL error stops the model after 40 seconds.
I don't know what is happening, as I ran the same model configuration with two-way nesting and it worked. I can't find any inconsistencies or incompatibilities, but maybe I'm missing something.
I have attached the final namelist with the configuration used to run wrf.exe (namelist.input) and the earlier one prepared for ndown.exe (namelist3.input; it is the same one I used to run real.exe for both domains, but with io_form_auxinput2 = 2 set for the ndown.exe step).
Thanks in advance,


  • namelist.input
    4.7 KB · Views: 45
  • namelist3.input
    4.9 KB · Views: 48
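For context, the sequence I'm following is the usual ndown procedure; this is only a sketch, assuming the standard WRF executable and file-naming conventions:

```sh
# Sketch of the ndown workflow (standard WRF file names assumed).
./real.exe                    # max_dom = 2: writes wrfinput_d01, wrfinput_d02, wrfbdy_d01
mv wrfinput_d02 wrfndi_d02    # ndown.exe expects the fine-grid input under this name
# set io_form_auxinput2 = 2 in &time_control, then:
./ndown.exe                   # reads wrfout_d01_* and wrfndi_d02, writes new wrfinput_d02 and wrfbdy_d02
mv wrfinput_d02 wrfinput_d01  # one common convention: run the fine grid
mv wrfbdy_d02   wrfbdy_d01    # as a single domain with max_dom = 1
./wrf.exe
```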
Hi Ming Chen,
I also tried turning on a PBL scheme (in fact, I tried copying d01's physics, but it crashed anyway). I didn't want to turn it on at the beginning because I was treating d02 as LES.
Do you know what else could be going wrong?
We run LES only in high-resolution cases, i.e., when the grid interval is smaller than roughly 100 meters. For your case with dx = 1 km, I don't think LES is an appropriate option.

CFL violations are caused by numerical instability. Setting w_damping = 1 might be helpful. But if the problem is caused by unreasonable physics, then I am not sure whether it can solve the problem.
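For example, w_damping is set in the &dynamics section of namelist.input; the epssm value below is only illustrative (its default is 0.1, and it is sometimes raised slightly for stability over steep terrain):

```
&dynamics
 w_damping = 1,     ! damp excessive vertical motion to help avoid CFL violations
 epssm     = 0.2,   ! illustrative: off-centering of vertically propagating sound waves
/
```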
I tried activating a PBL scheme for d02 and turning off cumulus parameterization in the finer domain, since I found it isn't recommended for scales below 3 km. I attach again the namelist I used to run ndown.exe; I didn't change anything in d02's physics before running wrf.exe afterwards, but the model still crashes.
Did I get rid of the unreasonable physics? Why is it still not working?


  • namelist3.input
    5 KB · Views: 39
Is there any special reason that you set "periodic_x = .true."? I suppose this is a real-data case. Please let me know if I am wrong.

For a real-data case, please set the following options:

spec_bdy_width = 5,
spec_zone = 1,
relax_zone = 4,
specified = .true., .false.,.false.,
nested = .false., .true., .true.,

You also need to set
radt = 5, 1,

Please try again with the above options and let me know whether the case can work. Thanks.
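For clarity, the complete &bdy_control section would look like the following (the periodic flags are shown explicitly, assuming you remove your periodic_x setting); radt goes in &physics, and the rule of thumb is radt in minutes ≈ dx in km, hence 5 and 1 for 5-km and 1-km grids:

```
&bdy_control
 spec_bdy_width = 5,
 spec_zone      = 1,
 relax_zone     = 4,
 specified      = .true.,  .false., .false.,
 nested         = .false., .true.,  .true.,
 periodic_x     = .false.,
 periodic_y     = .false.,
/
```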
I changed the options you mentioned in the &bdy_control section and it worked! I only tried a simple test, but it ran for an hour without a problem.
Could you give me a short explanation of what these options do? I had never changed them, since I don't know them very well and they had never caused me problems.
The 'specified' boundary condition is designed for real-data cases. It updates the lateral boundary condition so that the large-scale forcing can enter the WRF domain during the integration.
Your previous 'periodic' setting is mainly for idealized cases and shouldn't be applied to a real-data case.
Hello again,
I used your advice to run ndown, and changing the bdy_control options in the last namelist made it work. For consistency, I wanted to change the bdy_control options in all the namelists (when running only d01, nesting, ndown, and so on), but then I got an error when running the last step again. Searching the forum, I found that some people fixed this by changing use_theta_m from 1 to 0, so I did the same and ndown finished successfully.
However, when checking the results I found something strange; I attach a picture to explain it. The blue line is a simulation run with the same physical configuration, using an adaptive time step and use_theta_m = 1, and the black line is the result of the ndown run, also with an adaptive time step. The graph shows the hourly precipitation at a given point. Why are the results so different?
Does the theta option affect this, or is it also related to the use of ndown (the way the two domains communicate)? I was expecting both runs to give the same results, but since I had to change use_theta_m to get ndown to run, something seems wrong...


  • ndown.PNG
    61.7 KB · Views: 880
Would you please clarify your question with more details?

(1) The blue line is the result of a single-domain run driven by large-scale forcing data. Is this right? If so, what are your forcing data, grid intervals, and physics options?

(2) The black line is the result of ndown, indicating that you ran this case using the coarse-domain wrfout as the forcing data. Is this right? Again, what are the resolution and physics options?

Also, please let me know which version of WRF you are running.
Hello Ming Chen, thanks for your answer.
I attach the namelist from the last step of ndown so you can see the physics options and resolution. Both d01 and d02 have the same physics; d01 has a resolution of 5x5 km, while d02 is a 1x1 km grid.
The blue line is the result of a two-way nesting run with the same physics as the ndown case (black line), the only difference being that the ndown run has use_theta_m = 0 while the blue-line run has it activated. The results are from d02, and both grids (d01 and d02) have the same resolution and physics as the ndown case.
In both cases GFS input files are used, and I'm running WRF v4.3. Let me know if there is anything else you would like to know.


  • namelist.input
    5 KB · Views: 34
When looking at results in the fine-resolution domain, two-way nesting and ndown could yield very different results. This is because, with two-way nesting, the simulated results in the fine-resolution domain will feedback to the coarse-resolution domain, and the forcing for the fine-resolution domain is updated every time step. For ndown, however, there is no feedback between the two domains and the forcing is updated at the frequency of wrfout output interval.
We usually recommend running WRF in two-way nesting and looking at the results over the fine-resolution domain.
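In namelist terms, the relevant switches are sketched below (values are illustrative): feedback in &domains controls two-way nesting, while for ndown the forcing can only be refreshed at the d01 history_interval in &time_control:

```
&time_control
 history_interval = 60, 60,   ! ndown forcing is updated only at this d01 output interval (minutes)
/
&domains
 feedback = 1,                ! two-way nesting: fine-grid solution feeds back to the coarse grid
/
```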