
How to resolve CFL errors?

Felipe_SM

New member
Hello,

I have been attempting to run WRF in central Chile, a region characterized by complex topography due to the Andes mountain range. I used three domains of 36, 12, and 4 km with time steps of 6dx, 4dx, and 3dx (216, 144, and 108 seconds, respectively). I am using the ERA5 dataset with 50 vertical layers. In all cases, upon reviewing the rsl.error and rsl.out files, I encounter messages like the following:

From rsl.error.0007:

2000-06-13_21:23:24 MAX AT i,j,k: 97 62 17 vert_cfl,w,d(eta)= 2.01002645 -2.92913198 4.20067906E-02
2000-06-13_21:25:12 1 points exceeded cfl=2 in domain d01 at time 2000-06-13_21:25:12 hours
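
For context, in namelist terms the setup above corresponds roughly to the following &domains entries (a sketch rather than a copy of my actual namelist.input: grid dimensions and other entries are omitted, the nest time-step ratio is assumed, and the "!" comments are explanatory only):

  &domains
   time_step              = 216,                   ! 6*dx(km) of d01; runs with 144 s and 108 s were also tested
   max_dom                = 3,
   dx                     = 36000., 12000., 4000., ! 36, 12 and 4 km domains
   dy                     = 36000., 12000., 4000.,
   e_vert                 = 50,     50,     50,    ! 50 vertical levels in each domain
   parent_grid_ratio      = 1,      3,      3,
   parent_time_step_ratio = 1,      3,      3,     ! assumed equal to the grid ratio
  /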

I have attached the following summary table:

[Attached image: summary table of dt, #CFL2(error), CFL Maximo (error), and #CFL>2 per domain]

Here dt is the time step, #CFL2(error) is the number of times the CFL message appeared, CFL Maximo (error) is the maximum CFL value found across all rsl.error* files, and #CFL>2: dom i is the number of times the CFL message appeared in domain i.
It is worth noting that all of my simulations complete the WRF run successfully and the precipitation results look reasonable; I only noticed these messages when analyzing the output. In my search for solutions, I found various recommendations on the forum, such as:

  • w_damping = 1: This option activates vertical velocity damping, which limits large vertical velocities that would otherwise destabilize the run.
  • diff_6th_opt = 2: This setting applies sixth-order numerical diffusion (monotonic option) to filter poorly resolved small-scale noise and improve stability.
  • damp_opt = 2: This parameter activates Rayleigh damping near the model top to help control the reflection of gravity waves off the upper boundary.
  • epssm = 0.9: This value controls the time off-centering (damping) of vertically propagating sound waves. If the model domain is over a complex terrain area with large topography gradients, it is recommended to increase this value (the corresponding namelist entries are sketched just below this list).
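Combined, these recommendations correspond to &dynamics entries roughly like the following (a sketch only: the values for diff_6th_factor, zdamp and dampcoef are commonly used companion settings that I have not tuned myself, and the "!" comments are explanatory only):

  &dynamics
   w_damping       = 1,                     ! damp large vertical velocities
   diff_6th_opt    = 2,     2,     2,       ! 6th-order diffusion, monotonic option
   diff_6th_factor = 0.12,  0.12,  0.12,    ! typical strength
   damp_opt        = 2,                     ! upper-level damping
   zdamp           = 5000., 5000., 5000.,   ! depth (m) of the damping layer below the model top
   dampcoef        = 0.2,   0.2,   0.2,     ! example coefficient
   epssm           = 0.9,   0.9,   0.9,     ! off-centering of vertically propagating sound waves
  /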
I tried the options above separately and in combination, but the results were similar in all cases. Only when I reduced the time step did I see significantly fewer CFL warnings, with considerably smaller maximum values.

How can I solve this problem? What would you recommend to avoid reducing the time step and prevent this instability issue?
 
When CFL violations occur, we usually try to overcome the problem by reducing the time step, turning on w_damping, and increasing the value of epssm, which is exactly what you have done. These approaches can effectively suppress CFL violations, if not completely eliminate them.
I would say that if your case runs successfully, you don't need to worry about the CFL errors. The CFL violation messages in the RSL files only indicate that the model may be temporarily unstable in its numerical integration; the model can still overcome the instability and continue to run, and in that case we can get reasonable simulations.
 
  • epssm = 0.9: This value controls the time off-centering (damping) of vertically propagating sound waves. If the model domain is over a complex terrain area with large topography gradients, it is recommended to increase this value.
Hello,

Note that epssm "should not be set higher than 0.5", as stated by Kelly Werner HERE.

About decreasing the time step, I've also been advised to be careful: I am currently running sensitivity tests, and changing the time step might affect how I compare physics options in my outputs. This is worth keeping in mind.
 
Hello,
I have some follow-up questions:
1. Which of these is the better approach for overcoming the CFL error?
2. Let's say I am running the model at convection-permitting resolution; will setting w_damping = 1 have any negative implications for the updrafts? Will it weaken the updrafts of deep convective systems? If so, would reducing the time step be the better option in this case?
 
Hello,

Here are my thoughts based on your questions:
  1. On Overcoming CFL Errors:
When w_damping = 1, the damping is applied only when and where it’s needed to keep the model from crashing (see Skamarock et al., 2021; Section 4.5.1). Another way to look at it: when you get a vertical CFL error, the model is already telling you it cannot properly integrate the updraft—so in that sense, applying damping isn’t necessarily less "physically relevant." And practically speaking, if the model crashes, you don’t have a simulation at all.

From my own experience with ~2.3 km resolution simulations over steep terrain using convection-permitting setups, there have been cases where w_damping wasn’t enough, and I also needed to reduce the time step to avoid crashes.
  2. On Updrafts and Model Stability:
I don’t have specific insight into how much w_damping might weaken strong updrafts in deep convective systems. That’s hard to assess without dedicated analysis. However, if the model is still unstable or frequently applying damping, that may suggest issues with time step or vertical resolution. In those cases, I’ve found that reducing the time step has been helpful, though it does increase computational cost.

One approach that worked well for me is reducing vertical resolution. I compared runs at 32 and 64 vertical levels and saw very little difference in overall output, while the 32-level runs were much more stable—no crashes—compared to 64 levels, which were prone to vertical CFL errors.
  3. On Changing Time Step Between Runs:
It’s worth noting that changing the time step between runs can slightly affect the physics integration. In my tests (comparing two 1-year runs at 30s and 40s), average outputs weren’t dramatically different, but there was a noticeable difference in the “noise”—small-scale variability introduced by the solver.
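
For a rough sense of the cost side: going from a 40 s to a 30 s time step means about 4/3 as many model steps (roughly 33% more) for the same simulated period, so the extra expense scales more or less linearly with the reduction in dt.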

In summary, based on my own experience, the order I'd follow would be (the corresponding namelist variables are sketched right after this list):
  • First: lower vertical resolution if possible,
  • Then: activate w_damping = 1,
  • Finally: reduce time step if needed.
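In namelist terms, the variables involved in that order are roughly the following (values are purely illustrative, not recommendations):

  &domains
   e_vert    = 32, 32, 32,   ! step 1: fewer vertical levels, if acceptable for your application
   time_step = 108,          ! step 3 (only if still needed): a smaller time step, at extra computational cost
  /
  &dynamics
   w_damping = 1,            ! step 2: switch on vertical velocity damping
  /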
Hope this helps.
 
Interestingly, regarding the advice that epssm should not exceed 0.5: in another post I was also told that epssm can go up to 0.9 in the Himalayas.
 