
Nesting boundary over mountainous terrain


wallis
Hi,
I would like to run a high-resolution (2-3 km) forecast for the Sierras, nesting down from GFS -> ~9 km -> ~3 km. Because the Sierras are so large, I cannot place the inner domain border entirely away from mountainous terrain; it has to cross the mountains somewhere, and it is always on this border that I get crashes.

Is there some way to apply extra smoothing near the nest border without smoothing the whole inner domain's terrain? The area of interest is a long way from the border, so I think it should be outside the area of influence. Is there an option like smooth_cg_topo for nest borders?

Otherwise, can I go directly to 2-3 km from GFS?
 
Hi,
Even using GFS 0.25-degree data, a 3 km grid is about a 9:1 ratio, which is a pretty big jump, but you can certainly test it to see whether you get reasonable results. I would recommend running for just a few hours, up to a day, to see if the results are as you would expect.
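
If you try that, a short trial can be set up in &time_control; a minimal sketch, with illustrative values rather than anything taken from your setup:

&time_control
 run_days         = 0,
 run_hours        = 24,     ! a few hours up to a day is enough for a sanity check
 history_interval = 60,     ! minutes; frequent enough output to inspect the fields
/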

If the above solution is not adequate, I'm not aware of a smoothing option for the nest. When you say the model is crashing, are you getting CFL errors? If so, what have you tried to correct it? Can you attach your namelist.input file and let me know which version of WRF you are running?
 
I get errors like this, always at L=1, and usually (but not always) with I,J on the boundary between the domains:

WARN5: NaN temperature; I,J,L,P= 465 95 1 59.02021
WARN1: Water saturation T<180K; I,J,L,TC,P= 463 40 2 -93.15373 76.89545
WARN5: NaN temperature; I,J,L,P= 465 38 1 59.04354

This is followed by an immediate crash of the model. I am running WRF 4.2.2.

It always seems to happen at the edge of the blending zone, ~5-10 points from the boundary; in this case it is on the boundary of the parent domain, but more often it is at the nest boundary:
parent_id = 1, 1
parent_grid_ratio = 1, 3
i_parent_start = 1, 94
j_parent_start = 1, 214
e_we = 475, 790
e_sn = 560, 904
dx = 9900,3300
dy = 9900,3300
eta_levels = 1.000, 0.998, 0.994, 0.987, 0.980, 0.971, 0.961, 0.951, 0.941, 0.930, 0.920, 0.909, 0.899, 0.888, 0.877, 0.866, 0.855, 0.844, 0.832, 0.820, 0.809, 0.797, 0.784, 0.772, 0.760, 0.747, 0.734, 0.721, 0.707, 0.693, 0.679, 0.665, 0.650, 0.635, 0.620, 0.604, 0.588, 0.572, 0.555, 0.537, 0.519, 0.500, 0.480, 0.460, 0.439, 0.416, 0.392, 0.367, 0.340, 0.310, 0.288, 0.269, 0.250, 0.231, 0.212, 0.192, 0.173, 0.154, 0.135, 0.115, 0.096, 0.077, 0.058, 0.038, 0.019, 0.000,

Are my upper-level eta_levels perhaps too close together?
 
A few tests you could do:
1) Try just using the model levels that WRF provides, i.e., don't set eta_levels and let the model choose for you (see the sketch after this list). You'll need to re-run real.exe for this test. If that test runs okay, then you know your specific eta levels are causing the problem.
2) Are you running multiple different tests, with different dates? I'm asking because you said you "always" get these errors. If it's the same test dates, verify that the data are okay.
3) What happens if you just run a single domain? I know this is not the domain you're interested in, but I'd like to see what the impact of your nested domain has on this error.
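
For test 1, the change amounts to removing (or commenting out) the eta_levels list in &domains while keeping e_vert, so that real.exe generates its own level distribution. A minimal sketch, assuming you keep the same level count:

&domains
 e_vert = 66, 66,    ! same number of levels as the explicit list had
 ! eta_levels = 1.000, 0.998, ...   <- removed; real.exe will compute the levels
/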

In your response will you please also attach your namelist.input, along with the rsl files (packaged together in a single rsl.tar file)? Thanks!
 
All of my errors look like this (~8-12 points from the boundary), but not all of my runs fail; about 20% of my runs on real data are failing. I thought it was always over mountainous terrain, but here it is actually quite a few grid points away, and it is happening in the outer domain, though still right on the border.

I recompiled with CFL printouts enabled to show the problem points:
d01 2021-04-26_18:47:00 5 points exceeded W_CRITICAL_CFL in domain d01 at time 2021-04-26_18:47:00 hours
d01 2021-04-26_18:47:00 Max W: 466 74 31 W: 10.44 W-CFL: 1.23 dETA: 0.01
d01 2021-04-26_18:47:00 Max U/V: 466 74 31 U: -18.52 U-CFL: -0.04 V: 7.74 V-CFL: 0.02
d01 2021-04-26_18:47:00 14 points exceeded W_CRITICAL_CFL in domain d01 at time 2021-04-26_18:47:00 hours
d01 2021-04-26_18:47:00 Max W: 467 78 24 W: 8.90 W-CFL: 1.35 dETA: 0.01
d01 2021-04-26_18:47:00 Max U/V: 467 78 24 U: -17.20 U-CFL: -0.03 V: 5.29 V-CFL: 0.01
d01 2021-04-26_18:47:00 29 points exceeded W_CRITICAL_CFL in domain d01 at time 2021-04-26_18:47:00 hours
d01 2021-04-26_18:47:00 Max W: 469 80 31 W: 11.52 W-CFL: 1.40 dETA: 0.01
d01 2021-04-26_18:47:00 Max U/V: 469 80 31 U: -16.53 U-CFL: -0.03 V: 6.86 V-CFL: 0.01
d01 2021-04-26_18:47:00 24 points exceeded W_CRITICAL_CFL in domain d01 at time 2021-04-26_18:47:00 hours
d01 2021-04-26_18:47:00 Max W: 469 81 31 W: 12.96 W-CFL: 1.59 dETA: 0.01
d01 2021-04-26_18:47:00 Max U/V: 469 81 31 U: -23.57 U-CFL: -0.05 V: 3.54 V-CFL: 0.01


I uploaded the full logs and namelist as err.tgz to the Nextcloud.
 
**UPDATED**

Thanks for sending those. If you're getting CFL errors, then we know the model is becoming unstable, likely due to complex terrain. I see you have epssm set to 0.15; you could try a higher value (up to 0.5). I would try that first.
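
In namelist terms, that is a one-line change in &dynamics (one value per domain; the sketch below assumes you take it all the way to the 0.5 ceiling):

&dynamics
 epssm = 0.5, 0.5,   ! off-centering of vertical sound waves; currently 0.15
/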

If that doesn't work: I see you're already using a lower time_step, but unfortunately you may have to go even lower. For this particular run the CFLs are on domain 01, so as a trial you could run just a single domain to find the time_step value needed to get past this error, then try with both domains to make sure that value works. You could also try not setting eta_levels, as I previously suggested, to see if that makes a difference.
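
If you are using adaptive time stepping, the relevant &domains entries would look something like the sketch below; all of the values are illustrative, not a recommendation (with a fixed step, simply lower time_step itself):

&domains
 use_adaptive_time_step = .true.,
 starting_time_step     = 30, 10,   ! seconds: coarse domain, nest
 min_time_step          = 12, 4,    ! floor the adaptive step may reach
 max_time_step          = 54, 18,   ! roughly 6 x dx in km
/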

I'm curious - you said you recompiled with CFL printouts enabled. These should just automatically print out with a standard compile. What did you need to do differently to get these to print out? Did none print out with a standard compile?
 
Those points are geographically over flat land - isn't it suspicious that they're consistently occurring near the domain edge?
I will try epssm - are there side-effects to this?
Would adjusting spec_exp or relax_zone achieve anything?
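
For reference, the settings I mean live in &bdy_control; a sketch with a widened relaxation zone (the WRF defaults are spec_zone = 1, relax_zone = 4, spec_exp = 0., and the values below are only illustrative):

&bdy_control
 spec_bdy_width = 10,    ! must equal spec_zone + relax_zone
 spec_zone      = 1,
 relax_zone     = 9,
 spec_exp       = 0.33,  ! exponential decay across the relaxation zone
/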
 
Attached are some ncview charts from this region just prior to the crash.
 

Attachments

  • hgt.png
  • v.png
  • w.png
  • u.png
  • rainnc.png

Yeah, the fact that they are near the boundary does seem odd. To answer your question regarding epssm: the option is used to slightly forward the centering of the vertical pressure gradient (or sound waves) in an effort to damp 3-D divergence. Altering it any more than 'slightly' can begin to introduce inaccuracy, which is why we don't recommend going beyond 0.5.

In addition to trying that, please try the other tests I suggested. Even if it's not your intent to run with only a single domain, or to turn off eta_levels, if those run okay, it can get us closer to solving the problem.
 
Sorry for the delay in performing those tests.

Results:
  • Switching to d01 only still crashed in the same place; subsequent tests were run this way, as it's much quicker.
  • epssm = 0.5 still crashed.
  • Increasing spec_exp/relax_zone made it crash faster.
  • Switching to the eta_levels that real.exe defines still crashed in approximately the same place.
  • Reducing the minimum timestep to 12 succeeds, but with points exceeding W_CRITICAL_CFL every timestep.

It seems to me that the timestep is a band-aid fix and that these border discontinuities are being caused by something else. How can I debug this further?
 
I did find that disabling DFI and going back to the default advection order reduced the CFL violations, allowing the run to finish with a slightly higher timestep, but the discontinuity is clearly still there in the charts all the same; see the W and T2 plots.
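
Concretely, the two changes were along these lines (the values shown are WRF's defaults; I have omitted my earlier non-default settings):

&dfi_control
 dfi_opt = 0,           ! digital filter initialization disabled
/

&dynamics
 h_mom_adv_order = 5,   ! horizontal momentum advection (default)
 h_sca_adv_order = 5,   ! horizontal scalar advection (default)
 v_mom_adv_order = 3,   ! vertical momentum advection (default)
 v_sca_adv_order = 3,   ! vertical scalar advection (default)
/
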
Any ideas?
 

Attachments

  • ex.png
  • ex2.png

Hi,
I've run this by a colleague to get more input.

1) We notice that your namelist includes the parameter 'zadvect_implicit,' which is a new option that was released just yesterday in the V4.3 code. Although the option is set to 0 (at least in the version of the namelist I have from you), you must have some knowledge of this option, and we are therefore wondering whether you are using WRF code that includes modifications for the IEVA option. If so, we know that early versions of those modifications caused some instability, and we suggest that you try the newly-released V4.3 code.

2) Your lowest model level is really low, which can limit the time step usable over complex terrain (see the namelist sketch after this list).

3) Discontinuities can occur along lateral boundaries due to different model physics - it may be worth trying a few different options, if you haven't already.

4) If none of the above are helpful in finding a solution, we would be interested in seeing a plot of your terrain field (HGT) for domain 1.
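
Regarding points 1 and 2, the relevant namelist pieces would look roughly like the sketch below; the values are illustrative only, and the eta_levels line is a hypothetical thicker-first-layer variant of yours, not a tested set:

&dynamics
 zadvect_implicit = 1,   ! IEVA implicit-explicit vertical advection, new in V4.3 (0 = off)
/

&domains
 eta_levels = 1.000, 0.995, ...,   ! hypothetical: first interface raised from 0.998
/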
 