
High resolution complex orography (e.g. 1 km) problems

frapas70

New member
Hi all
we are encountering big problems running the WRF model at very high resolutions (e.g. 1 km) over complex terrain (e.g. the Alps). The model ends very quickly with a seg-fault due to instability. We have tried lowering the time step to 5 s or even 3 s.
The only solution we have found is to smooth the topography many times (20 passes) in GEOGRID.TBL.
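For reference, the smoothing is the standard GEOGRID.TBL option on the terrain field; the relevant part of the HGT_M entry looks roughly like this in our setup (a sketch, not the full entry):

```
name = HGT_M
  priority = 1
  dest_type = continuous
  smooth_option = smth-desmth_special; smooth_passes=20
```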
Thanks for the help

Model version is 4.4

Francesco
 
Hi Francesco,
Unfortunately it can be really tough to get the model to run over very complex terrain. Can you attach your namelist.input file and I'll see if there's anything else I can suggest? Thanks!
 
Hello,

I'm posting my question in this thread since I am experiencing a similar problem. First, let me say that I'm fairly new to WRF but I've been consulting with a colleague with a good amount of experience and so far we've not been able to figure out the problem.

My setup:
The grid is 5 km and I'm trying to use thin vertical levels, starting at about 50 m thickness near the surface and stretching to ~600 m at the model top (~42 km). The goal is to resolve orographic gravity waves over the Andes (i.e. complex terrain). I'm using the automatic levels configuration. The boundary conditions are from ERA5 (obtained following this guide: Initializing the WRF model with ERA5 (pressure level)). Following my colleague's suggestion, I'm not using the 16-point interpolation options, to avoid problems with my tight levels (only wt_average_4pt in METGRID.TBL). I've tried both highres and lowres in geogrid and I get the same problem. I'm attaching the relevant files.
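For clarity, the automatic-levels settings I'm referring to live in the &domains record of namelist.input; roughly like this (the values here just illustrate my setup, they are not copied verbatim from the attached file):

```
&domains
 auto_levels_opt = 2,     ! automatic hybrid level computation
 dzbot           = 50.,   ! thickness (m) of the lowest model layer
 dzstretch_s     = 1.3,   ! stretching factor near the surface
 dzstretch_u     = 1.1,   ! stretching factor aloft
 max_dz          = 600.,  ! cap on layer thickness (m)
/
```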

My problem:
WRF crashes after the first time step. However, if I increase dzbot from 50 to, say, 100 m, it works. Following a few online discussions, including this old post (Extrapolation over complex terrain and propagation to real.exe), I found an issue that might be the cause. I've generated a figure to illustrate what I observed: I read the first met_* file and the wrfinput, calculated dz over my domain, and then looked at one example where one of the dz values is < 0:

[Figure: dz from wrfinput (top) and ERA5 pressure from the met_ file (bottom), both plotted against geopotential height; red lines mark terrain height]
The top plot is the dz from the wrfinput file as a function of geopotential height [(PHB + PH) / g], and the bottom plot is the met_ data from ERA5 (after running metgrid), showing pressure (field PRES) as a function of geopotential height (field GHT). The red vertical lines show the terrain height (variable HGT) from the met_ and wrfinput files. I'm not sure if this is a problem, but my guess is that there should not be data below the terrain. It looks like real is interpolating using data below the terrain, which is causing the negative dz in the top plot.
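The dz calculation I describe above boils down to a few lines; here is a minimal sketch (PH/PHB follow the wrfinput variable names, but the column below is synthetic rather than read from my files):

```python
G = 9.81  # gravitational acceleration (m s^-2)

def layer_thickness(ph, phb):
    """Layer thickness dz (m) for each model layer, from the perturbation
    (PH) and base-state (PHB) geopotential on the staggered levels."""
    z = [(p + pb) / G for p, pb in zip(ph, phb)]   # interface heights (m)
    return [z[k + 1] - z[k] for k in range(len(z) - 1)]

# Synthetic column: interfaces at 0, 50 and 150 m -> dz = [50, 100]
phb = [0.0, 490.5, 1471.5]   # base-state geopotential (m^2 s^-2)
ph  = [0.0, 0.0, 0.0]        # perturbation geopotential
dz = layer_thickness(ph, phb)
# a negative dz flags an inverted level interface, like the case I plotted
assert all(d > 0 for d in dz)
```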

Please let me know if I've missed anything that you need to better understand my problem. Any help that you can provide will be greatly appreciated!
 

Attachments

  • namelist.input (3.4 KB)
  • namelist.wps (1.8 KB)
  • GRIDS_TBL.zip (5.9 KB)
@demiangomez,
Have you tried putting a coarser outer grid around your 5 km grid to see if that makes any difference? If not, try a 15 km grid with a 3:1 ratio and see if it still stops immediately. If so, can you package your rsl.error.* files into a single *.tar file and attach them for us to take a look? Thanks!
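Something along these lines in the &domains record, where the nest starting indices and grid sizes are placeholders you would adjust for your area:

```
&domains
 max_dom           = 2,
 dx                = 15000,  5000,
 dy                = 15000,  5000,
 parent_id         = 1,      1,
 parent_grid_ratio = 1,      3,
 i_parent_start    = 1,      30,
 j_parent_start    = 1,      30,
 e_we              = 100,    151,   ! (e_we - 1) of the nest divisible by the ratio
 e_sn              = 100,    151,
/
```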
 
Hi @kwerner,

I added a 15 km grid around my area and WRF is still crashing. I also tried making the bottom dz ~150 m; it runs just a single time step and then crashes. I'm also sending you a few other relevant files in case you need to take a look at them. Thanks in advance!
 

Attachments

  • wrf_run.zip (107.1 KB)
Since those files do not include a backtrace (perhaps because the run was parallel), here is the backtrace from a serial run:

Backtrace for this error:
#0 0x7f8332895ad0 in ???
#1 0x7f8332894c35 in ???
#2 0x7f83320bc51f in ???
at ./signal/../sysdeps/unix/sysv/linux/x86_64/libc_sigaction.c:0
#3 0x7f8332110a7c in __pthread_kill_implementation
at ./nptl/pthread_kill.c:44
#4 0x7f8332110a7c in __pthread_kill_internal
at ./nptl/pthread_kill.c:78
#5 0x7f8332110a7c in __GI___pthread_kill
at ./nptl/pthread_kill.c:89
#6 0x7f83320bc475 in __GI_raise
at ../sysdeps/posix/raise.c:26
#7 0x7f83320a27f2 in __GI_abort
at ./stdlib/abort.c:79
#8 0x7f83321036f5 in __libc_message
at ../sysdeps/posix/libc_fatal.c:155
#9 0x7f833211ad7b in malloc_printerr
at ./malloc/malloc.c:5664
#10 0x7f833211ceef in _int_free
at ./malloc/malloc.c:4588
#11 0x7f833211f4d2 in __GI___libc_free
at ./malloc/malloc.c:3391
#12 0x5578916c5e78 in ???
#13 0x5578916d916e in ???
#14 0x55789104eaef in ???
#15 0x5578905b847b in ???
#16 0x55788fde461e in ???
#17 0x55788fbd376a in ???
#18 0x55788ec1c963 in ???
#19 0x55788eb9e4f7 in ???
#20 0x55788eb9d97e in ???
#21 0x7f83320a3d8f in __libc_start_call_main
at ../sysdeps/nptl/libc_start_call_main.h:58
#22 0x7f83320a3e3f in __libc_start_main_impl
at ../csu/libc-start.c:392
#23 0x55788eb9d9b4 in ???
#24 0xffffffffffffffff in ???
 
Hi,
Thanks for sending those items. I have reached out to a colleague to get their opinion on this. They are confused about why dz only reaches around 600 m when the setting is 2000 m. They suggested you plot "DNW" from the wrfinput file. They believe the problem likely occurred during the simulation, not with the levels themselves.
 
Hi @kwerner,

Sorry it took so long to get back to you. I think you might be reading the namelist.input file I sent originally, which indeed has max_dz = 2000; I might have uploaded the wrong file. Please check the namelist.input inside the zip file from my previous message. You will see there that max_dz = 500. I'm attaching a PostScript file with a plot of DNW for the run in "wrf_run.zip". I'm not sure what I'm seeing here, so please let me know if you need anything else.

Again, thanks for your help.
 

Attachments

  • DNW.ps (23.6 KB)
Hi Demian,
As you are modeling over complex terrain, you should set w_damping = 1 and increase the epssm value (try not to go above 0.7).
I usually run models over the Andes at up to 1 km resolution, and those are the best options to check after the time step (I think 1 is too low if you are running at 15 km resolution).
Also, for WPS you should use 'highres' for the 15 km data; the low-resolution dataset might be too coarse for what you are doing.
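In namelist terms that means something like the following in &dynamics (epssm is per-domain; 0.5 is just an example between the 0.1 default and the 0.7 ceiling I mentioned):

```
&dynamics
 w_damping = 1,          ! damp anomalously large vertical motions
 epssm     = 0.5, 0.5,   ! sound-wave off-centering, default 0.1
/
```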
 
Per my colleague,
The DNW has a distinct V shape, whereas we prefer a U shape. This means you either have more levels than you need or your stretching factor is too large. Reduce these to the minimum values that still work, to get a more optimal DNW shape.
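To illustrate the stretching-factor point, here is a toy sketch of how the thickness ladder is built (just the geometric idea, not WRF's actual auto-levels algorithm):

```python
def dz_ladder(dzbot, stretch, max_dz, ztop):
    """Stack layers with dz_{k+1} = min(stretch * dz_k, max_dz) until
    the column reaches ztop; return the list of layer thicknesses."""
    dz, z, out = dzbot, 0.0, []
    while z < ztop:
        out.append(dz)
        z += dz
        dz = min(dz * stretch, max_dz)
    return out

gentle = dz_ladder(50.0, 1.1, 600.0, 42000.0)
steep  = dz_ladder(50.0, 1.5, 600.0, 42000.0)
# A large stretch factor hits max_dz within a handful of levels, giving
# an abrupt "V"-like transition in dz instead of a smooth "U".
assert len(steep) < len(gentle)
```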
 