
Pressure at layer n < pressure at layer n+1

This post was from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled. If you have follow-up questions related to this post, please start a new thread from the forum home page.

thomasuh

New member
Hello,

when I try to increase the number of vertical levels beyond 80 at 3 km resolution, MPAS stops after the second time step at 5 grid cells with the following message:

Code:
ERROR: --- subroutine MPAS_to_phys - pressure(1) < pressure(2):
ERROR: i =3741
ERROR: latCell=35.2428
ERROR: lonCell=74.0396
ERROR: 1 3741 1 36.2817 61700.1 4460.48 66160.5 0.827521 311.995 277.262 0.783189E-02
ERROR: 1 3741 2 42.2463 61369.9 4834.08 66203.9 0.812786 317.776 282.452 0.795462E-02
ERROR: 1 3741 3 48.3066 60991.3 4856.04 65847.3 0.808134 318.383 282.556 0.791030E-02
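The consistency check that aborts here can be reproduced offline on the input fields. A minimal pure-Python sketch of the idea (the function name and list-of-lists input format are my own, and reading the fourth numeric column above, 66160.5 / 66203.9 / 65847.3, as full pressure is my interpretation of the log):

```python
def find_nonmonotonic_columns(pressure):
    """Return (cell, level) pairs where pressure fails to decrease with height.

    `pressure` is a list of columns, one per grid cell; each column lists
    full pressure (Pa) from the lowest model level upward (0-based here,
    whereas the MPAS log above counts levels from 1).  The model aborts
    when pressure(k) < pressure(k+1) in any column, i.e. when pressure
    increases with height.
    """
    bad = []
    for cell, column in enumerate(pressure):
        for k in range(len(column) - 1):
            if column[k] < column[k + 1]:  # pressure must fall with height
                bad.append((cell, k))
    return bad

# A well-ordered column passes; the column from the log above fails
# between its first two levels, matching the error message.
print(find_nonmonotonic_columns([[100000.0, 85000.0, 70000.0]]))  # []
print(find_nonmonotonic_columns([[66160.5, 66203.9, 65847.3]]))   # [(0, 0)]
```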

This is a grid cell with steep terrain gradients. I am using the default values for setting up the vertical structure:

--- config_tc_vertical_grid = T
--- als = 0.750000000000000E-01
--- alt = 1.70000000000000
--- zetal = 0.750000000000000

Do you have any ideas where this error may come from?
 
We don't often test with more than 55 vertical layers, but with sufficient care I don't think this should intrinsically be a problem. There are a couple of tests that I think might be worth trying, and that might help us to get a better idea of where the fundamental issue lies:

(1) Could you try turning off all physics parameterizations, and also setting
Code:
&printout
    config_print_detailed_minmax_vel = true
/
?

(2) Without physics parameterizations, and depending on how far the model integrates without them, could you try cutting the model time step down to, say, six seconds?
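For reference, the model time step is set in the &nhyd_model group of namelist.atmosphere, so the change suggested in (2) would look something like this (only the changed entry is shown; the rest of the group stays as it is):

Code:
&nhyd_model
    config_dt = 6.0
/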

It may be that imbalances in the initial conditions that result from our interpolation of the first-guess fields are leading to model instabilities. If either (1) or (2) allows the model to integrate for a short time, perhaps five minutes or so, it might be interesting to try writing a model restart file five minutes into the original simulation, then increasing the model time step and turning on physics before restarting the model.
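The restart step of that procedure would amount to something like the following in namelist.atmosphere (config_do_restart and config_start_time are standard MPAS-Atmosphere options; the date below is only a placeholder for the valid time of the restart file):

Code:
&nhyd_model
    config_start_time = '2019-09-01_00:05:00'
/
&restart
    config_do_restart = true
/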
 
Hello Michael,

I tried switching off all physics schemes and the model runs without problems using 90 levels. I tried your suggestion of writing a restart file after 10 min and then restarting the model.

When the GWDO is switched on, I get:

Code:
ERROR: *******************************************************************************
ERROR: The GWDO scheme requires valid var2d, con, oa{1,2,3,4}, and ol{1,2,3,4} fields,
ERROR: but these fields appear to be zero everywhere in the model input.
ERROR: Either set config_gwdo_scheme = 'off' in the &physics namelist, or generate
ERROR: the GWDO static fields with the init_atmosphere core.
ERROR: *******************************************************************************

I checked the fields in the restart file and they are all non-zero.

With GWDO switched off, the model stops right after

" --- end initialize NOAH LSM tables"

with a seg-fault.
 
I got the restart working; I had forgotten to add "input;output" to the restart stream.
Nevertheless, the model crashes after one time step when restarting from a 10- or 20-minute simulation carried out without any physics schemes.
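For anyone hitting the same problem: the restart stream in streams.atmosphere needs type="input;output" so that the same stream is written during the first run and read back at restart. A sketch of the relevant entry (the filename template and output interval here are placeholders):

Code:
<immutable_stream name="restart"
                  type="input;output"
                  filename_template="restart.$Y-$M-$D_$h.$m.$s.nc"
                  input_interval="initial_only"
                  output_interval="0_00:10:00"/>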
 
I am still having trouble getting the restart running (10 km mesh, 80 levels, dt = 30 s, single precision). The model stops at the first time step after restart (no matter whether restarting after 5 min, 20 min, or 1 h) with
Code:
CRITICAL ERROR: NaN detected in 'w' field.

After restart, the detailed output for winds before the crash is
Code:
 global min w: -6.55331 k=15, -65.1369 lat, -62.6006 lon
  global max w: 6.68614 k=37, 22.1441 lat, 91.3282 lon
  global min u: -150.985 k=75, -69.6021 lat, 62.0948 lon
  global max u: 150.974 k=75, -69.6851 lat, 62.0367 lon
  global max wsp: 156.753 k=74, -78.8374 lat, 117.451 lon

while the normal simulation shows
Code:
 global min w: -6.55383 k=15, -65.1369 lat, -62.6006 lon
  global max w: 6.68614 k=37, 22.1441 lat, 91.3282 lon
  global min u: -150.985 k=75, -69.6021 lat, 62.0948 lon
  global max u: 150.974 k=75, -69.6851 lat, 62.0367 lon
  global max wsp: 156.753 k=74, -78.8374 lat, 117.451 lon
  global min, max scalar 1 0.00000 0.279821E-01
  global min, max scalar 2 0.00000 0.163256E-02
  global min, max scalar 3 0.00000 0.415551E-02
  global min, max scalar 4 0.00000 0.253122E-03
  global min, max scalar 5 0.00000 0.956948E-02
  global min, max scalar 6 0.00000 0.274236E-02
  global min, max scalar 7 0.00000 0.520138E+08
  global min, max scalar 8 0.00000 8495.66

The global min value of "w" is slightly different in the restart run...

The simulation starting at 00 UTC runs for at least a couple of hours without any problems (I did not test more due to computing resources).
Turning off the YSU PBL scheme, the Thompson scheme, or the GWDO scheme still shows the NaN values in the "w" field after restart.

I also tried performing a short simulation without physics and then activating all physics again at restart - same error.

I have a feeling that a variable may not be correctly initialized or read from the restart file... Maybe a precision problem?
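One quick way to gauge the precision hypothesis offline is to round-trip a value through 32-bit storage and see how large a perturbation that alone introduces. A minimal sketch using only the Python standard library (the value is the global min of w from the log above; this only illustrates storage rounding, not the model's internal arithmetic):

```python
import struct

def roundtrip_f32(x: float) -> float:
    """Store a Python float (64-bit) as IEEE-754 single precision and read it back."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

w_min = -6.55383                  # global min w from the normal run above
w_back = roundtrip_f32(w_min)     # value after a 32-bit write/read cycle
# Single-precision storage perturbs the value only at the ~1e-7 relative
# level, far smaller than the -6.55331 vs -6.55383 difference seen between
# the restart and normal runs.
print(w_min, w_back, abs(w_min - w_back))
```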
 
When running with double precision, the restart works as expected (on a different number of nodes due to memory requirements). Starting a single-precision run from a double-precision restart file also works.
 