
Unable to successfully modify zap_close_levels

This post is from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled; if you have follow-up questions related to this post, please start a new thread from the forum home page.

peter_

Hi:
I am running a WRF 4.2 simulation with a model top at 1 hPa, and I am trying to lower the zap_close_levels default value to 0.1 in order to avoid the skipping of pressure levels near the model top. My forcing data are the NCEP FNL 1 deg x 1 deg analyses, and I use 100 processors. When I set the value to 0.1 or 1, WRF aborted at startup; when I set it to 10, the failure occurred within the first time step. In brief, is there a way to successfully lower the zap_close_levels value? I am including below my namelist, the tail of the slurm file, and the tail of the corresponding error file. In this case I used zap_close_levels = 10.

Code:
&time_control
run_days                 = 0,
run_hours                = 36,
run_minutes              = 0,
run_seconds              = 0,
start_year               = 2019,     2019,
start_month              = 9,        9,
start_day                = 11,       11,
start_hour               = 0,        0,
start_minute             = 00,       00,
start_second             = 00,       00,
end_year                 = 2019,     2019,
end_month                = 9,        9,
end_day                  = 12,       12,
end_hour                 = 12,       12,
end_minute               = 00,       00,
end_second               = 00,       00,
interval_seconds         = 21600,
input_from_file          = .true.,   .true.,
history_interval         = 180,       6,
history_outname          = "/scratch/peter/wrfout_d<domain>_<date>"
frames_per_outfile       = 1000,     110,
restart                  = .false.,
restart_interval         = 5000,
io_form_history          = 2,
io_form_restart          = 2,
io_form_input            = 2,
io_form_boundary         = 2,
debug_level              = 1000,
/

&domains
time_step                = 40,
time_step_fract_num      = 0,
time_step_fract_den      = 1,
max_dom                  = 2,
e_we                     = 364,      697,
e_sn                     = 382,      574,
e_vert                   = 75,       75,
p_top_requested          = 100,
num_metgrid_levels       = 34,
num_metgrid_soil_levels  = 4,
dx                       = 9000,     3000,
dy                       = 9000,     3000,
grid_id                  = 1,        2,
parent_id                = 1,        1,
i_parent_start           = 1,       46,
j_parent_start           = 1,       82,
parent_grid_ratio        = 1,        3,
parent_time_step_ratio   = 1,        3,
feedback                 = 1,
smooth_option            = 0,
max_dz                   = 750.,
auto_levels_opt          = 2,
zap_close_levels         = 10,
smooth_cg_topo           = .true.,
/

&physics
mp_physics               = 3,        3,
ra_lw_physics            = 1,        1,
ra_sw_physics            = 1,        1,
radt                     = 30,       30,
sf_sfclay_physics        = 1,        1,
sf_surface_physics       = 2,        2,
bl_pbl_physics           = 1,        1,
bldt                     = 0,        0,
cu_physics               = 0,        0,
cudt                     = 5,        5,
isfflx                   = 1,
ifsnow                   = 0,
icloud                   = 1,
surface_input_source     = 1,
num_soil_layers          = 4,
sf_urban_physics         = 0,        0,
maxiens                  = 1,
maxens                   = 3,
maxens2                  = 3,
maxens3                  = 16,
ensdim                   = 144,
/

&fdda
/

&dynamics
w_damping                = 0,
diff_opt                 = 1,
km_opt                   = 4,
diff_6th_opt             = 0,        0,
diff_6th_factor          = 0.12,     0.12,
base_temp                = 290.,
damp_opt                 = 3,
zdamp                    = 10000.,   10000.,
dampcoef                 = 0.2,      0.2,
khdif                    = 0,        0,
kvdif                    = 0,        0,
non_hydrostatic          = .true.,   .true.,
moist_adv_opt            = 1,        1,
scalar_adv_opt           = 1,        1,
/

&bdy_control
spec_bdy_width           = 5,
spec_zone                = 1,
relax_zone               = 4,
specified                = .true.,  .false.,
nested                   = .false.,   .true.,
/

&grib2
/

&namelist_quilt
nio_tasks_per_group      = 0,
nio_groups               = 1,
/

Code:
starting wrf task           93  of          100
 starting wrf task           61  of          100
 starting wrf task           62  of          100
 starting wrf task           63  of          100
srun: error: compute-4: task 92: Segmentation fault

Code:
DEBUG domain_clockadvance():  after WRFU_ClockAdvance,  clock stop time = 2019-09-12_12:00:00
 DEBUG domain_clockadvance():  after WRFU_ClockAdvance,  clock time step = 0000000000_000:000:013
d02 2019-09-11_00:00:13+01/03 module_integrate: back from solve interface
d02 2019-09-11_00:00:13+01/03 in med_latbound_in
d02 2019-09-11_00:00:13+01/03 module_integrate: calling solve interface
d02 2019-09-11_00:00:13+01/03  grid spacing, dt, time_step_sound=   3000.00000       13.3333330               4
d02 2019-09-11_00:00:13+01/03 calling inc/HALO_EM_MOIST_OLD_E_7_inline.inc
d02 2019-09-11_00:00:13+01/03 calling inc/PERIOD_BDY_EM_MOIST_OLD_inline.inc
d02 2019-09-11_00:00:13+01/03  call rk_step_prep
d02 2019-09-11_00:00:13+01/03 calling inc/HALO_EM_A_inline.inc
d02 2019-09-11_00:00:13+01/03 calling inc/PERIOD_BDY_EM_A_inline.inc
d02 2019-09-11_00:00:13+01/03  call rk_phys_bc_dry_1
d02 2019-09-11_00:00:13+01/03  call init_zero_tendency
d02 2019-09-11_00:00:13+01/03 calling inc/HALO_EM_PHYS_A_inline.inc
d02 2019-09-11_00:00:13+01/03  call phy_prep
d02 2019-09-11_00:00:13+01/03  DEBUG wrf_timetoa():  returning with str = [2019-09-11_00:00:13]
d02 2019-09-11_00:00:13+01/03  call radiation_driver
d02 2019-09-11_00:00:13+01/03 Top of Radiation Driver
d02 2019-09-11_00:00:13+01/03 calling inc/HALO_PWP_inline.inc
d02 2019-09-11_00:00:13+01/03  call surface_driver
d02 2019-09-11_00:00:13+01/03 in SFCLAY

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
#0  0x2aaaabc5333f in ???
#1  0x271f2dd in ???
#2  0x2722a46 in ???
#3  0x2727c52 in ???
#4  0x1f0c1ba in ???
#5  0x17bab87 in ???
#6  0x12f9901 in ???
#7  0x11b7ae4 in ???
#8  0x4735fa in ???
#9  0x473bda in ???
#10  0x406213 in ???
#11  0x405bcc in ???
#12  0x2aaaabc3f494 in ???
#13  0x405c03 in ???
#14  0xffffffffffffffff in ???
 
Let's look at two approaches: what you are using now, and maybe what could be missing.

What you are using now

The "zap_close_levels" is a real.exe feature. After the real.exe program runs, look at a column of data in the wrfinput_d01 file for a grid point that is over the ocean (we want as little elevation influence as possible).

1. Get the columns of data (for me, location "(1,1," is my ocean point).
Code:
# Extract the (1,1) column of PB and P, and the list of ZNU eta levels:
ncdump -v PB -f f -b f wrfinput_d01 | grep "(1,1," | cut -d"," -f 1 | awk '{print $1}' > pb.txt
ncdump -v P -f f -b f wrfinput_d01 | grep "(1,1," | cut -d"," -f 1 | awk '{print $1}' > p.txt
ncdump -v ZNU -f f -b f wrfinput_d01 | grep "(" | cut -d"," -f 1 | awk '{print $1}' > znu.txt

The last file (znu.txt) requires some editing so that we just have the eta levels. Put the columns of data into your favorite go-to plotting package (matlab, python, excel, gnuplot); we just want simple line plots.

2. Generate three figures:
a. PB+P (total pressure at each level in the column)
[attached screenshot: PB+P profile]

b. delta(P) (looks like I missed my label on this one below)
[attached screenshot: delta(P) profile]

c. delta(P)/delta(eta)
[attached screenshot: delta(P)/delta(eta) profile]

Do your figures look similarly smooth / semi-well behaved?
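If it helps, here is a minimal Python sketch of the plotting step (assuming numpy and matplotlib are available, and that znu.txt has already been trimmed down to just the eta values):
Code:
# Plot (a) total pressure, (b) delta(P), and (c) delta(P)/delta(eta)
# for the extracted column. File names match the extraction step above.
import numpy as np
import matplotlib.pyplot as plt

pb  = np.loadtxt("pb.txt")    # base-state pressure column
p   = np.loadtxt("p.txt")     # perturbation pressure column
znu = np.loadtxt("znu.txt")   # eta values on the mass levels

ptot = pb + p                 # (a) total pressure at each level
dp   = np.diff(ptot)          # (b) delta(P) between adjacent levels
deta = np.diff(znu)

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].plot(ptot);      axes[0].set_title("PB + P")
axes[1].plot(dp);        axes[1].set_title("delta(P)")
axes[2].plot(dp / deta); axes[2].set_title("delta(P)/delta(eta)")
plt.tight_layout()
plt.show()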

Maybe what could be missing

For higher model lids, we have some additional namelist options to handle the different temperature profiles in the stratosphere and above. Look in the Registry/Registry.EM_COMMON file for the namelist options iso_temp, base_pres_strat, and base_lapse_strat. You may want to relax the existing iso_temp setting (it is 200 K; set it to a physically reasonable lower bound that you expect), and then activate base_pres_strat (set this to about 5500 Pa). This warms the base-state temperatures back up when a model grid cell is physically above the selected pressure value. You can leave base_lapse_strat alone (it has a reasonable lapse rate for a standard atmosphere, and up that high, that is likely as good as anything, though you could use the first-guess data to give you some further information).
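As a sketch only (assuming these options live in the &dynamics record, as in recent WRF versions; check Registry.EM_COMMON for your build, and note that 170 K is just an illustrative lower bound, not a recommendation):
Code:
&dynamics
 iso_temp        = 170.,
 base_pres_strat = 5500.,
/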
 
Hi Dave:
Thank you very much for your reply. First of all I already tested
iso_temp = 200.,
base_pres_strat = 1300,
base_lapse_strat = -40.,
in my namelist, chosen according to my atmospheric conditions, but it did not prevent the WRF crash. Please find my graphs below. For the second and third plots I hope I interpreted your request correctly, as I used total pressure (PB+P) instead of P; if that is wrong, let me know and I will include the alternative plots. My first and second plots are similar to yours. However, my third figure clearly differs from yours from about level 20 upward. I would very much appreciate your opinion. Should I modify the configuration of the eta levels, or what else should be changed?
[attached figure: PB+P, delta(P), and delta(P)/delta(eta) profiles]
 
How comfortable are you with Fortran? Years ago I added code that gives the vertical interpolation scheme in the real program a stand-alone testing capability. I have been tinkering with it today to get it back into shape, and it is almost ready to go. The idea would be for you to input a column of data into this utility and look at the output.

Is this worthwhile for us to pursue, or do we try another tack?

Also, just for grins, in the same namelist record where you set the zap_close_levels value, set the vertical interpolation to linear. Keep zap_close_levels really small as well, maybe 0.1.
Code:
&domains
 lagrange_order   = 1,
 zap_close_levels = 0.1,
/

The next step will be to have you send the metgrid data to us (or tell us how to access it from an ftp or web site). This is an initial-condition problem, and those are much easier to fix than model troubles.
 
Hi Dave:
I think that I found the problem, but I do not know how to solve it. I had a look at the variable GHT in the met_em.* files. In mountain areas there is a big problem with the lowest two levels, as the second level often lies well below the first. With ERA5 forcing data I was once told to use only 4-point interpolators for GHT in metgrid.exe, and that worked. However, the same procedure fails with the NCEP FNL analyses. Please have a look at the plot of the difference between level #2 and level #1, especially in the areas with significant topographic slopes.
[attached figure: Lowest2Levels.jpg, GHT level #2 minus level #1]
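For reference, here is a quick way to quantify this (a minimal Python/netCDF4 sketch; the file name is just an example from this case, and GHT is assumed to have the usual (Time, num_metgrid_levels, south_north, west_east) layout):
Code:
# Count columns where the second GHT level lies below the first.
import numpy as np
from netCDF4 import Dataset

nc   = Dataset("met_em.d01.2019-09-11_00:00:00.nc")
ght  = nc.variables["GHT"][0]   # (levels, south_north, west_east)
diff = ght[1] - ght[0]          # level #2 minus level #1
bad  = diff < 0.0               # inversion: level 2 below level 1
print(f"{bad.sum()} of {bad.size} columns have GHT(2) < GHT(1)")
print(f"largest dip: {diff.min():.1f} m")
nc.close()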
 
Well, apparently real.exe somehow fixes the inconsistent lower levels in met_em.*, as at least in wrfinput the pressure decreases and the geopotential height increases through the lowest levels. Moreover, wrf.exe now runs successfully (zap_close_levels = 0.1, lagrange_order = 1). However, I wonder if some subtle problems are transferred from met_em.* to wrfinput or wrfbdy, as several vert_cfl warnings appear around levels 63 or 64 (out of 75 levels) during the first 7 to 18 hours, with vertical velocities of about 20 m/s. Or it could just be a spin-up issue.

The strange fact is that the warnings appear only for d01, not for d02, and only over a few fixed coordinates located within complex topography. When I use no upper damping the problem extends from about level 64 to level 70. At lower levels the vertical velocity is also above typical values, but it is not large enough to generate warnings. As I would like to study gravity waves, I cannot use w_damping = 1. What other variables should I verify in wrfinput or wrfbdy to ensure that these files are adequate? I would appreciate any suggestion for obtaining reasonable vertical velocities in the simulation.
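For what it is worth, a minimal sketch of the basic sanity check described above (monotonic pressure and height in every wrfinput column; assumes Python with netCDF4/numpy, and that PH/PHB sit on the staggered vertical grid as usual):
Code:
# Verify that total pressure decreases and geopotential height increases
# monotonically with model level at every grid point of wrfinput_d01.
import numpy as np
from netCDF4 import Dataset

nc = Dataset("wrfinput_d01")
p  = nc.variables["P"][0] + nc.variables["PB"][0]             # Pa, mass levels
z  = (nc.variables["PH"][0] + nc.variables["PHB"][0]) / 9.81  # m, staggered levels
print("pressure monotonically decreasing:", bool(np.all(np.diff(p, axis=0) < 0.0)))
print("height monotonically increasing:  ", bool(np.all(np.diff(z, axis=0) > 0.0)))
nc.close()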
 