WRF crashes with ACM2 PBL scheme

This post is from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled; if you have follow-up questions related to this post, please start a new thread from the forum home page.

davidwrf

Member
Hi,

I'm using V4.2.2.
I'm trying for the first time to use the ACM2 PBL scheme, and the simulation crashes within the first hour or so.
It ran fine with the YSU and BouLac PBL schemes.
Even when increasing the debug value (to 500 or more), I cannot find any complaint or explanation for the crash.
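
(For reference, the debug value I'm referring to is debug_level in &time_control; a minimal excerpt:)
Code:
&time_control
 debug_level = 500,
/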

The physics section of namelist.input reads:
Code:
&physics
mp_physics = 6, 6, 6,
ra_lw_physics = 1, 1, 1,
ra_sw_physics = 1, 1, 1,
radt = 4.5, 1.5, 0.5,
sf_sfclay_physics = 1, 1, 1,
sf_surface_physics = 2, 2, 2,
bl_pbl_physics = 7, 7, 7,
bldt = 0, 0, 0,
cu_physics = 0, 0, 0,
cudt = 0, 0, 0,
isfflx = 1,
icloud = 1,
surface_input_source = 1,
num_soil_layers = 4,
sf_urban_physics = 0, 0, 0,
num_land_cat = 20,
usemonalb = .false.,
rdlai2d = .false.,
slope_rad = 0, 1, 1,
topo_shading = 0, 1, 1,
shadlen = 25000

The last several lines of the debug output read:

Code:

module_integrate: calling solve interface
grid spacing, dt, time_step_sound= 500.000000 3.00000000 4
call rk_step_prep
calling inc/HALO_EM_A_inline.inc
calling inc/PERIOD_BDY_EM_A_inline.inc
call rk_phys_bc_dry_1
call init_zero_tendency
calling inc/HALO_EM_PHYS_A_inline.inc
call phy_prep
DEBUG wrf_timetoa(): returning with str = [2017-09-14_18:02:30]
calling inc/HALO_TOPOSHAD_inline.inc
call radiation_driver
Top of Radiation Driver
CALL cldfra1
CALL cldfra1
CALL cldfra1
CALL cldfra1
CALL cldfra1
CALL cldfra1
CALL cldfra1
CALL cldfra1
CALL rrtm
CAM-CLWRF interpolated values______ year: 2017 julian day: 256.751740
CAM-CLWRF co2vmr: 3.7900000461377203E-004 n2ovmr: 3.1900000863060995E-007 ch4vmr: 1.7739999975674436E-006
CALL swrad
calling inc/HALO_PWP_inline.inc
call surface_driver
in SFCLAY
in NOAH DRV
call pbl_driver
in ACM PBL

What might be the problem?

Thanks,
David
 
I wanted to add that in the terminal window where I'm running WRF, I see the following line when the simulation quits:

Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG IEEE_DIVIDE_BY_ZERO IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
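
(In case it helps to pinpoint where such exceptions originate: with a gfortran build, floating-point trapping can be enabled at compile time so the model aborts with a backtrace at the first invalid operation, e.g. by uncommenting/setting the FCDEBUG line in configure.wrf and recompiling. This is a general debugging sketch, not something I have tried here.)
Code:
FCDEBUG = -g -fbacktrace -ffpe-trap=invalid,zero,overflow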



Also, I tried running the simulation with a single nest and with two nests, and it seems to run fine for several hours. Only when the third nest is added does the simulation quit quickly.

David
 
Hi, David,
There are two issues I am concerned about:
(1) The ACM2 PBL scheme is recommended to be paired with the Pleim-Xiu (PX) surface layer and land surface schemes, i.e.,
Code:
sf_sfclay_physics = 7, 7, 7, 
sf_surface_physics = 7, 7, 7, 
bl_pbl_physics = 7, 7, 7,
Please try the above options with just a single-domain run, and let me know whether it works.

(2) When you run the triply-nested case, does it crash immediately? How big is your case (i.e., the grid dimensions)?
 
Thank you Ming Chen.

(1) I see that the problem with LSM option #7 (the PX LSM) is that it requires 'flag_imperv' and 'flag_canfra'.
Is that right?

(2) I am trying to run a three-nest case. It crashes after just a few minutes. The nests are not that big:
e_we = 178, 130, 121,
e_sn = 220, 127, 121,

When I tried running with only one or two nests (with the originally specified namelist.input options), it did run for several hours...

David
 
Ming Chen said:
Hi, David,
There are two issues I am concerned about:
(1) The ACM2 PBL scheme is recommended to be paired with the Pleim-Xiu (PX) surface layer and land surface schemes, i.e.,
Code:
sf_sfclay_physics = 7, 7, 7,
sf_surface_physics = 7, 7, 7,
bl_pbl_physics = 7, 7, 7,
Please try the above options with just a single-domain run, and let me know whether it works.

(2) When you run the triply-nested case, does it crash immediately? How big is your case (i.e., the grid dimensions)?

I understand that the PX surface scheme requires CANFRA and IMPERV as input, and that these are provided in the nlcd2011_can_ll_9s and nlcd2011_imp_ll_9s static data. However, this static data is available only for the US (not my region).
Does this mean that using the ACM2 PBL scheme with a surface scheme other than PX is currently not feasible?
At least, it didn't work with the options I tried:
Code:
sf_sfclay_physics = 1, 1, 1,
sf_surface_physics = 2, 2, 2,
bl_pbl_physics = 7, 7, 7,

David
 
Hi, David,
You are right that CANFRA and IMPERV are only available in the nlcd2011_can_ll_9s and nlcd2011_imp_ll_9s static data. However, note that this doesn't prevent applying the ACM2 scheme in other regions of the world: where CANFRA and IMPERV are unavailable outside CONUS, their values will be zero, so these two variables will have no impact on the simulation.
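
For reference, geogrid ingests these datasets through the IMPERV and CANFRA entries in GEOGRID.TBL. A sketch of what those entries typically look like (the field names and rel_path directories come from this thread; the exact interpolation and fill options are omitted here, so check the GEOGRID.TBL.ARW shipped with your WPS):
Code:
name = IMPERV
  priority = 1
  dest_type = continuous
  rel_path = default:nlcd2011_imp_ll_9s/

name = CANFRA
  priority = 1
  dest_type = continuous
  rel_path = default:nlcd2011_can_ll_9s/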
 
Thank you Ming Chen,

I did try the settings:
Code:
sf_sfclay_physics = 7, 7, 7,
sf_surface_physics = 7, 7, 7,
bl_pbl_physics = 7, 7, 7,

and I have also downloaded the nlcd2011_can_ll_9s and nlcd2011_imp_ll_9s static data.

But unfortunately, the run quits as well, with the same complaint:
Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG IEEE_UNDERFLOW_FLAG IEEE_DENORMAL

David
 
An update:

Reducing the number of vertical levels close to the ground seems to solve the issue.
The simulation completes successfully.

David
 
David,
Thanks for the update. Would you please send me the vertical levels you used in the failed case and the new levels that make the case work?
I would like to take a look and try to understand how the vertical settings would affect the model performance.
 
Sure.
The original eta levels I tried to use:
eta_levels = 1.000, 0.999, 0.9982, 0.9973, 0.9965, 0.995, 0.9935, 0.9920, 0.99,
0.987, 0.98299998, 0.976, 0.97000003, 0.962,
0.954, 0.944, 0.93400002, 0.922, 0.90899998,
0.895, 0.875, 0.84781724, 0.81563461,
0.78345191, 0.74126916, 0.69259715, 0.63746947, 0.58570772,
0.53714091, 0.49160549, 0.4489449, 0.40900937, 0.37165567,
0.33674687, 0.30415204, 0.27374613, 0.2454097, 0.21902865,
0.19449411, 0.17170216, 0.15055367, 0.13095412, 0.11281338,
0.09645368, 0.08171168, 0.06842741, 0.05645673, 0.04566974,
0.03594941, 0.02719025, 0.01929723, 0.01218469, 0.00577546, 0.000

(This setting worked with YSU, BouLac and MYNN2).


In the reduced setting, I omitted 4 levels close to the ground, so the first levels are:
eta_levels = 1.000, 0.9955, 0.9945, 0.9920, 0.99, ...
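
For rough context (assuming a surface pressure near 1000 hPa and a model top near 50 hPa, both of which are assumptions since I haven't listed p_top above): the hydrostatic estimate dz ≈ dη × (p_sfc − p_top) / (ρ g) puts the original lowest layer (dη = 0.001) at roughly 0.001 × 95000 / (1.2 × 9.81) ≈ 8 m deep, versus roughly 36 m (dη = 0.0045) in the reduced set. So the change mainly thickens the lowest few model layers.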

David
 