WRFv4.3.2 - Fail at CALL rrtmg_lw

This post is from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled; if you have follow-up questions related to this post, please start a new thread from the forum home page.

ggraywx

Greetings All-

I am attempting to initialize WRF with ERA5 data. I have been successful before with a different case study, but I am having some trouble with this one. I can't seem to get wrf.exe to get past the RRTMG_LW call.

Troubleshooting attempts:
1) Changed the timestep from 45 down to 30, and then to 20.
2) Increased e_vert from 40 to 70.
3) Changed radiation from RRTMG (4) to RRTM/Dudhia (1) <- this changes the error message, but the run still fails in the longwave calculation.
4) Changed cu_physics from Tiedtke (6) to MSKF (11).
5) Tested with WRFv4.2.1; it failed at the same CALL rrtmg_lw.
6) My metgrids look fine to me. I checked soil temperature, soil moisture, pressure, heights, and land use (roughly along the lines of the sketch below). I'll attach them in case someone else has an idea.
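
For reference, this is roughly how I scanned the met_em fields: flag non-finite or implausible values in a few key variables. A minimal sketch in Python using netCDF4; the file pattern, variable names, and plausibility ranges are my assumptions and may need adjusting for your setup.

Code:
# Scan met_em files for non-finite or out-of-range values in key fields.
# A sketch, not a definitive check: adjust the pattern, names, and ranges.
import glob
import numpy as np
from netCDF4 import Dataset

CHECKS = {                           # rough plausible ranges
    "PRES":     (1000.0, 110000.0),  # Pa
    "TT":       (150.0, 350.0),      # K
    "SKINTEMP": (200.0, 340.0),      # K
    "SM000010": (0.0, 1.0),          # m3 m-3
    "ST000010": (200.0, 340.0),      # K
}

for path in sorted(glob.glob("met_em.d0*.nc")):
    with Dataset(path) as nc:
        for name, (lo, hi) in CHECKS.items():
            if name not in nc.variables:
                continue
            # Replace masked points with NaN so they show up as non-finite.
            data = np.ma.filled(nc.variables[name][:], np.nan).astype(float)
            n_bad = int(np.sum(~np.isfinite(data)))
            n_oob = int(np.sum((data < lo) | (data > hi)))
            if n_bad or n_oob:
                print(f"{path}: {name}: {n_bad} non-finite, "
                      f"{n_oob} outside [{lo}, {hi}]")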

Error from WRFv4.2.1
Code:
calling inc/HALO_EM_PHYS_A_inline.inc
 call phy_prep
 DEBUG wrf_timetoa():  returning with str = [2016-07-30_06:00:00]
 call radiation_driver
Top of Radiation Driver
CALL cldfra1
use kf cldfra
CALL rrtmg_lw

Error from WRFv4.3.2
Code:
d01 2016-07-30_06:00:00 mminlu = 'MODIFIED_IGBP_MODIS_NOAH'
Timing for processing lateral boundary for domain        1:    0.65598 elapsed seconds
d01 2016-07-30_06:00:00 module_integrate: calling solve interface
 Tile Strategy is not specified. Assuming 1D-Y
WRF TILE   1 IS      1 IE     40 JS      1 JE     16
WRF NUMBER OF TILES =   1
d01 2016-07-30_06:00:00 calling inc/HALO_EM_MOIST_OLD_E_7_inline.inc
d01 2016-07-30_06:00:00 calling inc/PERIOD_BDY_EM_MOIST_OLD_inline.inc
d01 2016-07-30_06:00:00 calling inc/HALO_EM_A_inline.inc
d01 2016-07-30_06:00:00 calling inc/PERIOD_BDY_EM_A_inline.inc
d01 2016-07-30_06:00:00 calling inc/HALO_EM_PHYS_A_inline.inc
d01 2016-07-30_06:00:00 Top of Radiation Driver
d01 2016-07-30_06:00:00 CALL cldfra1
d01 2016-07-30_06:00:00 use kf cldfra
d01 2016-07-30_06:00:00 CALL rrtmg_lw

If anyone has any ideas on the origin of this error, I would be very grateful. Thank you!
Best,
Geneva
PS: I can't seem to get Nextcloud to work for me, so here is a Google Drive link to the met_em files. I hope that is allowed.
ggray_met_em.tar.gz
 

Attachments

  • rsl.out.0000.txt (18.2 KB)
  • namelist.input.txt (5.2 KB)
Ming Chen

Hi,
Your namelist.input looks fine. I have a few questions:
(1) Where did you download the ERA5 data? Are the data on pressure levels or model levels?
(2) If the case fails immediately, that often indicates a memory issue or a problem with the input data. Since this is a triply-nested case, can you try running over a single domain? That will tell us whether memory is insufficient, since d03 has a large number of grid points.
 
ggraywx

Thanks for the response, Ming Chen!

1) The ERA5 data are model-level (and surface) data queried through Copernicus. I'm attempting to initialize using the first ensemble member (not the reanalysis); I ultimately want to replicate this for all ERA5 ensemble members (10 total). I've run ungrib.exe and calc_ecmwf_p.exe, and that is where my debugging is currently focused, though I may be off base since the failure occurs at the wrf.exe step and not earlier (a quick check is sketched after this list).

2) I just tried running with only the outer domain. It failed at the same point in the process.
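
On point 1, one way to sanity-check what calc_ecmwf_p.exe produced is to confirm that the 3-D pressure in the met_em files decreases monotonically with level index. A minimal sketch, assuming netCDF4, a PRES field dimensioned (Time, num_metgrid_levels, south_north, west_east), and levels ordered surface-first; flip the sign test if yours are ordered top-down.

Code:
import glob
import numpy as np
from netCDF4 import Dataset

for path in sorted(glob.glob("met_em.d01.*.nc")):
    with Dataset(path) as nc:
        pres = np.ma.filled(nc.variables["PRES"][:], np.nan).astype(float)
        dp = np.diff(pres, axis=1)     # difference along the level axis
        n_inv = int(np.sum(dp > 0.0))  # points where p increases with level
        print(f"{path}: {n_inv} pressure inversions (expect 0)")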

While playing with different namelist switches (cu_physics = Tiedtke, ra_lw_physics/ra_sw_physics = RRTMG), I did get it past CALL rrtmg_lw, but not much further. Maybe this can offer more insight.

Code:
d01 2016-07-30_06:00:00 calling inc/HALO_EM_MOIST_OLD_E_7_inline.inc
d01 2016-07-30_06:00:00 calling inc/PERIOD_BDY_EM_MOIST_OLD_inline.inc
d01 2016-07-30_06:00:00  call rk_step_prep
d01 2016-07-30_06:00:00 calling inc/HALO_EM_A_inline.inc
d01 2016-07-30_06:00:00 calling inc/PERIOD_BDY_EM_A_inline.inc
d01 2016-07-30_06:00:00  call rk_phys_bc_dry_1
d01 2016-07-30_06:00:00  call init_zero_tendency
d01 2016-07-30_06:00:00 calling inc/HALO_EM_PHYS_A_inline.inc
d01 2016-07-30_06:00:00  call phy_prep
d01 2016-07-30_06:00:00  DEBUG wrf_timetoa():  returning with str = [2016-07-30_06:00:00]
d01 2016-07-30_06:00:00  call radiation_driver
d01 2016-07-30_06:00:00 Top of Radiation Driver
d01 2016-07-30_06:00:00 CALL cldfra1
d01 2016-07-30_06:00:00 CALL rrtmg_lw
d01 2016-07-30_06:00:00 CALL rrtmg_sw
d01 2016-07-30_06:00:00 calling inc/HALO_PWP_inline.inc

Thanks for the help!
 
Ming Chen

If the case still fails when running over a single domain, then I guess this is not a memory issue; it is more likely that something is wrong with your input data.
NCAR provides ERA5 data on pressure levels (please see https://rda.ucar.edu/datasets/ds630.0/). We know that this dataset works fine for driving WRF runs, so you could probably try it first. I understand that it is different from what you are using, but comparing the input data in the met_em and/or wrfinput files can at least give you some hints as to whether something is wrong with the ensemble ERA5. A comparison sketch is below.
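
For the comparison, something along these lines can work. A minimal sketch, assuming netCDF4; the two file paths are placeholders for a pressure-level met_em and an ensemble model-level met_em at the same valid time, and fields with mismatched level counts are simply skipped.

Code:
import numpy as np
from netCDF4 import Dataset

F_A = "met_em_rda/met_em.d01.2016-07-30_06:00:00.nc"  # pressure-level run
F_B = "met_em_ens/met_em.d01.2016-07-30_06:00:00.nc"  # model-level run

with Dataset(F_A) as a, Dataset(F_B) as b:
    for name in sorted(set(a.variables) & set(b.variables)):
        va, vb = a.variables[name], b.variables[name]
        if va.dtype.kind not in "fi" or va.shape != vb.shape:
            continue  # skip character fields and mismatched level counts
        da = np.ma.filled(va[:], np.nan).astype(float)
        db = np.ma.filled(vb[:], np.nan).astype(float)
        print(f"{name:12s} max |diff| = {np.nanmax(np.abs(da - db)):.4g}")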
 
ggraywx

Thank you for your advice, Ming Chen.

WRFv4.3.2 is currently running using the pressure-level ERA5. While I wanted the enhanced vertical resolution (138 levels!) of the model-level ERA5, the pressure-level data are fine for now.

I'll report back if I find any issues when comparing the met_em files produced by the two different datasets, in case that helps future users.
 
Ming Chen

Hi, Geneva,
Thanks for the kind message. Please keep me updated on your progress using the ERA5 ensemble data on model levels.
 