
cu_physics=3

Giamik

New member
I am running WRF v4.5 in a two-domain (one nested) configuration using GFS data. The domains cover the Black and Caspian Seas (the outer one at 9 km resolution) and Georgia (the nested one at 3 km resolution).
According to the user guide recommendation (WRF Physics — WRF Users Guide documentation):
"Domains with grid spacing >=3km to <=10km: This is a “gray zone” where cumulus parameterization may or may not be necessary. If possible, try to avoid domains this size, but if it is unavoidable, it is best to use either the Multi-scale Kain Fritsch or Grell-Freitas scheme, as these take this scale into account."
When I use the Grell-Freitas scheme (cu_physics=3), real.exe runs successfully but wrf.exe stops at the very beginning with the error:
------------------------------------------------------------------------------------------------------------------
d02 2023-07-21_00:00:12 calling inc/PERIOD_BDY_EM_MOIST_OLD_inline.inc
d02 2023-07-21_00:00:12 calling inc/HALO_EM_A_inline.inc
d02 2023-07-21_00:00:12 calling inc/PERIOD_BDY_EM_A_inline.inc
d02 2023-07-21_00:00:12 calling inc/HALO_EM_PHYS_A_inline.inc
d02 2023-07-21_00:00:12 Top of Radiation Driver
d02 2023-07-21_00:00:12 calling inc/HALO_PWP_inline.inc
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
wrf.exe 000000000386115A for__signal_handl Unknown Unknown
------------------------------------------------------------------------------------------------------------------
When I use Multi-scale Kain-Fritsch (cu_physics=11), real.exe does not run at all!
But with cu_physics=1 and cu_physics=5 the model runs flawlessly!
Here are my namelist files.

Can anyone help me?
 

Attachments

  • d01_Sfcmap20230721_03_GrADS.png
    381.6 KB · Views: 10
  • d02_Sfcmap20230721_03_GrADS.png
    273.6 KB · Views: 10
  • namelist.input
    3.6 KB · Views: 14
  • namelist.wps
    996 bytes · Views: 2
Good afternoon @Giamik

I'm not very familiar with the physics parameterizations but I can at least show you what I do with my double nest 9km/3km over Texas.

cu_physics = 11, 0,

The Multi-scale Kain-Fritsch scheme is used for the 9 km grey-zone domain, and 0 for the 3 km domain, since convection is explicitly resolved there.
 
Many thanks, Whatheway,

Please show me your namelist.input.

Best regards
George
@Giamik

I can't show all of it but I can show part of it.

Code:
 &physics
 physics_suite                       = 'CONUS'
 mp_physics                          = 28,   28,   
 use_aero_icbc                       = .F.
 aer_opt                             = 0
 ra_lw_physics                       = 4,     4, 
 ra_sw_physics                       = 4,     4,
 levsiz                              = 59
 paerlev                             = 29
 cam_abs_dim1                        = 4
 cam_abs_dim2                        = 45
 radt                                = 10,    10,
 sf_sfclay_physics                   = 4,     4,
 sf_surface_physics                  = 4,     4,
 sf_urban_physics                    = 0,     0,
 bl_pbl_physics                      = 4,     4,
 bldt                                = 0,     0,
 cu_physics                          = 11,    0,
 cudt                                = 5,     
 isfflx                              = 1,
 ifsnow                              = 1,
 icloud                              = 1,
 surface_input_source                = 1,
 num_soil_layers                     = 4,
 num_land_cat                        = 21,
 usemonalb                           = .true.
 sst_update                          = 1,
 tmn_update                          = 1,
 lagday                              = 150,

/
 
Many thanks, Whatheway,
Can you show me your &dynamics and &noah_mp sections?
With my namelist, it doesn't work!
&physics
mp_physics = 28, 28,
ra_lw_physics = 4, 4,
ra_sw_physics = 4, 4,
radt = 9, 9,
sf_sfclay_physics = 4, 4,
sf_surface_physics = 4, 4,
bl_pbl_physics = 4, 4,
bldt = 0, 0,
cu_physics = 11, 0,
cudt = 0, 0,
num_soil_layers = 4,
prec_acc_dt = 60, 60,
usemonalb = .true.,
isfflx = 1,
ifsnow = 0,
icloud = 1,
sf_urban_physics = 0, 0,
/
Thanks in advance
 
Code:
 &noah_mp
 dveg                               = 4,     
 opt_crs                            = 1, 
 opt_sfc                            = 1
 opt_btr                            = 1,
 opt_run                            = 3,
 opt_frz                            = 1,
 opt_inf                            = 1,
 opt_rad                            = 3,
 opt_alb                            = 2,
 opt_snf                            = 4,
 opt_tbot                           = 1,
 opt_stc                            = 3,
 opt_gla                            = 1,
 opt_rsf                            = 1,
/

 &fdda
 /

 &dynamics
 w_damping                           = 0,
 diff_opt                            = 2,
 mix_full_fields                     = .true.,
 tke_drag_coefficient                = 0.,       
 tke_heat_flux                       = 0.,       
 km_opt                              = 4,         
 diff_6th_opt                        = 0,       
 diff_6th_factor                     = 0.12,     
 base_temp                           = 290.
 damp_opt                            = 3,
 zdamp                               = 5000., 5000., 
 dampcoef                            = 0.2,     
 khdif                               = 0,         
 kvdif                               = 0,           
 epssm                               = 0.5, 0.5,     
 non_hydrostatic                     = .true., .true.,
 moist_adv_opt                       = 1,  1,         
 scalar_adv_opt                      = 1,  1,
 use_theta_m                         = 1,                             
 hybrid_opt                          = 2,                             
 gwd_opt                             = 1,                                         
/
 
Your namelist.input looks fine and I didn't see anything wrong. However, there are two issues I am concerned about:
(1) You run this case with an adaptive time step, which can sometimes cause problems during the integration. Can you turn off this option?
(2) You have 75 vertical levels with p_top at 50 hPa, indicating that the vertical resolution is quite high. Can you reduce the number of vertical levels to a smaller value, e.g., 51 or 46, then try again?
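For reference, a minimal sketch of what those two changes could look like in the &domains section; the time_step value is only a placeholder following the usual ~6 x dx(km) rule of thumb for a 9 km outer domain, and 51 levels is just one of the suggested counts:
Code:
 &domains
 use_adaptive_time_step              = .false.,   ! (1) turn off the adaptive time step
 time_step                           = 54,        ! fixed step; placeholder of roughly 6 x dx(km) for dx = 9 km
 e_vert                              = 51, 51,    ! (2) fewer vertical levels than the original 75
 /
Note that after changing e_vert you need to rerun real.exe so that the wrfinput/wrfbdy files match the new set of levels.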
 
Many thanks, Ming,
When I use the Multi-scale Kain-Fritsch scheme (cu_physics=11), real.exe stops at the very beginning, without creating any wrfbdy_??? or wrfinput_d0? (boundary and initial) files! Do you think this happens because of the adaptive time step? Or does this comment concern the Grell-Freitas scheme?

Thanks in advance
 
Dear Ming,
I removed the adaptive time step from the namelist and tried various time steps (30, 20, 10) and vertical levels (60, 50, 46) with the Grell-Freitas scheme (cu_physics=3). real.exe executes flawlessly, but wrf.exe stops at the very beginning with this output:
----------------------------------------------------------------------------------------------------------------------------
d02 2023-07-25_00:00:00 calling inc/HALO_EM_MOIST_E_5_inline.inc
d02 2023-07-25_00:00:00 calling inc/HALO_EM_SCALAR_E_5_inline.inc
Timing for main: time 2023-07-25_00:00:03 on domain 2: 1.37573 elapsed seconds
d02 2023-07-25_00:00:03+01/03 calling inc/HALO_EM_MOIST_OLD_E_7_inline.inc
d02 2023-07-25_00:00:03+01/03 calling inc/PERIOD_BDY_EM_MOIST_OLD_inline.inc
d02 2023-07-25_00:00:03+01/03 calling inc/HALO_EM_A_inline.inc
d02 2023-07-25_00:00:03+01/03 calling inc/PERIOD_BDY_EM_A_inline.inc
d02 2023-07-25_00:00:03+01/03 calling inc/HALO_EM_PHYS_A_inline.inc
d02 2023-07-25_00:00:03+01/03 Top of Radiation Driver
d02 2023-07-25_00:00:03+01/03 calling inc/HALO_PWP_inline.inc
-----------------------------------------------------------------------------------------------------------------------------
 
This kind of error message cannot provide any helpful information for us to figure out what is wrong. However, if the model crashes immediately with a segmentation fault, it often indicates that either the input data is wrong, or there is not sufficient memory for running the case. Since you can run the same case with other options, I suppose the input data is correct. Please try with a larger number of processors (which will give you more aggregate memory) and hopefully this case will work.

If it still fails, I would suggest that you recompile WRF in debug mode, i.e., ./configure -D, then rerun this case with debug_level = 0.
The rsl files will contain information about where the model crashed first. This will give us some hints as to why the model crashed.
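For what it's worth, debug_level is set in the &time_control section; a minimal fragment, assuming nothing else in that section changes:
Code:
 &time_control
 debug_level                         = 0,   ! keep the rsl files readable; the debug-mode build supplies the traceback
 /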
 
Hi Ming,
When I run the model I set the "ulimit" value to unlimited ($ ulimit -s unlimited).
Then I run the model:
$ mpirun -n 4 ./wrf.exe (I tried 10 CPUs, but the results are the same)
Memory usage is not more than 60% of the total RAM (16 GB),
but the output is:
starting wrf task 0 of 4
starting wrf task 1 of 4
starting wrf task 2 of 4
starting wrf task 3 of 4

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 7025 RUNNING AT server
= KILLED BY SIGNAL: 9 (Killed)
===================================================================================

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 2 PID 7027 RUNNING AT server
= KILLED BY SIGNAL: 9 (Killed)
===================================================================================
I have uploaded the namelist and geographical files for my domains, but I have 4 more met_em NetCDF files:

24989890 met_em.d01.2023-07-26_00:00:00.nc
24992081 met_em.d01.2023-07-26_03:00:00.nc
11644334 met_em.d02.2023-07-26_00:00:00.nc
11704647 met_em.d02.2023-07-26_03:00:00.nc
But I couldn't upload them due to their large size. (If you give me the right to upload such large files, I'll upload them as well!)
These data are correct, because if one changes cu_physics from 3 to 5 the model completes successfully!
 

Attachments

  • geo_em_d02.tar.bz2
    2.6 MB · Views: 0
  • geo_em_d01.tar.bz2
    5.7 MB · Views: 0
  • namelist.input
    3.6 KB · Views: 5
But even that doesn't help! I tried time_step = 30 and 20 as well!

The files you uploaded could not be used.

One, because of a netCDF mismatch. Two -- I don't know why your met_em files have different sizes for different times for the same domain. I have never seen that.

I adjusted my namelist files to match your domain and configuration.
The attached namelists work -- the real.exe rsl file is attached.
I ran wrf for a few minutes only and it seems to run okay.

Hope this helps. All the best.
 

Attachments

  • namelist.input
    6.5 KB · Views: 1
  • namelist.wps
    939 bytes · Views: 1
  • rsl.realem.error.0000
    28.6 KB · Views: 1
  • rsl.wrf.error.0000
    30.7 KB · Views: 0
Would you please change the options below from:

cu_rad_feedback = .true., .false.,

cu_diag = 1, 0,

to

cu_rad_feedback = .false., .false.,

cu_diag = 0, 0,

Then try again, and please let me know whether it works.
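In the &physics section, those changes would look like the fragment below (only the two options being changed are shown; everything else stays as in the attached namelist):
Code:
 &physics
 cu_rad_feedback                     = .false., .false.,   ! was .true., .false.
 cu_diag                             = 0,       0,         ! was 1, 0
 /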
 