
Time step issues in simulation of 3 nested domains

Afernandez

New member
Hi,

I'm trying to run a 6-hour WRF simulation, driven by MPI-ESM1-2-HR data, using 3 nested domains with hourly output. The model crashes when running the innermost domain (d03). When I check the rsl* files, they show a series of CFL errors apparently related to the vertical velocity. The topography of the region is complex. I have tried reducing the time step to as low as 20 s, to no avail. I cannot make my model top higher than 5000 because that is the top level of the original input GCM data.

I would really appreciate some advice on how to handle this, since my aim is to run longer simulations using a larger subset of these data.
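For reference, the namelist settings most often suggested for vertical-velocity CFL errors over steep terrain look something like the fragment below. The values are illustrative, not tuned to this case, and would need checking against the actual domain setup:

```
&domains
 time_step      = 20,                    ! s; rule of thumb is <= 6*dx (dx in km), less over steep terrain
 smooth_cg_topo = .true.,                ! smooth coarse-grid topography toward the lateral boundaries
/

&dynamics
 w_damping      = 1,                     ! cap runaway vertical velocities
 epssm          = 0.5,   0.5,   0.5,     ! stronger off-centering of vertical sound waves, per domain
 damp_opt       = 3,                     ! Rayleigh damping on w near the model top
 zdamp          = 5000., 5000., 5000.,   ! depth (m) of the damping layer below the model top
 dampcoef       = 0.2,   0.2,   0.2,     ! damping coefficient, per domain
/
```

Increasing the number of vertical levels near complex terrain, or using adaptive time stepping only after the run is stable, are other common suggestions.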
 

Attachments

  • namelist.input (6.8 KB)
  • rsl.error.0000 (1.9 MB)
Code:
&dynamics
 w_damping                           = 0,                        ! vertical-velocity damping option (1 = on)
 diff_opt                            = 2,                        ! diffusion option
 mix_full_fields                     = .true.,                   ! mix full fields option
 tke_drag_coefficient                = 0.,     0.,               ! TKE drag coefficient
 tke_heat_flux                       = 0.,     0.,               ! TKE heat flux
 km_opt                              = 4,      4,                ! eddy coefficient option (4 = horizontal Smagorinsky)
 diff_6th_opt                        = 0,      0,                ! 6th-order diffusion option for each domain
 diff_6th_factor                     = 0.12,   0.12,             ! 6th-order diffusion factor for each domain
 base_temp                           = 290.,                     ! base-state temperature (K)
 damp_opt                            = 3,                        ! upper-level damping option (3 = Rayleigh damping on w)
 zdamp                               = 5000.,  5000.,            ! depth (m) of the damping layer below the model top
 dampcoef                            = 0.2,    0.2,              ! damping coefficient for each domain
 khdif                               = 0,      0,                ! horizontal diffusion coefficient for each domain
 kvdif                               = 0,      0,                ! vertical diffusion coefficient for each domain
 non_hydrostatic                     = .true., .true.,           ! non-hydrostatic option for each domain
 epssm                               = 0.9,    0.9,              ! time off-centering for vertical sound waves
 moist_adv_opt                       = 1,      1,                ! moisture advection option for each domain
 scalar_adv_opt                      = 1,      1,                ! scalar advection option for each domain
 use_theta_m                         = 1,                        ! use moist theta option
 hybrid_opt                          = 2,                        ! hybrid sigma-pressure vertical coordinate option
 gwd_opt                             = 1,                        ! gravity wave drag option
/

Here are the dynamics options from when I run WRF over Nepal. Maybe something there can help you, @Afernandez.
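One caveat if this block is copied for the 3-domain setup described above: the per-domain lists here carry only two values. To be explicit about what d03 uses, each per-domain option would need a third entry, for example (values repeated for illustration only):

```
 epssm    = 0.9,   0.9,   0.9,     ! one value per domain (d01, d02, d03)
 zdamp    = 5000., 5000., 5000.,
 dampcoef = 0.2,   0.2,   0.2,
```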
 
Hi William.Hatheway, thanks a lot for this. It worked great in my test case.

Do you know, by any chance, what the "Caught signal 11 (Segmentation fault...)" may mean when the model is in "HALO_CUP_G3_IN_inline.inc / mskf_cps_mp"? I created a new set of wrfinput files from CMIP6 model data with nearly the same characteristics as my test case. The biggest change is that I'm using 14 instead of 15 metgrid levels, due to restrictions in my wind data (I had to interpolate between geopotential levels to match the levels of the other 3-D variables). real.exe runs fine, but wrf.exe stops during the first time step. I checked my met_em, wrfbdy, and wrfinput files and they all look OK, but 19 of my rsl files show the following in their last lines, after "mskf_cps_mp":


d01 2005-01-01_06:00:00 call cumulus_driver
d01 2005-01-01_06:00:00 calling inc/HALO_CUP_G3_IN_inline.inc
d01 2005-01-01_06:00:00 in mskf_cps_mp
[f0524:155901:0:155901] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xfffffffe06a54480)
==== backtrace (tid: 155901) ====
0 /lib64/libucs.so.0(ucs_handle_error+0x2dc) [0x14f905665e4c]
1 /lib64/libucs.so.0(+0x2c02c) [0x14f90566602c]
2 /lib64/libucs.so.0(+0x2c1fa) [0x14f9056661fa]
3 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x2f74384]
4 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x2f4d98a]
5 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x286471f]
6 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x1fa5d10]
7 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x1736c5b]
8 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x1512b28]
9 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x5b6b61]
10 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x4145a1]
11 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x414554]
12 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x4144e2]
13 /lib64/libc.so.6(__libc_start_main+0xe5) [0x14fa934a2d85]
14 /home/titan/gwgk/gwgk101h/WRF_MODEL/WRF/test/em_real/wrf.exe() [0x4143ee]
=================================
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
libpnetcdf.so.4.0 000014FA97505359 for__signal_handl Unknown Unknown
libpthread-2.28.s 000014FA93A43CF0 Unknown Unknown Unknown
wrf.exe 0000000002F74384 Unknown Unknown Unknown
wrf.exe 0000000002F4D98A Unknown Unknown Unknown
wrf.exe 000000000286471F Unknown Unknown Unknown
wrf.exe 0000000001FA5D10 Unknown Unknown Unknown
wrf.exe 0000000001736C5B Unknown Unknown Unknown
wrf.exe 0000000001512B28 Unknown Unknown Unknown
wrf.exe 00000000005B6B61 Unknown Unknown Unknown
wrf.exe 00000000004145A1 Unknown Unknown Unknown
wrf.exe 0000000000414554 Unknown Unknown Unknown
wrf.exe 00000000004144E2 Unknown Unknown Unknown
libc-2.28.so 000014FA934A2D85 __libc_start_main Unknown Unknown
wrf.exe 00000000004143EE Unknown Unknown Unknown
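For anyone triaging a crash like this, it can help to see how many MPI ranks hit the fault and what each was doing when it died. A generic sketch, run from the em_real run directory (`scan_rsl` is just an illustrative helper name; `rsl.error.*` is WRF's standard per-rank log pattern):

```shell
# scan_rsl: list WRF rsl logs that recorded a segfault and show the
# last few lines of each, which usually name the failing routine.
scan_rsl() {
  for f in rsl.error.*; do
    [ -e "$f" ] || continue                 # no logs in this directory
    if grep -q "Caught signal 11" "$f"; then
      echo "== $f =="
      tail -n 5 "$f"
    fi
  done
}
scan_rsl
```

If every rank shows the same routine (here mskf_cps_mp, the MSKF cumulus scheme), that points at the physics option or its inputs rather than at one bad process.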
 
Good afternoon @Afernandez,
Since this is a new issue, can you post it in a new topic? NCAR likes to keep different issues separated.
 