WRFv3.9.1.1 Nesting I/O Problem

This post is from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled. If you have follow-up questions related to this post, please start a new thread from the forum home page.

twglotfe

New member
Hello,

I'm having trouble running a triple-nested simulation (e.g., 36 km, 12 km, and 4 km) using WRFv3.9.1.1. The model initializes and runs successfully up until the first output time, then crashes with a segmentation fault while trying to write out the wrfout_d03 file. The wrfout_d03 file is created but contains no data. If I disable the innermost 4 km domain, the simulation runs successfully. I have also tried changing the size and position of the 4 km domain, but the error persists.
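
For reference, the nest layout in my namelist follows the usual 3:1 parent/child pattern, roughly like the sketch below (the values here are only illustrative of a 36/12/4 km setup; the actual settings are in the attached namelist.input):

 &domains
  max_dom                = 3,
  dx                     = 36000, 12000, 4000,
  dy                     = 36000, 12000, 4000,
  grid_id                = 1,     2,     3,
  parent_id              = 1,     1,     2,
  parent_grid_ratio      = 1,     3,     3,
  parent_time_step_ratio = 1,     3,     3,
 /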

My code is compiled with the Intel (dmpar) option 15 and additional optimization flags for the Knights Landing and Skylake nodes on the Stampede2 cluster. My namelist and configure.wrf files are attached. The code is compiled with netCDF large-file support on by default, and my csh run script has the "limit stacksize unlimited" line.

The segmentation fault error in the rsl.error files is as follows:

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
wrf.exe 0000000005DA0B2D Unknown Unknown Unknown
libpthread-2.17.s 00002ADBC9A515D0 Unknown Unknown Unknown
wrf.exe 0000000005E7DA10 Unknown Unknown Unknown
wrf.exe 000000000078E97E wrf_patch_to_glob 6703 module_dm.f90
wrf.exe 0000000001ABBBB1 collect_generic_a 21853 module_io.f90
wrf.exe 0000000001AB99A1 collect_real_and_ 21550 module_io.f90
wrf.exe 0000000001AB82DE collect_fld_and_c 21472 module_io.f90
wrf.exe 0000000001AB789D wrf_write_field1_ 21262 module_io.f90
wrf.exe 0000000001AB7475 wrf_write_field_ 21059 module_io.f90
wrf.exe 0000000001F49ADF wrf_ext_write_fie 159 wrf_ext_write_field.f90
wrf.exe 0000000001781B47 output_wrf_ 1241 output_wrf.f90
wrf.exe 000000000163A018 module_io_domain_ 53 module_io_domain.f90
wrf.exe 00000000018ECA1C open_hist_w_ 2081 mediation_integrate.f90
wrf.exe 00000000018EBD91 med_hist_out_ 892 mediation_integrate.f90
wrf.exe 00000000018E4E16 med_before_solve_ 67 mediation_integrate.f90
wrf.exe 000000000054AF0B module_integrate_ 317 module_integrate.f90
wrf.exe 000000000054B5B6 module_integrate_ 363 module_integrate.f90
wrf.exe 000000000054B5B6 module_integrate_ 363 module_integrate.f90
wrf.exe 000000000040DE34 module_wrf_top_mp 322 module_wrf_top.f90
wrf.exe 000000000040DDE4 MAIN__ 28 wrf.f90
wrf.exe 000000000040DD7E Unknown Unknown Unknown
libc-2.17.so 00002ADBC9F823D5 __libc_start_main Unknown Unknown
wrf.exe 000000000040DC69 Unknown Unknown Unknown

Any help with this issue would be greatly appreciated.
 

Attachments

  • configure.wrf.txt (23.2 KB)
  • namelist.input.txt (5.6 KB)

Your namelist indicates that you are running with the CLM LSM, which requires 10 soil layers. However, num_soil_layers = 4 in your namelist.input. Please change this option and try again.
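
For reference, a minimal sketch of the relevant &physics entries with CLM selected (sf_surface_physics = 5 is the CLM option; the per-domain columns shown are illustrative and should match your three domains rather than being copied verbatim):

 &physics
  sf_surface_physics = 5,  5,  5,
  num_soil_layers    = 10,
 /

The other &physics entries in your attached namelist.input can stay as they are; only num_soil_layers needs to change from 4 to 10.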
 
By the way, can you look at your wrfout files for D01 and D02 from the failed case? I am curious how the model could run at all with the wrong number of soil layers. I would expect this to lead to a memory problem and the model to stop immediately, but apparently it didn't.
 