
wrf.exe crashes with segmentation fault

This post was from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled. If you have follow-up questions related to this post, please start a new thread from the forum home page.


When running wrf.exe, the process crashes immediately with a segmentation fault. I am trying to test a moving nest simulation with v4.0. I was able to run real.exe, but wrf.exe is crashing without much information. The rsl.out.0000 and rsl.error.0000 files are blank, so I have attached another rsl file which has some content.
Attached are my namelist.input and rsl.error.0031 files.


  • namelist.input (4.3 KB)
  • rsl.error.0031.rtf (1.5 KB)

I am unable to determine much from the rsl.error.0031.rtf file you sent, as it doesn't seem to be the actual rsl.error.0031 file. However, looking at your namelist, you have DX=10000 and time_step=180. Your time_step (in seconds) should be no more than about 6×DX (with DX in kilometers), so with DX = 10 km you should set it to something more like 60. If you issue the following:
grep cfl rsl*
you will likely see CFL errors print out. This indicates that the model has become unstable (which is a result of having a time step that is too large).
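To locate the worst offenders quickly, a small variation on that grep (a sketch; the exact message text can vary between WRF versions) counts CFL messages per rsl file and lists the files with the most hits first:

```shell
# Count CFL messages in each rsl.error file and sort by count, descending.
# Files at the top are the MPI ranks where the instability first appeared.
grep -c cfl rsl.error.* 2>/dev/null | sort -t: -k2 -rn | head
```

The file-name suffix tells you which MPI rank reported the errors, which can help localize where in the domain the instability started.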

Try to make that correction and run again. If it fails again, please attach the new namelist.input file, along with the actual rsl.error.* files. You can package them all into one *.tar file.
I made the time step 60, and the issue persisted. I have attached a tar file containing the rsl errors and my namelist.input. Thank you for your time.


  • rsl.tar (3.3 KB)
Thanks for sending those. Unfortunately your rsl* files do not even show that the model is trying to start running. Typically there would be some dimension information and decomposition information at the top of one of the rsl files. You should see something similar to this:
taskid: 0 hostname: node1
 module_io_quilt_old.F        2931 F
Quilting with   1 groups of   0 I/O tasks.
 Ntasks in X           16 , ntasks in Y           16
--- WARNING: traj_opt is zero, but num_traj is not zero; setting num_traj to zero.
--- NOTE: sst_update is 0, setting io_form_auxinput4 = 0 and auxinput4_interval = 0 for all domains
--- NOTE: both grid_sfdda and pxlsm_soil_nudge are 0 for domain      1, setting sgfdda interval and ending time to 0 for that domain.
--- NOTE: obs_nudge_opt is 0 for domain      1, setting obs nudging interval and ending time to 0 for that domain.
--- NOTE: bl_pbl_physics /= 4, implies mfshconv must be 0, resetting
Need MYNN PBL for icloud_bl = 1, resetting to 0
--- NOTE: RRTMG radiation is not used, setting:  o3input=0 to avoid data pre-processing
--- NOTE: num_soil_layers has been set to      4
 Parent domain
 ids,ide,jds,jde            1         440           1         220
 ims,ime,jms,jme           -4          35          -4          21
 ips,ipe,jps,jpe            1          28           1          14
DYNAMICS OPTION: Eulerian Mass Coordinate
   alloc_space_field: domain            1 ,               31921608  bytes allocated
  med_initialdata_input: calling input_input
But yours just immediately stops. This leads me to believe there is a problem on your system, perhaps with MPI (but I can't say for sure). I would advise speaking to a systems administrator at your institution to see if they can figure out what is going on.
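One quick way to make that distinction concrete (my suggestion, not from the thread) is to check whether any rsl file ever printed the domain-decomposition header shown above. If none did, wrf.exe died before WRF's own initialization really started, which points at the MPI/launch layer rather than the model configuration:

```shell
# Did any rsl file get as far as printing the patch decomposition?
# "ips,ipe" appears in the decomposition header WRF prints at startup.
if grep -q "ips,ipe" rsl.out.* rsl.error.* 2>/dev/null; then
    echo "decomposition info found: WRF started; the crash happened later"
else
    echo "no decomposition info: WRF never initialized (suspect MPI setup)"
fi
```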
I recompiled WRF and WPS, configuring for vortex-following nesting, which I had not done previously (I am trying to run a moving nest). This fixed the previous problem: wrf.exe now runs. However, after the first time step, wrf.exe crashes with a segmentation fault. I have attached the rsl files, my namelist.input, and my met_em files for you to look at.


  • rslerrors.tar.gz (16 KB)
Thank you for sending those. Looking through your rsl files, I see many CFL errors. I know you already reduced your time_step, but it seems it's still a problem. Take a look at this FAQ for some additional suggestions on tackling the problem:
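For reference, a few namelist settings that are commonly adjusted when chasing CFL-related instability (this is an illustrative sketch, not content from the FAQ; values are examples only):

```
 &domains
 time_step = 30,      ! try reducing further if CFL errors persist
 /
 &dynamics
 w_damping = 1,       ! damp excessively large vertical velocities
 epssm     = 0.2,     ! more off-centering of vertical sound waves (default 0.1)
 /
```

These only treat the symptom; if the instability is tied to steep terrain or the moving nest, the input data and nest placement may also need attention.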