Errors in running the fast spectral bin scheme



I ran a simulation with the fast spectral bin (F-SBM) scheme in WRF 4.0.3 and got a segmentation fault error at the step "module_integrate: calling med_nest_force".

Detailed error messages and settings are in the attachments.

I would much appreciate any suggestions to deal with the error.

Warm regards,


  • namelist.input (4.5 KB)
  • rsl.error.0031.txt (922.8 KB)
1) First, please set debug_level=0. This option rarely provides any useful information and simply makes the rsl files very large and difficult to read through.
2) If you use a different microphysics scheme, does the problem exist?
3) If the answer to 2) is "yes," then I think one possible issue may be that you're using too many processors to run this. Take a look at this FAQ that discusses choosing a reasonable number of processors, based on domain size.
4) If you use fewer processors and still get the same issue, can you package all the rsl* files together as a single *.tar file and attach that?
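For point 4, one way to bundle the rsl files is with tar, run from the WRF run directory. The sketch below creates dummy rsl files in a temporary directory so it is self-contained; in practice you would skip the setup lines and just run the tar commands where your real rsl* files live:

```shell
# Demo setup only: a scratch directory with stand-in rsl files.
workdir=$(mktemp -d)
cd "$workdir"
touch rsl.out.0000 rsl.error.0000 rsl.out.0001 rsl.error.0001

# Bundle every rsl* file into a single compressed archive.
tar -czf rsl_all.tar.gz rsl.out.* rsl.error.*

# List the archive contents to confirm everything was captured.
tar -tzf rsl_all.tar.gz
```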

Hi Kwerner,

Thank you for your suggestions!

The first point is done. As for the second, the model ran well with the Morrison scheme (mp_physics = 10) in my previous tests.
Regarding the third, I reduced the processor count to 48 CPUs and tested with both 190 GB and 1140 GB of requested memory (to make sure sufficient computational resources were provided for the F-SBM scheme). However, in both cases I got the same "Segmentation fault" error message.

All rsl* files are compressed in the attached file.

I much appreciate your further support!


  • xrsl.tar.gz (8.7 KB)
I ran a case using your namelist, but with my own domain set-up and dates. I was able to run the full simulation without problems.

Just to verify - when you ran the case with Morrison, was everything exactly the same (i.e., same domain, same dates, same input data, all other settings besides mp_physics the same)?

Can you send me your wrfinput_d01 and wrfbdy_d01 files so I can try to run your exact case? Those files will likely be too large to attach. If so, take a look at the home page of this forum for information on sharing large files. Thanks!
I much appreciate your help, Kwerner! I will figure out how to share large files here. In the meantime, may I ask about your grid setting details (grid points, vertical levels, and dx), and how many computer nodes, processors, and how much memory you requested to run the full simulation? Many thanks!
My test case was much smaller than yours. These were my settings:
e_we = 71
e_sn = 61
e_vert = 43
dx/dy = 30000
time_step = 30

I used 9 processors to run the case. I'm not sure about the memory, as I'm using our NCAR supercomputing system. I didn't request a specific amount of memory, and memory usually isn't the issue.
Hi Kwerner, thank you for your information! Please find the input files here. I was wondering whether using 1-degree resolution reanalysis data directly for a 1-km resolution domain could cause model instability? Your help is much appreciated!
The difference in resolution between your input data and your domain grid spacing could certainly be a problem. We recommend a ratio of no greater than about 5:1 (though this doesn't have to be exact). Since 1-degree data is about 110 km and you are using 1 km, that is a 110:1 ratio. Even if you were to get the model to run, the results would likely be unreasonable. If you are interested in a 1-km domain, you will need to nest it inside parent domains. For example, you could try domains with dx = 25000, 5000, and 1000 m, using a parent_grid_ratio of 5.
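A minimal sketch of what the &domains section of namelist.input might look like for that three-domain telescoping setup. The grid counts and nest start positions below are illustrative placeholders, not values from this thread; note that for each nest, (e_we - 1) and (e_sn - 1) must be divisible by parent_grid_ratio:

```
&domains
 max_dom             = 3,
 dx                  = 25000, 5000, 1000,
 dy                  = 25000, 5000, 1000,
 parent_id           = 1,     1,    2,
 parent_grid_ratio   = 1,     5,    5,
 i_parent_start      = 1,     30,   40,
 j_parent_start      = 1,     30,   40,
 e_we                = 100,   151,  201,
 e_sn                = 100,   151,  201,
 e_vert              = 43,    43,   43,
/
```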
Hi, thank you Kwerner for your information and suggestion! Due to computing constraints, I will probably first try a larger grid spacing to see if that works before running with nests.