
"segmentation fault" in using Deng shallow cumulus


While using WRF-Solar with shcu_physics = 5 (Deng) to simulate shortwave radiation over China, I encountered a "segmentation fault".
The main physics settings for the simulation are listed below. When I change shcu_physics to 2 (Park and Bretherton) or 3 (GRIMS), the simulation runs properly. I understand the Deng shallow cumulus scheme performs well, so I want to use it in this simulation.
Can you give me some advice? The complete namelist.input is attached.
mp_physics = 8, 8,
ra_lw_physics = 4, 4,
ra_sw_physics = 4, 4,
radt = 5,
swint_opt = 2,
aer_opt = 1,
sf_sfclay_physics = 1, 1,
sf_surface_physics = 2, 2,
sf_urban_physics = 0, 0,
num_soil_layers = 4,
num_land_cat = 21,
bl_pbl_physics = 5, 5,
bldt = 0,
bl_mynn_tkebudget = 1, 1,
bl_mynn_tkeadvect = .true.,
bl_mynn_edmf = 0, 0,
cu_physics = 0, 0,
cudt = 0,
cu_rad_feedback = .false.,
shcu_physics = 5, 5,
isfflx = 1,
ifsnow = 1,
icloud = 1,
icloud_bl = 0,
usemonalb = .true.,
sst_update = 1,


  • namelist.input (4.7 KB)
  • error.png (54.5 KB)
Your namelist options all look fine. It seems that the model crashed immediately after it started (please let me know if I am wrong). This often indicates that either the input data are wrong or there is not enough memory to run the case. Let's try the options below:
(1) Increase the number of processors and rerun the case. More processors give the job more aggregate memory, since each rank handles a smaller piece of the domain.
(2) If the case still fails, try creating a smaller case (i.e., with fewer grid points) and see whether it works. If the small case runs, the failure was likely caused by insufficient memory.
(3) If the small case also crashes immediately, then something is probably wrong in the physics. Please recompile WRF in debug mode, i.e., ./configure -D, and run the case again. The log file will then show exactly when and where something first goes wrong, which is very helpful for tracking down the problem.
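For step (3), the rebuild looks roughly like the sketch below. This is a generic outline, not taken from your setup: the configure option number you pick, the compile target (em_real here), and the log filenames are assumptions that depend on your platform and case.

```shell
# Run from the top of the WRF source directory.
./clean -a                          # remove objects from the previous (optimized) build
./configure -D                      # -D enables a debug build (bounds checks, tracebacks)
./compile em_real >& compile_debug.log   # em_real assumed; use your actual case target

# After rerunning wrf.exe, scan the per-rank logs for the first failure:
grep -il "segmentation\|sigsegv\|backtrace" rsl.error.*
```

The debug executable runs much slower, so it is usually worth doing this only on the small reproducing case.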
Thanks for your advice. We have tried the following options:

Increasing the number of processors, but it seems to have no effect.

Increasing dx/dy from 9 km to 15 km over the same domain (China); the same "segmentation fault" occurs.

Reducing the domain size to eastern China (dx/dy = 9 km or 3 km); the model then runs successfully.

From these attempts, we suspect that something goes wrong in the Deng shallow cumulus scheme when it is used over the Tibetan Plateau (an area of complex terrain). Can you give us some advice on this?

The complete namelist.input file is attached.


  • namelist.input (5.4 KB)
I would suggest increasing the value of epssm, e.g.,

epssm = 0.9, 0.9,

Then try again. This option can help stabilize the numerical integration over areas of steep topography, especially the Tibetan Plateau.
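For reference, epssm belongs in the &dynamics section of namelist.input, one value per domain. A minimal sketch (the surrounding record is illustrative; keep your other &dynamics settings as they are):

```
&dynamics
 epssm = 0.9,   0.9,   ! off-centering of vertical sound waves; WRF default is 0.1
 /
```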

Hope this is helpful for you.