Hi all,
I'm currently running WRF v4.4.1.
This is for a retrospective simulation forced with GFS (analysis and archived forecast; d084001).
The domain is 480 x 490 grid points at 10 km resolution with 41 vertical levels.
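For reference, the grid setup in my namelist.input looks roughly like this (paraphrased, so treat the exact syntax as approximate; the attached file has the real values):

 &domains
  e_we   = 480,
  e_sn   = 490,
  e_vert = 41,
  dx     = 10000,
  dy     = 10000,
 /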
The runs segfault around 15:00 - 17:00.
Things I've tried:
- Increased the domain extent (moved the boundaries away from complex topography) - this worked and stopped the cu scheme segfaults.
- Increased the domain height (p_top from 7000 Pa to 5000 Pa) and went from 35 to 41 levels - this also helped with the cu scheme segfaults.
- Switched the mp scheme from WDM5 to Morrison (both) and now to Thompson (option 8) - this helped for a single day, then it segfaulted the next day.
- Reduced the time step from 30 s to 25 s and now to 20 s - this helps for a day or two, but now even 20 s gives segfaults.
Currently the segfaults seem to occur in SFCLAY (I'm using the "revised MM5" scheme with the Noah LSM); the relevant namelist entries are sketched below.
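For concreteness, after all the changes above the relevant entries should look roughly like this (again paraphrased; the attached namelist.input has the exact values):

 &domains
  time_step          = 20,    ! reduced from 30, then 25
  p_top_requested    = 5000,  ! lowered from 7000 Pa
 /
 &physics
  mp_physics         = 8,     ! Thompson (after WDM5 and Morrison)
  sf_sfclay_physics  = 1,     ! "revised MM5" surface layer
  sf_surface_physics = 2,     ! Noah LSM
 /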
I've had a look at the segfault thread ("Segmentation Faults and CFL Errors" on forum.mmm.ucar.edu), but it doesn't seem to apply to my run:
1) The halos should be large enough with 48 processors (see the decomposition note after this list).
2) There is plenty of available disk space.
3) The segfault occurs towards the end of the runs. I looked at the real.exe output and there are no NaNs in the surface variables.
4) There are no CFL errors listed in the rsl files.
5) There should be enough memory (at least 2 x 64 GB).
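On point 1: if WRF splits the 48 tasks as something like 6 x 8, each patch on the 480 x 490 grid is roughly 80 x 61 points, well above the usual minimum patch size. If it helps with debugging, I could also pin the decomposition explicitly, e.g.:

 &domains
  nproc_x = 6,   ! 480 / 6  = 80 points per patch east-west
  nproc_y = 8,   ! 490 / 8 ~= 61 points per patch north-south
 /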
I've attached my namelist.input and the rsl.error file for the core with the segfault.
I'm not sure what else to try. Maybe increase the resolution to 9 km? The physics/dynamics setup has worked many times before for different (smaller) domains and (higher) resolutions.