Segmentation fault when using nested domains

nahuelbautista

New member
Hello everyone,

I'm trying to run WRF-CHEM with all the chemistry options turned off, just to check that it runs as expected. The idea is to add biogenic emissions later, once we confirm that everything is OK.

The first test simulation without nesting worked fine. However, WRF-CHEM raises a segmentation fault when I add a d02 nest. Does anybody know why this could be? The model was compiled with option 15 (Intel dmpar) in debug mode (after I got the same error with the non-debug build). The namelist.input and one rsl.error file are attached.

Many thanks,

Nahuel
 

Attachments

  • namelist.input (6.2 KB)
  • rsl.error_0270.zip (28.1 KB)
Can you attach your geogrid.log, metgrid.log, and ungrib.log files from the WPS folder? It might be an issue with WPS.
 
Hi Will, many thanks for your quick reply. Sure, I'm attaching the WPS logs. I used WPS 3.9.1 and WRF-CHEM 4.4.2. Could that be a problem?
 

Attachments

  • wps_logs.zip (389.5 KB)
Good morning,

Okay, looking at the log files from WPS, nothing looks out of the ordinary to me. They all report successful runs, and the files were created according to the logs.

The RSL error log, however, shows that the model is looking for an auxiliary input file:

open_aux_u : error opening auxinput5_d02_2018-11-17_00:00:00 for reading. 100

This might be the cause of your problems. Are you adding additional input files for a chemistry run?

I used WPS 3.9.1 and WRF-CHEM 4.4.2. Could that be a problem?

As for this question: personally, I always keep matching versions of WPS and WRF together, since UCAR/NCAR design them to work together and fix the bugs that previous versions had. So for WRF v4.4.2, WPS v4.4 is its complement.

I would first check whether there is a missing aux input file and then try changing WPS. I don't want to have you reinstall WRF and WPS if it is just a missing input.
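If it helps for later: as far as I know, auxinput5 is the anthropogenic-emissions stream in WRF-Chem, and the default file name in your error (auxinput5_d<domain>_<date>) usually just means auxinput5_inname was never set. A rough sketch of how that stream is normally wired up when you do want emissions (values only illustrative, not taken from your namelist):

&time_control
 auxinput5_inname     = 'wrfchemi_d<domain>_<date>',  ! common naming convention for emission files
 auxinput5_interval_m = 60, 60,                       ! read interval per domain; leave at 0 / unset when not using the stream
 io_form_auxinput5    = 2,                            ! netCDF

If chemistry and emissions are fully off (emiss_opt = 0), I would not expect the model to look for auxinput5_d02_* files at all, so it may be worth double-checking those entries in your namelist.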
 
Hi,

At the bottom of your rsl log you'll find the traceback for your error. What is on line 1029 of module_radiation_driver.f90 (in the phys directory)?

You could also try a different radiation scheme. Typically option 4 (for both sw and lw) is the best one to use for chem runs, as it can handle aerosol feedback.
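For example, switching both domains to that scheme would look something like this in &physics (values only illustrative; adjust for your own setup):

&physics
 ra_lw_physics = 4, 4,    ! RRTMG longwave on d01 and d02
 ra_sw_physics = 4, 4,    ! RRTMG shortwave on d01 and d02
 radt          = 15, 15,  ! radiation time step in minutes; a common rule of thumb is about 1 minute per km of dx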

As Will said, it's better to use WPS and WRF versions that match, and in particular you should use WPS >= 4.0, since the WRF vertical coordinate changed at version 4.0.

The auxinput5 file warning isn't causing the abort, but keep an eye on it when you try to use anthropogenic emissions. At that time, you may want to add these lines:

&time_control
auxinput5_interval_m = 60, ! Anthropogenic emissions input
io_form_auxinput5 = 2,

And of course, while chemistry is switched off, keep these set as well:

io_style_emissions = 0,
emiss_inpt_opt = 0, 0,
emiss_opt = 0, 0,

Jordan
 
Hi Will and Jordan, many thanks for your help.

A colleague and I generated the auxinput5 file, and the error line Will noticed disappeared, but the model crashed at the same point. We then retried with WPS 4.3.2 and with different radiation schemes, without success; we always got the same error. As Jordan pointed out, line 1029 of module_radiation_driver.f90 reads "CALL wrf_debug (1, 'Top of Radiation Driver')", so we also tried setting debug_level to 0, which didn't help. The lines just before it are:

      LOGICAL :: proceed_cmaq_sw

      logical, save :: firstime = .true.
      logical, save :: feedback_restart, direct_sw_feedback

      direct_sw_feedback = .false.
      feedback_restart   = .false.

      if (present(explicit_convection)) then
         expl_conv = explicit_convection
      else
         expl_conv = .true.
      endif

      IF ( ICLOUD == 3 ) THEN
         IF (PRESENT(dxkm)) then
            gridkm = 1.414*SQRT(dxkm*dxkm + dy*0.001*dy*0.001)
         ELSE IF (PRESENT(dx)) then
            gridkm = SQRT(dx*0.001*dx*0.001 + dy*0.001*dy*0.001)
         endif

         if (itimestep .LE. 100) then
            WRITE ( wrf_err_message , * ) 'Grid spacing in km ', dx, dy, gridkm
            CALL wrf_debug (100, wrf_err_message)
         endif
      END IF

I don't know if those lines are useful, though.

On the other hand, we noticed that WRF raised the segmentation fault slightly earlier when we reduced the number of processors, while it ran a few lines further when we reduced the number of vertical levels in the domain. We cannot assign more than the current 361 processors to this simulation, because the model is already splitting each domain into the minimum allowed patch size of about 10 x 10 grid points. Could it be a memory issue? The cluster has 128 GB of RAM per node, of which only about 40 GB seems to be in use while the simulation runs.
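(For context, the decomposition we are describing is controlled in &domains; a rough sketch with purely illustrative values, not our exact namelist:)

&domains
 nproc_x = 19,   ! MPI tasks along x; nproc_x * nproc_y must equal the total task count
 nproc_y = 19,   ! 19 x 19 = 361 tasks
! each patch then covers roughly (e_we / nproc_x) by (e_sn / nproc_y) grid points,
! and, as noted above, around 10 x 10 points per patch is the practical lower limit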

Many thanks again,

Nahuel
 
UPDATE

We have talked to the system administrator and it was indeed a memory problem: the memory our jobs were allowed to use had been capped, so there was nothing we could do on our side. Now that the limit has been increased, WRF-CHEM seems to be running flawlessly.

Sorry for the inconvenience,

Nahuel
 
No problem. Sometimes it's the simplest things that cause problems when dealing with code.
 