Error running WRF-Chem (version 3.9.1) with chem_opt=202, although WRF-Chem runs successfully with chem_opt=112 and 201

Howdy Folks:

I hope I didn't join the party too late. I also ran into this exact problem using WRF-Chem 4.4.1. To recap:

1) namelist: chem > chem_opt = 202 AND chem > aerchem_onoff = 1
2) fort.67 created (i.e., "ASTEM internal steps exceeded 200", with "Large total lw" in the rsl.error files)
3) Segmentation fault

In my case, the model crashes irrespective of the value of chem > mozart_ph_diag, or of any other flags that Ankan tried. The only surefire workaround is to deactivate aerosol chemistry (i.e., aerchem_onoff = 0).
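For reference, that workaround amounts to the following settings in the &chem section of namelist.input (all other entries left as they were):

```
&chem
 chem_opt      = 202,
 aerchem_onoff = 0,
/
```

Note that this disables aqueous/aerosol chemistry entirely, so it is only a stopgap, not a fix.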

Based on the exchange between Ankan and Jordan, I have managed to isolate the problem to the subroutine glysoa_complex() in module_mosaic_gly.F, which is called from the subroutine ASTEM() in module_mosaic_therm.F, even though (again, according to Ankan) these calls are no longer needed from WRF 4.0 onwards. The path of least resistance is to remove these lines or comment them out.

In module_mosaic_therm.F, locate the subroutine ASTEM(). Scroll down about 70 lines to find the following code block:

Code:
! condense inorganic semi-volatile gases hno3, hcl, nh3, and co2
      call ASTEM_semi_volatiles(dtchem) ! semi-implicit + explicit euler
      if (istat_mosaic_fe1 .lt. 0) return

      if (glysoa_param == glysoa_param_simple)  call glysoa_simple(dtchem)
      if (glysoa_param == glysoa_param_complex) call glysoa_complex(dtchem)

and remove or comment out the last two lines (i.e., the two if statements testing glysoa_param).
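After the edit, the block would look like this (the two glysoa calls commented out, everything else unchanged):

```fortran
! condense inorganic semi-volatile gases hno3, hcl, nh3, and co2
      call ASTEM_semi_volatiles(dtchem) ! semi-implicit + explicit euler
      if (istat_mosaic_fe1 .lt. 0) return

! glysoa calls disabled to avoid the segmentation fault in glysoa_complex()
!      if (glysoa_param == glysoa_param_simple)  call glysoa_simple(dtchem)
!      if (glysoa_param == glysoa_param_complex) call glysoa_complex(dtchem)
```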

However, fort.67 (i.e., the "ASTEM internal steps exceeded" messages) and the "Large total lw" warnings in rsl.error will still appear. This warning comes from the subroutine ASTEM_semi_volatiles(), as seen in the code block above, but it is unrelated to the segmentation fault. The only remedy I can think of is to increase the maximum number of allowable iterations.

To do this, return to module_mosaic_therm.F and find the subroutine load_mosaic_parameters(). A few lines below its start, look for the following lines:

Code:
! astem parameters
      nmax_astem      = 200     ! max number of time steps in astem
      alpha_astem     = 0.05        ! choose a value between 0.01 and 1.0

and increase nmax_astem to a larger integer value. I don't think this is guaranteed to work, but it is still worth a try; in my case it worked when I increased it from 200 to 500.
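With the value from my case, the edited lines would read (500 is simply the value that happened to work for me, not a universally safe choice):

```fortran
! astem parameters
      nmax_astem      = 500     ! max number of time steps in astem (raised from 200)
      alpha_astem     = 0.05    ! choose a value between 0.01 and 1.0
```

Bear in mind that raising the iteration cap only suppresses the warning at the cost of more work per time step; it does not address why the solver fails to converge.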

After the changes, clean out the build and recompile. The code should at least stop crashing.

Hope that helps, Gerry.

P.S. - I managed to dig a little further into the subroutine glysoa_complex() in module_mosaic_gly.F, and I suspect there is a bug in the Runge-Kutta ODE solver subroutine rk4(), which is called by glysoa_complex() but not by glysoa_simple() - which would explain why the latter does not crash. However, I was unable to devote more time to it beyond some rudimentary tests. Perhaps I will return to this when I have more time, or someone else could pick it up and find a more complete solution than the workaround presented above.
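For anyone who does pick this up: a useful reference point when auditing rk4() is the textbook classical fourth-order Runge-Kutta step, sketched below. This is the standard scheme, not the WRF-Chem code; the subroutine and variable names here are my own.

```fortran
! One classical RK4 step for dy/dt = f(t, y), advancing (t, y) by h.
      subroutine rk4_step(f, t, y, h)
        interface
          function f(t, y) result(dydt)
            real(8), intent(in) :: t, y
            real(8) :: dydt
          end function f
        end interface
        real(8), intent(in)    :: h
        real(8), intent(inout) :: t, y
        real(8) :: k1, k2, k3, k4
        k1 = f(t,           y)
        k2 = f(t + 0.5d0*h, y + 0.5d0*h*k1)
        k3 = f(t + 0.5d0*h, y + 0.5d0*h*k2)
        k4 = f(t + h,       y + h*k3)
        y  = y + (h/6.0d0)*(k1 + 2.0d0*k2 + 2.0d0*k3 + k4)
        t  = t + h
      end subroutine rk4_step
```

Typical things to check against this form: the half-step coefficients on k2 and k3, stale or reused stage values between substeps, and a final substep that overshoots the chemistry time step dtchem.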
I'm just replying to say thank you, Gerry! This fixed my problem. I fixed it slightly differently - instead of commenting out the two lines, I chose to always use glysoa_simple, and it works fine for now.
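For anyone following along, that variant of the edit in ASTEM() would look something like the fragment below (my sketch of the idea, not necessarily the exact change): the simple scheme is called unconditionally so glysoa_complex() is never reached.

```fortran
! always use the simple glyoxal SOA scheme; glysoa_complex() is never called
      call glysoa_simple(dtchem)
!      if (glysoa_param == glysoa_param_simple)  call glysoa_simple(dtchem)
!      if (glysoa_param == glysoa_param_complex) call glysoa_complex(dtchem)
```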

Thanks for contributing your solution!
 
I had a similar issue with "Large total sw optical depth" recently.
The issue was caused by extremely high concentrations at the corner of the domain.
It persisted no matter whether I shifted the domain or changed the initialization time.
It turned out to be related to the emission file I had produced.
After regenerating the emission file with a different interpolation method (Python instead of NCL), the simulation runs fine.
 