(RESOLVED) WRF v3.6.1 : ZM_CONV IENTROPY: Tmix did not converge

Arty

Member
EDIT: The problem apparently came from NDOWN's wrfinput file. The configuration works well with REAL's outputs. See below for further details.

Hello,

I finally succeeded in compiling WRF/WPS v3.6.1 (there was a libraries and permissions problem; someone did it for me...) and then tried to run my ndown process under this version (because I'm having trouble using v3.6.1 wrfout* files with the 4.2.1 ndown.exe). Even though I almost cloned the namelist from the original run, WRF crashes right at the beginning with this error:

Code:
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:     150
**** ZM_CONV IENTROPY: Tmix did not converge ****
-------------------------------------------

Unfortunately, I didn't find much information on this matter; the only post I found is from 2014 and concerns CAM. Note that I also tried reducing the timestep from 40 s to 20 s, to no avail.
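For reference, the timestep change amounts to something like this in namelist.input (illustrative only; the actual settings are in the attached tar):

Code:
&domains
 time_step = 20,   ! reduced from 40 s for this test; the crash still occurred
/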

In addition to the attached tar file containing namelist.input and the rsl* files, I uploaded another file, WRF3.6.1_inputfiles.tar, to the cloud, as I can't tell whether one of these inputs could be problematic.

Thanks for your help.
 

Attachments

  • config_v3.6.1.tar
    370 KB · Views: 0
Hello there,

I'm coming back to this "ZM_CONV IENTROPY: Tmix did not converge" issue, still on WRF v3.6. I tried looking through the phys/ modules but am not sure what is wrong.

Attached are two tar files containing the rsl* and namelist.input files for two configurations (with/without shallow-cumulus (shcu) physics activated). When shcu is activated (shcu1.gz), the error message reads:
Code:
tail -n 6 rsl.error.0012
pLCL does not converge and is set to psmin in uwshcu.F90  1.609103392961774E-002                     NaN   101151.953125000
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:     150
**** ZM_CONV IENTROPY: Tmix did not converge ****
-------------------------------------------
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 12

When shcu is not activated (shcu0.gz), there are no lines starting with "pLCL" before FATAL CALLED.
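For clarity, the only intended difference between the two configurations is the shallow-cumulus switch in &physics; roughly (option numbers quoted from memory, please check them against the attached namelists):

Code:
&physics
 cu_physics   = 7,    ! Zhang-McFarlane (ZM) deep convection, same in both configs
 shcu_physics = 2,    ! UW (Park-Bretherton) shallow cumulus -> shcu1.gz
 ! shcu_physics = 0,  ! shallow cumulus deactivated          -> shcu0.gz
/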

For now, I'm not sure whether the problem is the LOOPMAX (=100) value in phys/module_cu_camzm.F or another variable. I intend to try increasing LOOPMAX once I have the required permissions on the file. Nevertheless, if you have any other suggestions, I'd be glad to try them as well.
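To be clear about what I mean by LOOPMAX, here is a generic, self-contained sketch (not the actual WRF source) of the kind of capped iteration that aborts with a "did not converge" message when the limit is reached:

Code:
! Generic illustration only; the real iteration lives in phys/module_cu_camzm.F
program loopmax_demo
  implicit none
  integer, parameter :: LOOPMAX = 100     ! same kind of cap as in the ZM code
  real :: t, t_new
  integer :: i
  t = 300.0                               ! first-guess temperature (stand-in for Tmix)
  do i = 1, LOOPMAX
     t_new = 0.5 * (t + 290.0)            ! stand-in for the entropy/Newton update
     if (abs(t_new - t) < 1.0e-4) exit    ! convergence test
     t = t_new
  end do
  if (i > LOOPMAX) then
     print *, 'Tmix did not converge'     ! analogue of the fatal call in the logs
  else
     print *, 'converged to', t, 'in', i, 'iterations'
  end if
end program loopmax_demo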

Thanks for your help.
 

Attachments

  • shcu1.gz
    103.7 KB · Views: 0
  • shcu0.gz
    47.5 KB · Views: 0
I made another test run with an increased debug level and got some more information in the rsl* files (see attached; namelist.input included).
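The only change from the previous run was the verbosity setting in &time_control (the value is inferred from the attachment name; the included namelist.input is authoritative):

Code:
&time_control
 debug_level = 100,   ! higher value -> more detail written to rsl.out.* / rsl.error.*
/

With that, the tail of rsl.error.0000 shows: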

Code:
tail rsl.error.0000
d01 2013-09-01_00:04:40 in camuwpbl
d01 2013-09-01_00:04:40 in camzm_cps
d01 2013-09-01_00:04:40  *** ZM_CONV: IENTROPY: Failed and about to exit, info follows ****
d01 2013-09-01_00:04:40 ZM_CONV: IENTROPY. Details: call#,lchnk,icol= 1  12   1 lat:   0.00 lon:   0.00 P(mb)=    NaN Tfg(K)=    NaN qt(g/kg) =   0.00 qsat(g/kg) =    NaN, s(J/kg) =    NaN
d01 2013-09-01_00:04:40  *** Please report this crash to Po-Lun.Ma@pnnl.gov ***
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:     150
**** ZM_CONV IENTROPY: Tmix did not converge ****
-------------------------------------------
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0

Any insight into why there are so many NaNs?
 

Attachments

  • debug100.gz
    4.4 MB · Views: 0
Arty,
This looks more like a data issue; I suspect your input data might be wrong. Can you run a single-domain case with the wrfinput and wrfbdy produced by REAL? Hopefully this case will run successfully; if so, it indicates that at least the ZM scheme is not the concern.
Then you can move on to running with ndown. ndown involves a few tedious steps that must be done carefully. You may need to check the wrfinput from ndown and make sure the data look reasonable.
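In namelist terms, that single-domain sanity check is essentially just the following, keeping everything else from your working REAL-based setup (illustrative):

Code:
&domains
 max_dom = 1,   ! run d01 only, with the wrfinput_d01 and wrfbdy_d01 from real.exe
/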
 
Thank you for the insight. Indeed, when initialized and forced by REAL outputs, it works, although the new configuration with the ZM cumulus parametrization and the CAM radiative parametrization takes much longer to compute, almost sixfold compared to the prior configuration. You've got it right; the problem seems to come from NDOWN's wrfinput file. Even with only one domain, it didn't work.
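For completeness, the new physics set I'm referring to should be roughly the following (option numbers from memory; the namelists attached earlier are authoritative):

Code:
&physics
 ra_lw_physics  = 3,   ! CAM longwave radiation
 ra_sw_physics  = 3,   ! CAM shortwave radiation
 bl_pbl_physics = 9,   ! CAM UW PBL (matches the 'in camuwpbl' lines in the logs)
 cu_physics     = 7,   ! Zhang-McFarlane (ZM) deep convection
/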

I'm also running another test with REAL's wrfinputs for both domains and NDOWN's wrfbdy* files; it seems to run fine so far. Since REAL's wrfinputs appear to do the trick, do you see any drawback in using them to initialize the simulation while using NDOWN's wrfbdy* files? I mean, if I allow enough spin-up time, the influence of the initial conditions from REAL (based on CFSR) should fade out, right?

Thanks for your help.

Note: since my first post in March 2023, I have created new NDOWN input files and checked some variables of the wrfinput via NCVIEW. I didn't come across any strange values. Except for some physics options, I had used the same NDOWN files before with another configuration that ran well.
 