
Unrealistic TSK in WRF-SLUCM

jan1203

Hello,

I have a problem with simulation crashes in WRF version 4.4.2 with SLUCM, which typically occur in summer. The only relevant output in the rsl files is an exceedance of the vertical CFL condition at some grid points, but neither decreasing the time step nor setting epssm = 0.4 solves the problem. When I investigated further, I noticed that the TSK array becomes strange in the last hour before the crash: in some urban areas, temperatures exceed 340 K (!) - see the attached figure (domain in central Europe). Related arrays such as the sensible and latent heat fluxes also take on unrealistic values.
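For context on the error messages: the vertical CFL condition WRF reports compares vertical velocity, time step, and layer thickness at each grid point. A rough diagnostic sketch (the numbers below are illustrative, not from this run) might look like:

```python
# Rough sketch of the vertical CFL diagnostic WRF applies per grid point:
# the Courant number |w| * dt / dz must stay below ~1 for stability.

def vertical_cfl(w, dt, dz):
    """Courant number for vertical advection (w in m/s, dt in s, dz in m)."""
    return abs(w) * dt / dz

# Hypothetical example: a 15 m/s updraft, 30 s time step, 400 m layer
# thickness -> CFL > 1, i.e. the condition is violated.
cfl = vertical_cfl(15.0, 30.0, 400.0)
print(cfl, cfl > 1.0)  # prints: 1.125 True
```

This is why reducing the time step is the usual first remedy: halving dt halves the Courant number, though in this case it did not help.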

The same occurs in version 4.5.0; the last version that runs without errors (and with realistic TSK) is 4.3.3. The namelist with my settings is attached.

Does anybody have a similar experience, or know how to fix it?

Thanks, Jan
 

Attachments

  • namelist.input (5.3 KB)
  • ncview.TSK.ps (742.2 KB)
Hi Jan,
Is your domain covering complex terrain - especially near the boundaries? You may also need to try setting smooth_cg_topo = .true. in the &domains section of the namelist, prior to running real, to get rid of the CFL errors.
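For readers following along, the suggested setting goes in the &domains section of namelist.input before running real.exe; a minimal fragment (all other required entries omitted):

```fortran
&domains
 smooth_cg_topo = .true.,   ! smooth outer-domain topography toward the lateral boundaries
/
```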

When you ran the case with V4.3.3 and it worked okay, were you using the exact same setup (i.e. identical namelist.input, same domain, dates, times, and input data)?
 
Thanks for the answer. Unfortunately, the proposed option doesn't work for me. In any case, the problematic grid points are usually far from the boundaries and are not related to mountainous regions.

Yes, the domain, met_em files, namelist, and all settings are the same.

Not only the temperature but the overall energy balance is strange (before the crash) - its components have values > 1000 W/m², especially QFX (the very opposite - strong deposition onto the surface) and LH ~ -12000 W/m² (!).
 
Hi Jan,
I spoke to our physics specialist. They are curious what HFX does leading up to the large TSK values - does it change suddenly or grow steadily?
 
Hi Kwerner, my case was similar to his: in wrfout_d04_2019-08-04_04:00:00 the HFX range was from -54 to 302, but in wrfout_d04_2019-08-04_05:00:00 the range suddenly changed to -59 to 2618.
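A quick way to spot such a sudden jump between consecutive output times is to compare the min-max span of a field; a minimal sketch (the arrays and the factor-of-3 threshold are illustrative, not part of any WRF tool):

```python
import numpy as np

def range_jump(prev, curr, factor=3.0):
    """Flag a field whose min-max span grows by more than `factor`
    between two consecutive output times."""
    prev_span = np.ptp(prev)   # peak-to-peak, i.e. max - min
    curr_span = np.ptp(curr)
    return bool(curr_span > factor * prev_span)

# Illustrative numbers mirroring the report above:
# HFX spanned -54..302 at 04:00 but -59..2618 at 05:00.
hfx_04 = np.array([-54.0, 150.0, 302.0])
hfx_05 = np.array([-59.0, 1200.0, 2618.0])
print(range_jump(hfx_04, hfx_05))  # prints True
```

In practice one would read the HFX variable from each wrfout file (e.g. with netCDF4 or xarray) and run such a check over the whole output sequence to find the hour where the field blows up.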
 
One other thing - are you using standard inputs, code, and tables? Can you also let me know how far into the simulation it crashes? If you create a restart file to start shortly before the crash, and start the simulation from that point, does it still crash? If so, I would like to try to recreate the issue here. For that, I'll need your wrfbdy file and your wrfrst file for that time. Those files will likely be too large to attach, so take a look at the home page of this forum for instructions on sharing large files.
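For reference, a restart run is controlled from the &time_control section of namelist.input; a minimal sketch (the dates below match this case, but your start time should be set shortly before your own crash point):

```fortran
&time_control
 restart          = .true.,   ! read wrfrst_* instead of wrfinput
 restart_interval = 60,       ! write restart files every 60 minutes
 start_year       = 2016,
 start_month      = 06,
 start_day        = 19,
 start_hour       = 18,       ! a few hours before the crash
/
```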
 
Yes, I used the standard inputs, code, and tables. It would crash about 1-5 hours in. How close to the crash would you like the restart to start: about one hour before, or one minute before?
 
If it's crashing within 1-5 hours of simulation time (not actual clock time), then it's okay to just send the wrfinput/wrfbdy files. I was concerned because your namelist is set up for a 30 day simulation, which would be a really long time for me to try to reproduce.
 
In my case, the simulation crashed after approx. 19 days.
I could try to make a restart run beginning before the crash point.
 
OK, at the link below is the wrfrst file for 2016/06/19, together with the wrfbdy, wrflowinp, and namelist files. The simulation crashed on 19/06/2016 at about 21:15 model time.


with password: WRF2023+chem
 
Hi,
Thanks for sharing those files. Using your namelist.input file, along with your wrfbdy_d01, wrflowinp_d01, and your wrfrst* file, I ran a test using both WRFV4.5 and WRFV4.4.2. They both ran to completion without any errors. These are my thoughts:

1) How many processors are you using? For my simulation, I used 144 processors. It's possible that you need to increase or decrease the number you're using.
2) I notice above you mention "chem." Are you, by chance, running a wrf-chem test? If so, this may be a problem that is specific to the WRF-chem part of the simulation, and I would advise that you post the issue in that section of this forum, so that hopefully someone on that team will be able to help.
3) Did you modify the basic WRF code prior to compiling? If so, it's possible those modifications could be causing the problem.
 
Hi, thanks for the answer. It's interesting that you can run the simulation without any errors.

1/ I was using 24 processors, but with 16 or 32 I still get the same error at the same model time.
2/ No, it is plain WRF. Even though the code is nearly the same, I have WRF_CHEM=0 in the configure.wrf file. Is that enough?
3/ I made no changes to the model code.
 
I tried 144 processors, and also 192, but the result is still the same: a crash at 21:15 model time.
 
I know you sent me the wrfrst* file to use for my test simulation, but did you also test starting with the restart file? If so, does it still crash at the same time for you?
 
Yes, the tests with different numbers of processors were performed as restart runs using the wrfrst file. Whether restarted or run directly, the simulation crashes on the same day (19.6.2016); the only difference is a minor shift in time: 21:15 (restart run) vs. 21:30 (direct run from 1.6.).
 
Hmm, since it runs fine for me, I'm out of ideas. The only other thing I would suggest is to recompile using a clean/pristine version of the code (in a different directory, so as not to overwrite your current WRF directory) and try it there. I know that sounds silly, but occasionally this resolves issues.
 
Hi, @jan1203,

I also ran into an issue with the UCM, and I changed TS_SCHEME in URBPARM.TBL from 1 (4-layer model) to 2 (force_restore).
I wonder whether this change can fix your problem too?
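For anyone else hitting this, the change is a single line in run/URBPARM.TBL; a sketch of the edit (the comment annotations are mine, not part of the table):

```
# URBPARM.TBL excerpt: surface temperature scheme used by SLUCM
# TS_SCHEME: 1    <- 4-layer model (the setting that produced the crashes)
TS_SCHEME: 2      # force_restore method
```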
 
Hi,
yes, it really solves the problem I was having - the simulation finished correctly, and TSK, HFX, LH, etc. are all reasonable.
Many thanks!
Jan
 