
Problem in wrfxtrm_* files after ndown.exe

This post was from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled. If you have follow-up questions related to this post, please start a new thread from the forum home page.


Hi all,

I am using WRF for regional climate modeling with the option to produce wrfxtrm_* files at 6-hour intervals.
First I had produced a continuous 5-yr simulation using ERA5 as ICBC. Then, I further downscaled it to a finer grid using ndown.exe with daily reinitialization (i.e., a 30-hr forecast starting at 18 UTC the previous day, saving data only for the day in question).
But the values in the wrfxtrm_* files don't look right (see the attached figure of 6-hourly T2MAX values over one month at an arbitrary grid point).
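For context, the wrfxtrm_* stream is produced by WRF's climate-extremes diagnostics, configured in &time_control of namelist.input. A minimal sketch of the relevant settings (the interval and frame values here are illustrative, not taken from the poster's actual namelist):

```
&time_control
 output_diagnostics  = 1                            ! enable max/min/mean/std diagnostics
 auxhist3_outname    = "wrfxtrm_d<domain>_<date>"   ! extremes are written to aux stream 3
 auxhist3_interval   = 360, 360,                    ! minutes; 360 = 6-hourly output
 frames_per_auxhist3 = 1, 1,                        ! one time per file
 io_form_auxhist3    = 2                            ! netCDF
/
```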


I attached an example namelist.input together with a correct and a wrong wrfxtrm file.

Could anybody help with this issue?

Best regards,


  • namelist.input (7.9 KB)
  • attachment (2.4 MB)
  • attachment (2.7 MB)
Update: the problem appears to be related to the adaptive time step, as it vanishes with a fixed dt = 30 s. The issue is present in both WRF 4.2 and WRF 4.3.
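The fixed-time-step workaround mentioned above corresponds to switching off adaptive time stepping in &domains of namelist.input; a sketch showing only the two relevant entries:

```
&domains
 use_adaptive_time_step = .false.
 time_step              = 30       ! fixed dt of 30 s, as in the test above
/
```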
Hi Akos,
I did a test using my own input and made sure to turn on adaptive time-step, outputting diagnostic options, and ran ndown, but I'm not seeing the issue you are. In the wrfxtrm* files you sent, I do see that for hour 12, the range for T2MAX is -6.6907 to 20.3664, which certainly doesn't seem correct. Can you let me know the steps you are taking to run ndown?
Hi kwerner,

Thank you for the quick answer. The procedure is the following:

1. Initially I had produced a continuous 5.5-year simulation using ERA5 ICBC with dx=50 km.
2. I run real.exe once again for two domains (50 km and 10 km). After real.exe I remove wrfinput_d01, wrflowinp_d01, and wrfbdy_d01. I rename wrfinput_d02 to wrfndi_d02. I rename wrflowinp_d02 to wrflowinp_d01 (so SST can be updated 6-hourly in the nest).
3. I run ndown.exe for two domains. I rename wrfinput_d02 to wrfinput_d01. I rename wrfbdy_d02 to wrfbdy_d01. I remove wrfndi_d02.
4. Then I run the 10-km simulation with 6-hourly SST update and 3-hourly boundary update.
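The file shuffling in steps 2 and 3 can be sketched as a shell script. This is a sketch only: the real.exe and ndown.exe outputs are faked here with `touch` so the renames can be demonstrated standalone; in a real run the executables create these files.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# --- files real.exe would leave behind (faked here with touch) ---
touch wrfinput_d01 wrfinput_d02 wrflowinp_d01 wrflowinp_d02 wrfbdy_d01

# Step 2: discard the d01 files and rename the nest files for ndown.exe
rm wrfinput_d01 wrflowinp_d01 wrfbdy_d01
mv wrfinput_d02 wrfndi_d02
mv wrflowinp_d02 wrflowinp_d01    # so SST can still be updated 6-hourly in the nest

# --- files ndown.exe would leave behind (faked here with touch) ---
touch wrfinput_d02 wrfbdy_d02

# Step 3: promote the nest files to "domain 1" for the standalone 10-km run
mv wrfinput_d02 wrfinput_d01
mv wrfbdy_d02 wrfbdy_d01
rm wrfndi_d02
```

After step 3 the directory holds only wrfinput_d01, wrfbdy_d01, and wrflowinp_d01, ready for the single-domain wrf.exe run of step 4.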

I attached a namelist.input for all three phases (real.exe, ndown.exe, wrf.exe).


  • namelist_ndown.input (9 KB)
  • namelist_real.input (9 KB)
  • namelist_wrf.input (8 KB)
Thank you for providing that information. From what I can tell, you're doing everything correctly, yet I still cannot repeat the issue. As a final attempt, can you share your met_em* files with me so I can try to run with your input? If the files are too large, take a look at the home page of this forum for information on sending large files. Thanks!
I uploaded the met_em* files as requested:

Just to make sure, I also uploaded the wrfout* files from which downscaling with ndown.exe is performed:

Please make sure to check all temperature variables (e.g. T2MIN, T2MAX, T2MEAN) in the wrfxtrm* files, as not all of them are necessarily corrupted.

Thank you for your investigation on this issue.
Thanks for sending those. Unfortunately I'm unable to get anything to run with those files because of a weird error (LANDUSE OUTSIDE RANGE). However, I am able to produce my own data to match your setup exactly (except I'm using GFS input). I'm using your exact namelist files and following your steps and I'm still unable to see any issues with the T fields, even when looking at all times. I do notice that your data have 98 landuse categories. I wonder if this is causing the problem. Can you try to do this with just the default landuse (MODIS) during WPS? You will need to set
geog_data_res = 'default','default',
in namelist.wps, then rerun geogrid and metgrid, then proceed with the real/wrf/ndown steps.
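That line belongs in the &geogrid record of namelist.wps; a minimal sketch with the other entries omitted:

```
&geogrid
 geog_data_res = 'default','default',   ! fall back to the default (MODIS) landuse
/
```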
That must be because we run WRF with a self-adapted version of the CORINE land-use dataset. For that, we use modified LANDUSE.TBL, MPTABLE.TBL, and VEGPARM.TBL files. Sorry for not mentioning this earlier.
However, I reran the case with the default 28-category land use as you suggested, but the problem is still present. Please see T2MAX in the attached wrfxtrm* file (the values are around 500 K).
I also uploaded the met_em* files with the original land use. Maybe you will be able to use these. Please download from here:

Thank you again for looking into this issue.


  • attachment (2.6 MB)
Thank you so much for testing that out and for sending the new files. I am finally able to reproduce the issue you're seeing. I haven't been able to figure out why it's happening yet, but it is definitely related to the ndown program. I just wanted to give you that update. I'll continue looking and hopefully get back to you soon.
I apologize for the delay, but I've done a lot of testing to try to determine the cause. This is what I've found triggers it: adaptive time stepping + e_vert = 61 + your particular input data. I've tested a number of different combinations, and also recreated the case with my own input (using CFSR data), and nothing else produces the weird values - only that exact combination.

This unfortunately means that trying to solve it would be an extremely time-consuming task, and we may never understand it, since it isn't happening with other input files. I did find that with fewer vertical levels I am able to run your case without issues. I didn't hunt for the exact threshold, but 45 levels worked. Maybe that's something you could try - it may be possible to use more, just not 61. I'm sorry - I wish I had better news on this!
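Summarizing the findings in this thread, a namelist.input sketch of the two workarounds that avoided the bad wrfxtrm values (either may suffice on its own; e_vert = 45 is simply the level count reported to work, not a verified threshold):

```
&domains
 e_vert                 = 45, 45,   ! fewer vertical levels than the problematic 61
 use_adaptive_time_step = .false.   ! or: disable adaptive dt, per the earlier update
 time_step              = 30
/
```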