WRFV4.3 ndown error: Older v3 input data detected

This post was from a previous version of the WRF&MPAS-A Support Forum. New replies have been disabled. If you have follow-up questions related to this post, please start a new thread from the forum home page.

WesternWRF

New member
I'm running ndown with V4.3 and have encountered a vague warning/fatal message when running WRF for the inner domain, after the ndown steps completed successfully.

---- WARNING : Older v3 input data detected
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE: <stdin> LINE: 640
---- Error : Cannot use moist theta option with old data

I am generating all of the WPS and input files with the V4.3 package, which I confirmed in the headers of each file. So the error seems to have nothing to do with actually running older data; instead it looks like a problem in ndown that wrf.exe flags when running the inner domain's data checks.

Searching this error message turns up a few discussions of it as a documented issue when using ndown in V4.0-V4.1. Most of the interaction with the WRF devs is in threads where the user was performing vertical refinement between nests, and no solutions are proposed other than noting that the problem was fixed in V4.1.3 and later.

Github discussions on the vertical refinement issues with ndown:
https://github.com/wrf-model/WRF/pull/901
https://github.com/wrf-model/WRF/issues/917

WRF Users forum threads:
User running vertical refinement with V4.1.3:
https://forum.mmm.ucar.edu/phpBB3/viewtopic.php?t=8727

User running WRFv4.2.1:
https://forum.mmm.ucar.edu/phpBB3/viewtopic.php?f=91&t=10197

User running WRFV4.2 with no vertical refinement:
https://forum.wrfforum.com/viewtopic.php?f=6&t=11825&sid=250b82cf7a8e3452db7a298899517410

User running WRFV4.1:
https://forum.wrfforum.com/viewtopic.php?f=7&t=11255

I am not performing vertical refinement for my inner domain.
I have tried setting use_theta_m = 0 to turn off the moist theta option, with no luck: this does stop the old-data error, but wrf.exe then segfaults immediately at time = 0, after the initial wrfout file is created but before integration begins.
I have tried setting force_use_old_data = .true. with no luck (even though I'm not using old data).
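For reference, these two switches live in different namelist records; this is where I set them (placement is per my reading of the namelist documentation, so please double-check against your version; the rest of each record is unchanged):

&time_control
force_use_old_data = .true.,
/

&dynamics
use_theta_m = 0,
/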

Attached are the namelists used before ndown.exe and before wrf.exe, as well as the rsl.error.0000 generated when running wrf.exe. I've also attached the met_em files in case someone wants to try to replicate this. Both real.exe and ndown.exe complete successfully, and all of the file and namelist changes recommended in the ndown documentation have been followed to the best of my knowledge.

I have successfully run ndown simulations with V3.9 in the past, but those used a different forcing dataset (GFS vs. CFSv2). I had previously run into issues with the vertical levels in V4.3 with CFSv2, which is why I have auto_levels_opt = 1. The default, auto_levels_opt = 2, was returning errors about not having enough vertical levels to stretch to p_top, even though the 40 requested levels should have been sufficient based on searches on that error and the WRF devs' advice for solving it.

============================================================

Snippet of namelist.input (attached as namelist.input_real) before real.exe is run:
&time_control
run_days = 1,
run_hours = 0,
run_minutes = 0,
run_seconds = 0,
start_year = 2021, 2021,
start_month = 08, 08,
start_day = 25, 25,
start_hour = 06, 06,
end_year = 2021, 2021,
end_month = 08, 08,
end_day = 26, 26,
end_hour = 06, 06,
interval_seconds = 21600
input_from_file = .true.,.true.,
history_interval = 180, 60,
frames_per_outfile = 1, 1,
restart = .false.,
restart_interval = 2880,
io_form_history = 2
io_form_restart = 2
io_form_input = 2
io_form_boundary = 2
/

&domains
time_step = 30,
time_step_fract_num = 0,
time_step_fract_den = 1,
max_dom = 2,
e_we = 200, 233,
e_sn = 240, 281,
e_vert = 40, 40,
p_top_requested = 5000,
num_metgrid_levels = 38,
num_metgrid_soil_levels = 4,
dx = 12000, 3000,
dy = 12000, 3000,
grid_id = 1, 2,
parent_id = 0, 1,
i_parent_start = 1, 90,
j_parent_start = 1, 100,
parent_grid_ratio = 1, 4,
parent_time_step_ratio = 1, 4,
dzstretch_s = 1.1
feedback = 1,
smooth_option = 0
auto_levels_opt = 1,
/

&bdy_control
spec_bdy_width = 5,
spec_zone = 1,
relax_zone = 9,
specified = .true.
/



======================================================================

Changes made to the namelist after real.exe, but before ndown:

interval_seconds = 10800
io_form_auxinput2 = 2
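(If I'm reading the ndown documentation correctly, interval_seconds is changed to 10800 here so that it matches the 180-minute history_interval of d01, since ndown reads the coarse-domain wrfout_d01 files through the auxinput2 stream. In the &time_control record this looks like:)

&time_control
interval_seconds = 10800
io_form_auxinput2 = 2
/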

======================================================================

Changes made to the namelist after ndown and before wrf.exe:
time_step = 10,

max_dom = 1,
e_we = 233,
e_sn = 281,
e_vert = 40,
dx = 3000,
dy = 3000,

have_bcs_moist = .true.
have_bcs_scalar = .true.
 

Attachments

  • met_em.d01.2021-08-25_06:00:00.nc (29.9 MB)
  • met_em.d01.2021-08-25_12:00:00.nc (29.6 MB)
  • met_em.d01.2021-08-25_18:00:00.nc (29.5 MB)
  • met_em.d01.2021-08-26_00:00:00.nc (29.5 MB)
  • met_em.d01.2021-08-26_06:00:00.nc (29.5 MB)
  • namelist.input_prendown.txt (4 KB)
  • namelist.input_real.txt (3.8 KB)
  • namelist.input_wrf.txt (3.6 KB)
  • rsl.error.0000.txt (1.7 KB)
Hi,
Can you try to attach the met_em.d02* file(s), as well? If they are too large to attach, see the home page of this forum for information on sharing large files. Thanks!
 
Sorry about that, and thanks for looking into this for me; I completely missed those. I'm running this all in an operational mode, so a cron cleanup job deleted the directories for the model run I linked in the first post. I'm attaching new met_em files for the same domains, but for the most recent model initialization I have on disk (which I'm copying into a safe directory for now to help with troubleshooting). Everything in the namelist will be the same, just with the dates updated to match the new met_em files I'm attaching.
 

Attachments

  • met_em.d02.2021-09-01_18:00:00.nc (69.6 MB)
  • met_em.d02.2021-09-02_00:00:00.nc (69.4 MB)
  • met_em.d02.2021-09-02_06:00:00.nc (69 MB)
  • met_em.d02.2021-09-01_06:00:00.nc (70.1 MB)
  • met_em.d02.2021-09-01_12:00:00.nc (69.1 MB)
  • met_em.d01.2021-09-02_00:00:00.nc (29.8 MB)
  • met_em.d01.2021-09-02_06:00:00.nc (29.7 MB)
  • met_em.d01.2021-09-01_06:00:00.nc (30.1 MB)
  • met_em.d01.2021-09-01_12:00:00.nc (29.9 MB)
  • met_em.d01.2021-09-01_18:00:00.nc (29.8 MB)
Hi,
I'm not running into the same error you're seeing. I'm using WRFV4.3, and these are the steps I take. (*Note, I did have to modify the following settings in the namelist due to the met_em* files you sent: e_we (for d02) = 331; e_sn (for d02) = 403; dx/dy (for d02) = 2000; parent_grid_ratio = 1, 6; parent_time_step_ratio = 1, 6)

I only set this up for 6 hours since you said yours fails immediately.

1) I use your met_em* files to run real.exe for both domains, using the namelist you sent, called namelist.input_real.
2) mv wrfinput_d02 wrfndi_d02, run ndown.exe, using namelist.input_prendown
3) mv wrfinput_d02 wrfinput_d01; mv wrfbdy_d02 wrfbdy_d01; run wrf.exe, using namelist.input_wrf

and everything runs without any problems. It's not possible that your executables are linked to an older version of WRF, is it? Do you have any other files in your running directory that may be causing an issue (older met_em* files, older wrfout* files, etc.)?
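For completeness, the three steps above as shell commands (just a sketch; the mpirun launcher and core count are placeholders for however you normally launch the executables, and the namelist is swapped between steps as described):

# 1) run real.exe for both domains, using namelist.input_real
cp namelist.input_real namelist.input
mpirun -np 4 ./real.exe

# 2) rename the d02 initial file for ndown, then run ndown.exe using namelist.input_prendown
mv wrfinput_d02 wrfndi_d02
cp namelist.input_prendown namelist.input
mpirun -np 4 ./ndown.exe

# 3) promote the ndown output to d01 names and run wrf.exe using namelist.input_wrf
mv wrfinput_d02 wrfinput_d01
mv wrfbdy_d02 wrfbdy_d01
cp namelist.input_wrf namelist.input
mpirun -np 4 ./wrf.exe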
 
I apologize for the delay in responding, but the new semester and its workload have me juggling this in the margins.

I've spent the last 3 weeks slowly tracking down the source of the issue and have come to what I believe is the root of the problem.

After confirming that all of my executables were V4.3 and correctly linked in the run directories, I went through a long re-compile test process on our HPC, thinking I had something built incorrectly in the depths of the configure file. A clean build did eventually solve the problem, and I was able to run the NDOWN process and the subsequent d02 domain to completion.
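(For anyone who ends up in the same spot, the clean rebuild was nothing exotic; roughly the following, with the configure selection depending on your platform and compiler:)

# from the top-level WRF source directory
./clean -a
./configure          # choose the appropriate option for your compiler/parallel build
./compile em_real >& log.compile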

However, the original d02 problem re-emerged after I trimmed the registry down to get rid of the plethora of variables I do not need in the wrfout files. This leads me to believe the original issue is that variables NDOWN requires are missing from the wrfout_d01 files it uses to generate the initial and boundary-condition files for d02. Note that NDOWN did not produce an error and reported a successful completion.

I'm hoping someone can comment on how to determine which variables I need to keep in the wrfout files. Because we are running in an operational setting, file size is a concern and there are many, many variables we do not need. Right now I'm toggling the registry changes for specific physics packages one at a time to find what is breaking d02 (an example of the kind of edit I mean is sketched below), but with an hour of compile time per test this is not a very efficient way of finding what NDOWN needs kept and what it doesn't.
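To be concrete about the kind of registry edit I mean: each state entry in the Registry has an I/O column, and removing the "h" from that column drops the variable from the history (wrfout) stream, while "i" and "r" keep it in the input and restart streams. SOME_VAR below is a made-up name just to show the format:

# original entry: written to input (i), restart (r), and history (h) streams
state  real  SOME_VAR  ikj  misc  1  -  irh  "SOME_VAR"  "example description"  "units"
# trimmed entry: no longer written to wrfout
state  real  SOME_VAR  ikj  misc  1  -  ir   "SOME_VAR"  "example description"  "units"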

In the future, I think it might be a good idea to add a statement to the generic error message I listed in my initial post recommending that users check their registry, since this is not an issue with using older data from a past version of WRF. Even better would be an error message in NDOWN itself, alerting the user that not all of the variables it needs are present in the d01 files, so they know to check the registry.
 
I'm glad you were able to find a solution, and thank you for the suggestion about the error message. We will look into that. As for your question regarding the variables needed for Ndown, take a look at this previous post to see if it answers your question.
 