
real: error opening wrfinput for writing


marken

New member
I have compiled WRF V4.0.1 from the Git repo on a Cray XC system without errors, using the INTEL (ftn/icc) Cray XC configure option. I am using the Jan 2000 case from the OnLineTutorial to test my setup. The WPS programs geogrid.exe, ungrib.exe, and metgrid.exe seem to work fine with the Jan 2000 dataset, but when I run real.exe I get the following error in rsl.out (attached):

back from outsub in open_w_dataset
calling wrf_open_for_write_commit in open_w_dataset
NetCDF error: NetCDF: Not a valid ID
NetCDF error in ext_ncd_open_for_write_commit wrf_io.F90, line 1481
back from wrf_open_for_write_commit in open_w_dataset
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE: <stdin> LINE: 697
real: error opening wrfinput for writing

Should the Jan 2000 case work with WRF V4.0.1? Is there a better way to check my setup?

Or do you have any guidance for a new user trying to work out where the problem is?

Thanks, Mark
 

Attachments

  • rsl.out.0000.txt (104.7 KB)
Hi Mark,
This error typically means that the file is too large for writing. Since you are running the Jan 2000 case, it shouldn't be a very large file. Can you check whether you have enough disk space in the directory where you are trying to write the boundary and initial condition files (wrfbdy_d01 and wrfinput_d0*)?
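For example, from the directory where real.exe writes its output, something like the following will show the available space and whether a partial (e.g., 0-byte) file was created:

df -h .
ls -l wrfbdy_d01 wrfinput_d0*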
If that doesn't seem to be the problem, can you send me the following:
1) namelist.input
2) one of your met_em* files (if it's too large to attach here, take a look at the home page of this forum for instructions on uploading larger files to our cloud server)
3) in your real.exe running directory, issue:
ls -ls >& ls.txt
and send that ls.txt file
Thanks!
 
Thank you for your reply. The df command shows I have 320GB disk space available where my (0 byte) wrfinput_d01 file is created. I have also uploaded a tar file to the cloud server with the requested files as marken_181026.tar.gz

Thanks again, Mark
 
Mark,
Thanks for sending those. I did a test and did not have a problem producing the wrfinput* files with your namelist and met_em* file. This seems to be a problem on your particular system, perhaps with the netCDF paths. Can you check that your environment variables ($NETCDF and $PATH) are still set correctly and point to the version of netCDF you built WRF against? If you built netCDF for this compile but didn't set it in your environment scripts (e.g., .cshrc), those settings may no longer be in place.
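For example, something like this should confirm what your environment currently points to:

echo $NETCDF
which ncdump
ls $NETCDF/lib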

In the future, I would also recommend setting debug_level = 0. That option was added many years ago for a specific development purpose and really shouldn't be in the namelist. We removed it from the default namelist beginning in V4.0 because it simply doesn't give any useful information and prints a lot of junk in the rsl files, making them more difficult to read through.
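If you would rather keep the line than delete it, it sits in the &time_control section of namelist.input and can simply be zeroed out:

&time_control
 debug_level = 0,
/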
 
Thanks for checking that my input files are okay. I decided to try moving my WRF folder to another file system, and real.exe is now working! I can also run wrf.exe and get the wrfout_d01... file.

There seems to be an issue with relative paths on my home file system, as I had some problems with the default relative paths used by WPS. I ended up adding absolute paths to namelist.wps (opt_output_from_{met/geo}grid_path) to get it working, but could not see a similar option in WRF's namelist.input.
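For reference, the lines I added to namelist.wps looked something like this (the paths here are placeholders, not my actual directories):

&share
 opt_output_from_geogrid_path = '/scratch/username/WPS/',
/
&metgrid
 opt_output_from_metgrid_path = '/scratch/username/WPS/',
/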

I am not sure what the difference is between my scratch and home file systems (real.exe works on scratch but not on home); both are mounted on the compute nodes. I will follow up with our sysadmin team...
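In case it helps anyone else, I plan to start by comparing the two mounts with something like this (the scratch path is a placeholder for our actual mount point):

df -hT $HOME /scratch
mount | grep -E 'home|scratch'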

Thanks also for the advice on the debug_level option, I have disabled it as suggested.

Thanks again for your help, much appreciated!

Cheers, Mark
 
Hi,
I'm glad that you were able to find a workaround for this! I hope you're able to get your file system figured out in the long run.
Thanks for letting me know.
 