
Error in ext_pkg_write_field while using NCEP GFS 0.25 Degree Global Forecast Grids Historical Archive

This post is from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled; if you have follow-up questions related to this post, please start a new thread from the forum home page.

paganelle76

New member
Hello!
I successfully ran metgrid with real-time GFS data downloaded from the NCEP data server. But once I switched to historical data from https://rda.ucar.edu/datasets/ds084.1/index.html#!description I started getting the error

ERROR: Error in ext_pkg_write_field

It looks like some data fields in the NCEP GFS 0.25 Degree Global Forecast Grids Historical Archive are missing, but I have no clue how to fix this.

The list of files I'm trying to preprocess is:
2020/20200624/gfs.0p25.2020062400.f000.grib2 \
[... ]
2020/20200624/gfs.0p25.2020062400.f024.grib2 \

Should I use a different set of files for historical data?
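
For reference, I follow the standard WPS steps (sketched below; the paths assume WPS is built in the current directory, the grib2 files sit under ./DATA, and Vtable.GFS is the right Vtable for this dataset):

./link_grib.csh DATA/2020/20200624/gfs.0p25.2020062400.f0*
ln -sf ungrib/Variable_Tables/Vtable.GFS Vtable
./ungrib.exe
./metgrid.exe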
 

Attachments

  • metgrid.log
    25.6 KB
Which version of WPS did you run? I suppose you used Vtable.GFS to ungrib the data; please let me know if I am wrong.
Can you send me your namelist.wps so I can take a look?
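
For comparison, a typical namelist.wps for this 24-hour case would look roughly like the following (a sketch only: the dates, the 3-hourly interval, and the FILE prefix are assumptions based on your file list, not values from your run; the &geogrid section is omitted for brevity):

&share
 wrf_core = 'ARW',
 max_dom = 1,
 start_date = '2020-06-24_00:00:00',
 end_date   = '2020-06-25_00:00:00',
 interval_seconds = 10800,
/

&ungrib
 out_format = 'WPS',
 prefix = 'FILE',
/

&metgrid
 fg_name = 'FILE',
 io_form_metgrid = 2,
/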
 
Hello!

I use WRF 4.2.1. But actually, I found the problem myself with the help of Panoply.

I run WRF on Oracle VirtualBox 6.1 with a dynamically allocated disk of up to 100 GB. I noticed that Panoply can open the freshly downloaded historical grib2 files, but once I move them into the DATA folder, Panoply cannot find any records in the copy. If I download the GFS files directly into the DATA folder, metgrid does its job, but WRF is unable to open the results. So something goes wrong when large files are copied once the disk expands beyond ~64 GB (while I was working with the real-time GFS files, the disk was ~55 GB and I noticed no problems).

I reinstalled Ubuntu/WRF on fixed-size storage of the same 100 GB volume and was able to process the historical GFS files. So it looks like the problem was in the VM and/or the way it expands dynamically allocated volumes.
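
In case anyone hits the same symptom: a quick way to confirm that a copy was silently corrupted is to compare checksums and grib record counts between the original and the copy (a sketch; the download path is illustrative, and wgrib2 or any other grib inventory tool will do):

md5sum ~/Downloads/gfs.0p25.2020062400.f000.grib2 DATA/gfs.0p25.2020062400.f000.grib2
wgrib2 DATA/gfs.0p25.2020062400.f000.grib2 | wc -l   # record count; should match the original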
 