(RESOLVED) METGRID cannot process global 0.01 deg SST WPS intermediate format

This post is from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled; if you have follow-up questions related to this post, please start a new thread from the forum home page.

wcheng

New member
Hi

I have a 0.01 deg global SST in WPS intermediate format: 36000 x 17999 grid points.

I read this into metgrid as a constant field, but the file seems to be too big and metgrid just skips it.

Is there an upper limit on the grid dimensions in metgrid? Is there a way to change it?

If I coarsen the grid by sampling every 10th point, then metgrid can read the file.

Thanks in advance for your help!
 
Hi,
Can you attach the namelist.wps file you're using, along with the metgrid.log file? Can you also let me know the size of the constants file? Thanks!
 
Hi Karl

It looks like you are at MMM. My files are on cheyenne:

cheyenne4: /glade/scratch/chengw/joe/wrf/SST:2020-04-27_09_0.01deg (0.01 deg global SST)

cheyenne4: /glade/scratch/chengw/joe/wrf/SST:2020-04-27_09_0.01deg_every_10points (0.01 deg SST sampled every 10 points)

My run directory is in

cheyenne4: /glade/scratch/chengw/joe/wrf/WRF_WPS

and namelist.wps is in cheyenne4: /glade/scratch/chengw/joe/wrf/WRF_WPS

I make a soft link from the above SST* files to SST and use

constants_name = 'SST',

You'll notice that rd_intermediate.exe cannot handle the large SST file, while it can read the smaller one. The only difference between the two is the size.

The dimensions of the 0.01 deg file are:

lat = 17999
lon = 36000

Thanks for your help! - Will

P.S., my UCAR email is C-H-E-N-G-W (remove the -).
 
Hi Will,
Thanks for sending those directories. That makes things a lot easier. Are you using serial or parallel processing for metgrid? If you didn't compile with a parallel option (e.g., dmpar), I'd recommend that. I'd also recommend making sure that you have large file support (setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1) set before compiling WPS. The 0.01 deg SST data file is about 2.5 GB, and netCDF has a limitation of 2 GB without large file support. I'm hoping that will help your situation.
 
It should be noted that the WRFIO_NCD_LARGE_FILE_SUPPORT environment variable only affects netCDF files that are written by WRF's I/O API; it does not affect intermediate-format files.

When metgrid is run in parallel, each MPI task still reads in a full copy of each field from the input intermediate file, so I don't think using a "dmpar" build option for the WPS will help.

I'd need to look into the code further, but it may be that using 8-byte record markers for Fortran unformatted reads/writes would resolve this issue; however, this would also require re-writing the intermediate file to use 8-byte record markers.
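
For reference, a single 36000 x 17999 field of 4-byte reals is roughly 2.6 GB, which already exceeds the 2^31 - 1 bytes (about 2.1 GB) that a signed 4-byte record marker can describe, while the 10-point subsample stays well under that limit; so the oversized record is the likely sticking point.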
 
kwerner said:
Hi Will,
Thanks for sending those directories. That makes things a lot easier. Are you using serial or parallel processing for metgrid? If you didn't compile with a parallel option (e.g., dmpar), I'd recommend that. I'd also recommend making sure that you have large file support (setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1) set before compiling WPS. The 0.01 deg SST data file is about 2.5 GB, and netCDF has a limitation of 2 GB without large file support. I'm hoping that will help your situation.

Hi Karl: Thanks for looking into it. I used metgrid in serial mode. As Michael pointed out, the WPS intermediate files are not in netCDF format.

Thanks, Michael, for the 8-byte record marker suggestion :) I'll try it out.


mgduda said:
It should be noted that the WRFIO_NCD_LARGE_FILE_SUPPORT environment variable only affects netCDF files that are written by WRF's I/O API; it does not affect intermediate-format files.

When metgrid is run in parallel, each MPI task still reads in a full copy of each field from the input intermediate file, so I don't think using a "dmpar" build option for the WPS will help.

I'd need to look into the code further, but it may be that using 8-byte record markers for Fortran unformatted reads/writes would resolve this issue; however, this would also require re-writing the intermediate file to use 8-byte record markers.
 
Hi Karl/Michael

Thanks for your help... I added access='stream' in the write statement in ungrib and the read statement in metgrid for the WPS intermediate files. That did the trick.
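
For illustration only, here is a rough sketch of the kind of change involved; the program, file name, unit numbers, and dimensions are placeholders for this post, not the actual ungrib/metgrid code:

program stream_io_sketch
   implicit none
   integer, parameter :: nx = 360, ny = 180     ! small stand-in dimensions
   real :: field(nx, ny), field_in(nx, ny)

   field = 271.35       ! placeholder SST values (K)

   ! "ungrib side": write the field with stream access (no record markers)
   open(10, file='SST_sketch.dat', form='unformatted', access='stream', status='replace')
   write(10) field
   close(10)

   ! "metgrid side": read it back the same way
   open(11, file='SST_sketch.dat', form='unformatted', access='stream', status='old')
   read(11) field_in
   close(11)

   print *, 'max difference:', maxval(abs(field_in - field))
end program stream_io_sketch

Because stream access writes no per-record length markers, a single field larger than 2 GB no longer runs into the 4-byte record-length limit.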

- Will
 
This is an old thread... but are you guys interested in getting this fix into the community version?

Our software engineers think that if this fix makes it into the community version, they won't have to carry it forward themselves with each upgrade.

This would benefit anyone using a very big WPS intermediate file.
 
Thanks for the nudge on this. I've used stream access only sparingly, and so I'd like to think through any possible implications of this change. For example, I have a fuzzy (though perhaps incorrect) recollection that different combinations of system+compiler use different sized storage units (for example, one compiler may assume 1-byte storage units, while another may assume 4-byte storage units), which could create headaches when moving intermediate files between systems or when switching compilers between the creation of intermediate files and the metgrid pre-processing stage.
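
For what it's worth, one way to check what a particular compiler assumes is to query FILE_STORAGE_SIZE from the intrinsic iso_fortran_env module (Fortran 2008); this is only a diagnostic sketch, not part of the WPS:

program check_storage_unit
   use iso_fortran_env, only: file_storage_size
   implicit none
   ! FILE_STORAGE_SIZE is the size, in bits, of the file storage unit in which
   ! stream-access positions and unformatted record lengths are measured;
   ! most compilers report 8 (i.e., one byte).
   print *, 'File storage unit size (bits):', file_storage_size
end program check_storage_unit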
 
Michael

Actually, with access='stream' the I/O works like direct-access binary rather than the sequential-access binary in the original code.

I don't think porting between machines is an issue. Some machines may be big- or little-endian, but doesn't the WPS build configuration take care of that?
 
I'll take a look when I'm able and see if we could incorporate stream access in a clean way. My current thinking is that there could be a compilation option for the WPS to use access='stream' when reading and writing intermediate files; the default -- at least until the access='stream' option has received broader testing -- would probably be to not use access='stream'. Since any misstep could generate a significant support burden for us, I'd like to proceed cautiously.
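
Purely as a hypothetical sketch of that idea (the macro name, subroutine, and arguments below are invented for illustration and are not the actual WPS source or build option):

! Hypothetical: a configure-time macro (here called STREAM_INTERMEDIATE)
! selects how intermediate files are opened; the default branch keeps the
! traditional sequential unformatted records.
subroutine open_intermediate(iunit, fname, istat)
   implicit none
   integer,          intent(in)  :: iunit
   character(len=*), intent(in)  :: fname
   integer,          intent(out) :: istat

#ifdef STREAM_INTERMEDIATE
   ! Stream access: no per-record length markers, so a single field
   ! larger than 2 GB is not a problem.
   open(unit=iunit, file=trim(fname), form='unformatted', access='stream', &
        status='old', iostat=istat)
#else
   ! Default: sequential unformatted I/O, as in the current intermediate format.
   open(unit=iunit, file=trim(fname), form='unformatted', &
        status='old', iostat=istat)
#endif
end subroutine open_intermediate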
 