
MPAS IO Error: Bad return value from PIO


Yue_Ma

New member
Hi,
I want to try a variable-resolution 60-3 km mesh, and this problem arose while interpolating the initial conditions. The log file is attached.

P.S. I notice that a double-precision build of MPAS-Atmosphere with this mesh may produce some fields that exceed 4 GB in the atmosphere core. I also added "io_type" to streams.init_atmosphere for the init_atmosphere core, but it did not work.

Code:
<streams>
<immutable_stream name="input"
                  type="input"
                  filename_template="x20.835586.static.nc"
                  input_interval="initial_only" />

<immutable_stream name="output"
                  type="output"
                  io_type="pnetcdf,cdf5"
                  filename_template="x20.835586.init.nc"
                  packages="initial_conds"
                  output_interval="initial_only" />

<immutable_stream name="surface"
                  type="output"
                  filename_template="x20.835586.sfc_update.nc"
                  filename_interval="none"
                  packages="sfc_update"
                  output_interval="86400" />

</streams>
 

Attachments

  • log.init_atmosphere.0000.out.txt (26.7 KB)
Although the CDF5 format allows for variables larger than 4 GB, I think the parallel-netCDF library still requires each MPI task to write less than 2 GB of a given variable at a given time. Could you try running again with, say, 4 MPI tasks to see whether that helps? Using more MPI tasks would mean that each task needs to write less data, hopefully circumventing the 2 GB limit in the parallel-netCDF library.
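As a minimal sketch of that suggestion (the mesh prefix x20.835586 is taken from the streams above; the partition file name and namelist entries below are assumptions based on the usual MPAS decomposition setup, not taken from this post), MPAS looks for a graph partition file whose suffix matches the number of MPI tasks:

Code:
! namelist.init_atmosphere -- assumed decomposition settings
&decomp
    config_block_decomp_file_prefix = 'x20.835586.graph.info.part.'
/

Code:
# Launch init_atmosphere with 4 MPI tasks; MPAS would then read
# the assumed partition file x20.835586.graph.info.part.4
mpirun -n 4 ./init_atmosphere_model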