
Issue in creating static file from 3km Uniform mesh


EMMANUEL
I have submitted a serial job to create a static file from the 3-km uniform mesh, but it is taking much longer than it does for the 240-km mesh, which finishes quickly.

With another uniform mesh I ran, the static file was created without any error, but for the 3-km mesh it is taking a very long time. My issue is that the log file only shows the following status:

Using default double-precision reals

Reading namelist from file namelist.init_atmosphere
Reading streams configuration from file streams.init_atmosphere
Found mesh stream with filename template x1.65536002.grid.nc
Using default io_type for mesh stream
** Attempting to bootstrap MPAS framework using stream: input
Bootstrapping framework with mesh fields from input file 'x1.65536002.grid.nc'

Is it normal for this to take so long, or should I submit the job in parallel? Could you kindly advise?
 
I think you may need to run the static processing step in parallel in order to get the job through in a reasonable amount of time. The 3-km mesh download includes a CVT partition file for 256 MPI tasks, so you'll need to use 256 MPI tasks specifically for the static interpolation step (subsequent steps can use regular Metis graph partition files).

Please refer to the "Important notes for dense meshes" on the mesh download page for other details of interpolating static fields in parallel.
 
Thank you for the suggestion. I have tried submitting the job in parallel using 256 MPI tasks, but it fails with the following error:
ERROR: ****************************************************************
ERROR: Error: Interpolation of static fields does not work in parallel.
ERROR: Please run the static_interp step using only a single MPI task.
CRITICAL ERROR: ****************************************************************


I commented out lines 217-222 of src/core_init_atmosphere/mpas_init_atm_cases.F, as suggested on the MPAS website, yet I get the same error as above.

The lines I commented out are:


!if (domain % dminfo % nprocs > 1) then
!   call mpas_log_write('****************************************************************', messageType=MPAS_LOG_ERR)
!   call mpas_log_write('Error: Interpolation of static fields does not work in parallel.', messageType=MPAS_LOG_ERR)
!   call mpas_log_write('Please run the static_interp step using only a single MPI task.', messageType=MPAS_LOG_ERR)
!   call mpas_log_write('****************************************************************', messageType=MPAS_LOG_CRIT)
!end if

Can you advise on this?
 
Have you recompiled the init_atmosphere_model program after making the source code changes?
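(For reference, the rebuild is just the usual clean and make for the init_atmosphere core; the gfortran target below is only an example and should match whatever target you originally built with:)
Code:
make clean CORE=init_atmosphere
make gfortran CORE=init_atmosphere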
 
I have recompiled and rerun init_atmosphere_model, but I am getting an error (shown below). For this run I used the following settings:
&dimensions
config_nvertlevels = 1
config_nsoillevels = 1
config_nfglevels = 1
config_nfgsoillevels = 1
/
&vertical_grid
config_ztop = 30000.0
config_nsmterrain = 1
config_smooth_surfaces = true
config_dzmin = 0.3
config_nsm = 30
config_tc_vertical_grid = true
config_blend_bdy_terrain = false
/
&interpolation_control
config_extrap_airtemp = 'lapse-rate' ***************************also I tried with 'linear'

&preproc_stages
config_static_interp = true
config_native_gwd_static = false
config_vertical_grid = false
config_met_interp = false
config_input_sst = false
config_frac_seaice = false

And

<streams>
<immutable_stream name="input"
                  type="input"
                  precision="single"
                  filename_template="x1.65536002.grid.nc"
                  input_interval="initial_only" />

<immutable_stream name="output"
                  type="output"
                  filename_template="x1.65536002.static.nc"
                  packages="initial_conds"
                  clobber_mode="overwrite"
                  output_interval="initial_only" />

</streams>


I ran it using 256 MPI tasks. The static file is created as "x1.65536002.static.nc", but it has a size of 0.

In the log.out file:

albedo_modis/00001-01200.02401-03600
albedo_modis/01201-02400.02401-03600
albedo_modis/02401-03600.02401-03600
albedo_modis/03601-04800.02401-03600
albedo_modis/04801-06000.02401-03600
albedo_modis/06001-07200.02401-03600
--- end interpolate ALBEDO12M
Using option 'linear' for vertical extrapolation of temperature
ERROR: MPAS IO Error: Bad return value from PIO
ERROR: MPAS IO Error: Bad return value from PIO
ERROR: MPAS IO Error: Bad return value from PIO

********************************************************
Finished running the init_atmosphere core
********************************************************


Timer information:
Globals are computed across all threads and processors

Columns:
total time: Global max of accumulated time spent in timer
calls: Total number of times this timer was started / stopped.
min: Global min of time spent in a single start / stop
max: Global max of time spent in a single start / stop
avg: Global max of average time spent in a single start / stop
pct_tot: Percent of the timer at level 1
pct_par: Percent of the parent timer (one level up)
par_eff: Parallel efficiency, global average total time / global max total time


timer_name total calls min max avg pct_tot pct_par par_eff
1 total time 2830.57533 1 2830.41164 2830.57533 2830.55240 100.00 0.00 1.00
2 initialize 801.14943 1 800.28753 801.14943 800.66630 28.30 28.30 1.00

************************************************************************************************************************************************


In the log.error file:

ERROR: MPAS IO Error: Bad return value from PIO
ERROR: MPAS IO Error: Bad return value from PIO
ERROR: MPAS IO Error: Bad return value from PIO


Could you kindly advise?
 
The "Bad return value from PIO" messages may be an indication that there are fields to be written to the output file (in this case, the x1.65536002.static.nc file) that violate the constraints of the default output file format. I see that the notes under "Important notes for dense meshes" on the mesh download page indicate that it is necessary to set the io_type attribute when writing model output and creating model initial conditions, and I think we need to update that note to indicate that the same is true when creating static files.

Can you try adding
Code:
io_type="pnetcdf,cdf5"
to the definition of your "output" stream, so that it is defined as
Code:
<immutable_stream name="output"
                  type="output"
                  io_type="pnetcdf,cdf5"
                  filename_template="x1.65536002.static.nc"
                  packages="initial_conds"
                  clobber_mode="overwrite"
                  output_interval="initial_only" />
?
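If that run completes, a quick way to confirm that the file really was written in CDF-5 format (assuming the netCDF utilities are available on your system) is
Code:
ncdump -k x1.65536002.static.nc
which should report cdf5 (or 64-bit data with older netCDF versions) rather than classic or 64-bit offset.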
 
I'm glad to have helped!

When I get a chance I'll make some updates to the notes on the mesh download page based on our discussion here (specifically, regarding the need to recompile the init_atmosphere_model program and the need to use io_type="pnetcdf,cdf5" even for the static output file).
 
For the next step, I have set config_native_gwd_static = true and all other pre-processing stages to false, and I used x1.65536002.static.nc (the output of the first step) as the input in streams.init_atmosphere for processing the GWDO fields, running with 16 nodes and ppn=1.

<immutable_stream name="input"
                  type="input"
                  precision="single"
                  filename_template="x1.65536002.static.nc"
                  input_interval="initial_only" />

<immutable_stream name="output"
                  type="output"
                  io_type="pnetcdf,cdf5"
                  filename_template="x1.65536002.static_with_gwdo.nc"
                  packages="initial_conds"
                  clobber_mode="overwrite"
                  output_interval="initial_only" />


In the meantime, I have used "x1.65536002.graph.info.part.16" instead of "x1.65536002.cvt.part.256" for creating the static file with GWDO (not finished yet). Is that fine? My concern is that if the GWDO file is created with 16 partitions, then my main atmosphere_model run would also have to be done with fewer nodes (16) and ppn=1, which would take longer than 16 nodes with ppn=16 as in the cvt.part.256 case. Or can I ignore that and submit the model run with 16 nodes and ppn=16?

My issue is that this step is taking a long time compared to the first step. Is that because of the smaller task count, or should I create an "x1.65536002.cvt.part." file with 16 partitions? Or is it fine to skip this step and proceed to creating the init file from the static file without GWDO? Or should I make a graph.info partition with 256 parts (roughly as sketched below)?
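(In case it matters, my understanding is that a 256-way graph partition would be generated with METIS roughly as follows, assuming gpmetis is installed and using the x1.65536002.graph.info file from the mesh download:)
Code:
gpmetis x1.65536002.graph.info 256
which should produce x1.65536002.graph.info.part.256 in the current directory.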


**** Status of the job:
Using default double-precision reals

Reading namelist from file namelist.init_atmosphere
Reading streams configuration from file streams.init_atmosphere
Found mesh stream with filename template x1.65536002.static.nc
Using default io_type for mesh stream
** Attempting to bootstrap MPAS framework using stream: input
Bootstrapping framework with mesh fields from input file 'x1.65536002.static.nc'
 
EMMANUEL said:
For the next step, I have set config_native_gwd_static = true and all other pre-processing stages to false, and I used x1.65536002.static.nc (the output of the first step) as the input in streams.init_atmosphere for processing the GWDO fields, running with 16 nodes and ppn=1.
It looks like you've created a separate thread for this question here, and if I'm not mistaken you had also created and deleted a post about the same issue in this thread. I'll follow up in the new thread you've created, but in future, please do not create multiple posts about the same question (or delete posts once you've created them) -- it makes it more difficult for those of us monitoring the forum to know where the "current version" of your question is found.
 