
Create static file with 15km resolution

Monica

New member
Hi all,

I'm running into the same problem as in this thread (https://forum.mmm.ucar.edu/threads/...processing-with-60-3km-resolution.10849/), but for a 15-km resolution mesh. I would have asked there directly, but I don't have permission to post in that thread. When I create the static file, log.init_atmosphere.0000.out stops here:



Computing GWDO static fields on the native MPAS mesh
--- Using GMTED2010 terrain dataset for GWDO static fields

I also tried changing src/core_init_atmosphere/mpas_init_atm_gwd.F and recompiling MPAS, but that did not solve the problem.



The software versions I am using are:

pio-2.5.9

pnetcdf-1.12.2

hdf5-1.10.9

zlib-1.2.11

netcdf-c-4.7.4

netcdf-fortran-4.5.3

MPAS-A-7.3

All of the above were compiled with mpiifort.



The full log.init_atmosphere.0000.out is also attached.



Best,

Monica
 

Attachments

  • log.init_atmosphere.0000.out.txt (199.8 KB)
From your log file, it looks like you may be using 240 MPI tasks. Most importantly, the static field processing only works correctly in parallel in the MPAS v7.x release if it is done using a CVT partition file. I believe we provide a CVT partition file for the 15-km mesh with 16 partitions, but not with 240 partitions. (Note that there is a general graph partition file with 240 partitions supplied with the x1.2621442 mesh, but the partitions in that file are not guaranteed to be convex, which can lead to incorrectly processed static fields). So, it will be important to use 16 MPI tasks, along with the x1.2621442.cvt.part.16 partition file.
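
As a rough sketch (assuming the usual &decomposition group in namelist.init_atmosphere; check the option name against your own namelist), the static-field step would then look something like:

&decomposition
    config_block_decomp_file_prefix = 'x1.2621442.cvt.part.'
/

mpiexec -n 16 ./init_atmosphere_model

The task count (16) is appended to the prefix at run time, so the model will look for x1.2621442.cvt.part.16.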

Another possible complication, though, is the fact that the computation of the GWDO static fields requires each MPI task to read about 4 GB of data into memory. It's important, then, to make sure that each MPI task has more than 4 GB of memory available, and in some cases, this might mean undersubscribing nodes. For example, NCAR's Cheyenne system has 36 cores per node, but only ~50 GB of usable memory per node. On Cheyenne, I typically assign just 8 MPI tasks to each node when processing static and GWDO fields in parallel.
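
For example, on a PBS system like Cheyenne, undersubscription might look roughly like the following for a 16-task run (the node and task counts are only an illustration, and the exact launch command depends on the local MPI):

#PBS -l select=2:ncpus=36:mpiprocs=8

mpiexec -n 16 ./init_atmosphere_model

With 8 tasks per 36-core node, each MPI task gets roughly 6 GB of the ~50 GB of usable node memory, comfortably above the ~4 GB needed for the GWDO fields.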

After getting past the static interpolation step, any graph partition file can be used, and there often isn't a need to undersubscribe nodes.
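
For instance, a later run could simply switch the prefix back to the regular graph partition files and use all cores (again assuming the x1.2621442 mesh and the standard namelist option):

&decomposition
    config_block_decomp_file_prefix = 'x1.2621442.graph.info.part.'
/

mpiexec -n 240 ./init_atmosphere_model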
 