
3-km mesh


wcheng

New member
Hi

Where can I find the x1.*.graph.info and x1.*.grid.* files for the 3-km (uniform) mesh?

I don't see them at

https://mpas-dev.github.io/atmosphere/atmosphere_meshes.html

Thanks!
 
I've just added a link to the 3-km mesh files on the MPAS-Atmosphere mesh download page. The download only includes a few mesh partition files, so it will probably be necessary to install Metis (http://glaros.dtc.umn.edu/gkhome/metis/metis/overview) to create partition files appropriate to your machine. Here's an example command:
Code:
gpmetis -minconn -contig -niter=200 x1.65536002.graph.info 16384
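For reference, gpmetis names its output after the graph file with a ".part.<nparts>" suffix, so the command above should produce a file named x1.65536002.graph.info.part.16384. If I remember correctly, that prefix is what the &decomposition group of the namelist expects (the number of MPI tasks is appended to the prefix at run time):
Code:
&decomposition
    config_block_decomp_file_prefix = 'x1.65536002.graph.info.part.'
/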
 
Michael: thanks again for providing the 3-km mesh. The file provided (x1.65536002.cvt.part.256) is configured for 256 cores. It seems that, on our system, 256 cores are insufficient to process the mesh. Could you please provide a file for more cores? For the 10-km mesh, the file provided is x1.5898242.cvt.part.64; I could only make that work by setting

config_native_gwd_static = false

Thanks!
 
Processing the static fields (especially the GWDO fields) for these larger meshes can be a bit tricky. I've found on Cheyenne, for example, that I need to undersubscribe nodes to gain access to enough aggregate memory, and I ended up processing terrain, land use, etc. separately from the GWDO fields. From memory, here's what I tried, and what might work on your system:

1) First, set config_static_interp = true and all other pre-processing stages to "false". Then, try running with, say, 16 nodes with 16 MPI ranks per node to produce a static file without GWDO fields. Here, you'll need to use the x1.65536002.cvt.part.256 file to ensure correct interpolation of the static fields in parallel.

2) Next, set config_native_gwd_static = true and all other pre-processing stages to "false". You'll also need to set the input file in the "streams.init_atmosphere" file to the name of the static file produced in (1) and the output file to some other name (e.g., "static_with_gwdo.nc"). Then, try processing the GWDO fields with 16 nodes and just one or two MPI ranks per node (see the namelist sketch at the end of this post).

The GWDO processing is not memory efficient right now: each MPI rank reads in the entire 30-arc-second global terrain dataset, which is about 3.7 GB when typecast into 32-bit reals. So, to avoid exceeding the amount of memory on any node, you'll need to use just a few MPI ranks per node. Unlike the terrain, land use, etc. fields, the GWDO fields don't require any special CVT partition file to be processed in parallel, so you can just use Metis to create an x1.65536002.graph.info.part.16 file as needed.
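To make the two steps concrete, here's a sketch of the &preproc_stages settings I have in mind (option names as in a typical namelist.init_atmosphere; please double-check them against your own file):
Code:
! Step 1: interpolate terrain, land use, etc. (use x1.65536002.cvt.part.256)
&preproc_stages
    config_static_interp = true
    config_native_gwd_static = false
    config_vertical_grid = false
    config_met_interp = false
    config_input_sst = false
    config_frac_seaice = false
/

! Step 2: process only the GWDO fields (use a regular Metis partition)
&preproc_stages
    config_static_interp = false
    config_native_gwd_static = true
    config_vertical_grid = false
    config_met_interp = false
    config_input_sst = false
    config_frac_seaice = false
/
The 16-way partition file for step 2 can be created with the same gpmetis command as before, just with a different part count:
Code:
gpmetis -minconn -contig -niter=200 x1.65536002.graph.info 16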
 
Michael: I did as you suggested in Step 1 (I had also tried this earlier, but I'm re-running it to capture logs). I got an error message that I suspect has to do with memory. I am attaching my logs.

I used the same executable that was able to generate the 10-km mesh, so if it's not memory, I'm not sure what it is.

Thanks again for your help!
 

Attachments

  • namelist.init_atmosphere.txt
  • streams.init_atmosphere.txt
  • log.init_atmosphere.0000.out.txt
  • test.out.txt
Sorry -- I forgot one important detail to reduce memory usage: you'll also need to set all dimensions to 1 in the &dimensions namelist:
Code:
&dimensions
    config_nvertlevels = 1
    config_nsoillevels = 1
    config_nfglevels = 1
    config_nfgsoillevels = 1
/
Otherwise, the MPAS infrastructure will allocate quite a few full, 3-d fields (e.g., theta, u, rho, etc.) that are not needed when processing static and GWDO fields.

Of course, after the static and GWDO fields have been processed, you'll set these dimensions to their final values before producing the full initial conditions file with many more nodes and a regular Metis partition file.
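As an illustrative example only (the values below are assumptions, not taken from this thread; they depend on your chosen vertical grid and first-guess dataset), the restored &dimensions block might look something like:
Code:
&dimensions
    config_nvertlevels = 55       ! assumed: number of model vertical levels
    config_nsoillevels = 4        ! assumed: number of soil levels
    config_nfglevels = 38         ! assumed: vertical levels in the first-guess data
    config_nfgsoillevels = 4      ! assumed: soil levels in the first-guess data
/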
 
Thanks for your suggestion, Michael. I was able to get Step 1 to work. When I got to Step 2 (I used 16 nodes with 1 core each), I got a segmentation fault:

Code:
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line      Source
init_atmosphere_m  000000000092CC04  for__signal_handl  Unknown   Unknown
libpthread-2.17.s  00002B97EAD97100  Unknown            Unknown   Unknown
init_atmosphere_m  00000000004FA116  Unknown            Unknown   Unknown
init_atmosphere_m  00000000004AC20D  Unknown            Unknown   Unknown
init_atmosphere_m  000000000044B219  Unknown            Unknown   Unknown
init_atmosphere_m  000000000040ED15  Unknown            Unknown   Unknown
init_atmosphere_m  000000000040ECAE  Unknown            Unknown   Unknown
init_atmosphere_m  000000000040EC5E  Unknown            Unknown   Unknown
libc-2.17.so       00002B97EB2C7B15  __libc_start_main  Unknown   Unknown
init_atmosphere_m  000000000040EB69  Unknown            Unknown   Unknown

I may try to run with what I have for now. Thanks again for your help!
 
Testing without the GWDO fields should be fine if you just set config_gwdo_scheme = 'off' in your &physics namelist. The parameterization of gravity wave drag by orography may not be as important on a 3-km mesh, anyway. Any time you'd like to follow up on the GWDO field processing, though, feel free to post here!
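In other words, something along these lines in the &physics group of the model namelist (namelist.atmosphere in a typical setup):
Code:
&physics
    config_gwdo_scheme = 'off'
/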
 
Michael: thanks for your suggestion... that's what I figured. The GWD is probably not needed for the 3-km mesh.
 
In both cases, what should the clobber_mode be in streams.init_atmosphere? Should it be "overwrite" or "append"?

mgduda said:
Processing the static fields (especially the GWDO fields) for these larger meshes can be a bit tricky. ...
 
There's no need to specify a clobber_mode attribute (and note that the default clobber_mode is "never_modify"), since no files will be overwritten. As mentioned in step 2:
You'll also need to set the input file in the "streams.init_atmosphere" file to the name of the static file produced in (1) and the output file to some other name (e.g., "static_with_gwdo.nc").
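As a sketch of what step 2's streams.init_atmosphere might contain (stream names, attributes, and the input filename here are from memory, so adapt them to your actual file), with distinct input and output filenames so nothing needs to be clobbered:
Code:
<immutable_stream name="input"
                  type="input"
                  filename_template="x1.65536002.static.nc"
                  input_interval="initial_only"/>

<immutable_stream name="output"
                  type="output"
                  filename_template="static_with_gwdo.nc"
                  output_interval="initial_only"/>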
 
Thank you, Dr. Mgduda, for the confirmation and suggestion. I am tracking down what might be the issue in running the atmosphere model, checking everything one by one.


mgduda said:
There's no need to specify a clobber_mode attribute (and note that the default clobber_mode is "never_modify"), since no files will be overwritten. ...
 