new HPC center...hoping to run MPAS at this facility

dkmiller

New member
Can someone help me determine whether additional software installations will be needed, and which "Instance Type" would be optimal, for running MPAS at this new HPC center?

The Initiative will offer the following HPC Resources:
  • Ready to Run: Pre-configured installations of:
      • Fun3D
      • Loci/Chem
      • ANSYS
      • Star-CCM
      • CTH
      • Cubit
      • MATLAB
      • CUDA
      • OpenFOAM
      • Paraview
      • VisIt
      • Pointwise
      • OpenMPI
      • Tecplot
      • AFSIM
      • ALE3D
      • Cart3D
Available Instance Types:
“od2” – 2 x AMD EPYC 7543 [64 cores] / 256 GB RAM / HDR100 InfiniBand / node
“od3” – 2 x AMD EPYC 9454 [96 cores] / 768 GB RAM / HDR100 InfiniBand / node
“od_hopper” – 2 x AMD EPYC 9455 [96 cores] / 2.3 TB RAM / HDR100 InfiniBand / 2 x H200 GPUs / node [expected ~May 2025]

Thank you for any comments and suggestions!
 
The instance types seem reasonable.

For more information on the required installs, please see the "Building MPAS - Prerequisites" section of the MPAS-A User Guide.

You may already have some of this software, but I think you'll need: an MPI-enabled HDF5 build (required by the libraries below), the netCDF library with Fortran interfaces, the parallel-netCDF library, and (optionally) the ParallelIO library. For GPU runs, you'll need to make sure the MPI library is built and configured for CUDA-aware MPI. You may also need the CUDA Multi-Process Service (MPS) to be available if you want to run multiple MPI ranks per GPU (highly recommended).
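If it helps, here is a small Python sketch for checking whether those prerequisites are discoverable on a login node. The tool names (h5pcc for parallel HDF5, nc-config for netCDF, pnc-config for parallel-netCDF) are my assumptions; your center's module system may expose different names or require a `module load` first:

```python
import shutil

def check_tools(tools):
    """Return a dict mapping each tool name to True if it is found on PATH."""
    return {name: shutil.which(name) is not None for name in tools}

if __name__ == "__main__":
    # Assumed wrapper/config tool names for the MPAS prerequisites;
    # adjust these to match what your HPC center actually installs.
    status = check_tools(["mpif90", "h5pcc", "nc-config", "pnc-config"])
    for name, found in status.items():
        print(f"{name}: {'found' if found else 'MISSING'}")
```

Running this after loading your compiler/MPI modules gives a quick sanity check before attempting an MPAS build.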
 
Thank you for the VERY helpful reply and user guide link!
 
The new HPC center does not have the capacity to run MPAS on GPUs. Does anyone have specs/stats from running any of the benchmark cases on CPUs? We're trying to figure out whether their system has been properly optimized for CPU runs of MPAS. Thank you for any information, comments, or suggestions you can pass along!
 
We have a regional ("CONUS") 12-km benchmark case available from https://www2.mmm.ucar.edu/projects/mpas/benchmark/v8.3/12km_conus_benchmark.tar.bz2 that can be used to assess system performance with MPAS-A. The download contains a complete run directory with all input files; all that's needed to run the case is to symbolically link your compiled atmosphere_model program into that directory and provide your own job script.

On NSF NCAR's Derecho system, we're seeing the following integration times for a six-hour simulation:
  • 128 MPI tasks : 362 s
  • 256 MPI tasks : 168 s
The integration time is just the computational time needed to integrate for six simulation hours, with all model start-up time and I/O time factored out. After running the case, you can find the integration time in the log.atmosphere.0000.out file by checking the "time integration" timer in the summary at the end of the log. If you're seeing comparable timing at your HPC center, then it's likely that MPAS-A is running well.
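For a quick comparison, a small Python helper can pull out that timer and compute scaling efficiency against the Derecho numbers above. The timer-line format assumed by the regex below is my assumption based on typical MPAS log summaries; check the actual layout in your log.atmosphere.0000.out:

```python
import re

def integration_time(log_text):
    """Extract the 'time integration' value (seconds) from an MPAS-A log
    summary. Assumes the timer name is followed by a numeric column."""
    m = re.search(r"time integration\s+([\d.]+)", log_text)
    return float(m.group(1)) if m else None

def efficiency(t_base, t_scaled, factor):
    """Parallel efficiency when the MPI task count grows by `factor`."""
    return (t_base / t_scaled) / factor

# Derecho reference numbers from above: 128 tasks -> 362 s, 256 tasks -> 168 s.
print(round(efficiency(362.0, 168.0, 2), 3))
```

For these reference numbers the efficiency comes out slightly above 1 (superlinear), which can happen when the smaller per-task working set fits better in cache; numbers well below 1 on your system would suggest an interconnect or build problem worth investigating.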
 