New HPC center...hoping to run MPAS at this facility

dkmiller

New member
Can someone help me figure out whether additional software installations will be needed, and which "Instance Type" would be optimal for running MPAS at this new HPC center?

The Initiative will offer the following HPC Resources:
  • Ready to Run: Pre-configured installations of:
    • Fun3D
    • Loci/Chem
    • ANSYS
    • Star-CCM
    • CTH
    • Cubit
    • MATLAB
    • CUDA
    • OpenFOAM
    • Paraview
    • VisIt
    • Pointwise
    • OpenMPI
    • Tecplot
    • AFSIM
    • ALE3D
    • Cart3D
Available Instance Types:
  • “od2” – 2 x AMD Epyc 7543 [64 cores] / 256 GB RAM / HDR100 InfiniBand / node
  • “od3” – 2 x AMD Epyc 9454 [96 cores] / 768 GB RAM / HDR100 InfiniBand / node
  • “od_hopper” – 2 x AMD Epyc 9455 [96 cores] / 2.3 TB RAM / HDR100 InfiniBand / 2 x H200 GPUs / node [expected ~May 2025]

Thank you for any comments and suggestions!
 
The instance types seem reasonable.

For more information on the installs, please see the prerequisites section of the user guide: Building MPAS - Prerequisites in the MPAS-A User Guide.

You may have some of this software already, but I think you'll need: an MPI-enabled HDF5 (required by the libraries below), the netCDF library with Fortran interfaces, the parallel-netCDF library, and (optionally) ParallelIO. For GPU runs, you'll need to make sure the MPI library is built and configured for CUDA-aware MPI. You may also need CUDA MPS to be available if you want to run multiple MPI ranks per GPU (highly recommended).
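In case a sketch helps, this is roughly how that I/O stack is often built from source with the MPI compiler wrappers. Treat it as a hedged outline, not site-specific instructions: the version numbers, install prefix, and make parallelism are placeholders you'd adapt to the machine.

    # Minimal sketch of the MPAS I/O library stack. Order matters:
    # HDF5 -> netCDF-C -> netCDF-Fortran -> PnetCDF -> (optional) PIO.
    export PREFIX=$HOME/mpas-libs         # placeholder install location
    export CC=mpicc CXX=mpicxx FC=mpif90  # MPI compiler wrappers

    # HDF5 with MPI (parallel) I/O enabled
    cd hdf5-1.14.x
    ./configure --prefix=$PREFIX --enable-parallel
    make -j 8 && make install

    # netCDF-C, built against the parallel HDF5 above
    cd ../netcdf-c-4.9.x
    CPPFLAGS=-I$PREFIX/include LDFLAGS=-L$PREFIX/lib \
        ./configure --prefix=$PREFIX --disable-dap
    make -j 8 && make install

    # netCDF-Fortran (the Fortran interfaces MPAS uses)
    cd ../netcdf-fortran-4.6.x
    CPPFLAGS=-I$PREFIX/include LDFLAGS=-L$PREFIX/lib \
        ./configure --prefix=$PREFIX
    make -j 8 && make install

    # Parallel-netCDF (PnetCDF)
    cd ../pnetcdf-1.12.x
    ./configure --prefix=$PREFIX MPICC=mpicc MPIF90=mpif90
    make -j 8 && make install

    # (Optional) ParallelIO, pointed at the libraries above
    cd ../ParallelIO
    cmake -DCMAKE_INSTALL_PREFIX=$PREFIX \
          -DNetCDF_C_PATH=$PREFIX \
          -DNetCDF_Fortran_PATH=$PREFIX \
          -DPnetCDF_PATH=$PREFIX .
    make -j 8 && make install

The MPAS build then typically picks these up through the NETCDF, PNETCDF, and (if used) PIO environment variables, all pointing at the same prefix here.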
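On the GPU side, here's a quick sketch of the checks and setup I'd run first. The ompi_info line assumes Open MPI (which is on the Ready-to-Run list); other MPI implementations report CUDA support differently, and the MPS directories below are placeholder paths.

    # Does this Open MPI report CUDA-aware support?
    # (prints ...:mpi_built_with_cuda_support:value:true if so)
    ompi_info --parsable --all | grep mpi_built_with_cuda_support:value

    # Start the CUDA Multi-Process Service (MPS) daemon before
    # launching multiple MPI ranks per GPU.
    export CUDA_MPS_PIPE_DIRECTORY=/tmp/mps-pipe  # placeholder path
    export CUDA_MPS_LOG_DIRECTORY=/tmp/mps-log    # placeholder path
    nvidia-cuda-mps-control -d

    # ... launch the MPI job here ...

    # Shut MPS down once the job is done.
    echo quit | nvidia-cuda-mps-control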
 
Thank you for the VERY helpful reply and user guide link!
 