
Computing power necessary for global WRF/MPAS

This post is from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled; if you have follow-up questions related to this post, please start a new thread from the forum home page.


Hi everyone! My team and I are fairly new to working with WRF and MPAS, and we were wondering whether anyone here has experience comparing the computing power needed to run either model on a global scale (at multiple resolutions). We were told that WRF at 3-km resolution would be best for our needs, but also that it would currently be impossible to run that on a global scale without a massive supercomputer.

That said, we wanted to see how fine a resolution we could manage with the computing power we have (or could acquire through cloud computing). We haven't yet been able to find specific details on how much computing power these models require at different resolutions on a global scale, so any pointers in the right direction would be greatly appreciated. Thank you!

I've put a copy of this question in the MPAS section so that the MPAS group will see it. Using the WRF model for global domains isn't typically recommended, so asking this question specifically about MPAS will probably be your best bet. Someone will respond to this inquiry soon.

For both WRF and MPAS, the computational costs scale more or less linearly with the number of grid cells, and inversely with the length of the integration time step. So, if you have the computational cost for one simulation, you can quickly estimate the cost of other simulations.

On the NWSC Cheyenne computer, the computational cost for a global, quasi-uniform 15-km simulation with 2,621,442 grid columns and 56 vertical layers is about 530 core-hours per simulated day. This includes no file output and the default "mesoscale reference" suite of physics parameterizations, with the radiation schemes called once every 15 simulated minutes; the model integration time step is 90 seconds.

For example, consider a global, variable-resolution simulation on a mesh with 6,488,066 grid columns and a minimum grid distance of 3 km, which would probably dictate an integration time step of around 18 seconds. A rough estimate of the cost would be:

530 core-hours/day * (6488066 / 2621442) * (90 / 18) = 6559 core-hours per simulated day
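As a sketch, this back-of-the-envelope scaling can be wrapped in a small Python function. The reference figures (530 core-hours/day, 2,621,442 columns, 90-second time step) and the 3-km mesh numbers are the values quoted above; the function name and structure are just illustrative:

```python
def estimate_cost(ref_cost, ref_columns, ref_dt, columns, dt):
    """Scale a reference cost (core-hours per simulated day) linearly
    with the grid-column count and inversely with the time step."""
    return ref_cost * (columns / ref_columns) * (ref_dt / dt)

# Reference: global, quasi-uniform 15-km MPAS-A simulation on Cheyenne
# (no file output, "mesoscale reference" physics suite).
cost = estimate_cost(ref_cost=530.0, ref_columns=2_621_442, ref_dt=90.0,
                     columns=6_488_066, dt=18.0)

print(f"{cost:.0f} core-hours per simulated day")  # prints: 6559 core-hours per simulated day
```

The same function can be reused for any other mesh once a single reference cost has been measured.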

A few other factors that influence computational cost include:
  • The suite of physics parameterizations that are used in the simulation.
  • The amount of model output written to the filesystem: more file output implies more time spent writing data.
  • The compiler used to build the model executables, as well as the optimization flags used for that compiler.
  • The number of MPI tasks used to run the model: there is a general downward trend in parallel efficiency as more MPI tasks are used in a model simulation. We find that on Cheyenne, MPAS-A scales with around 70% parallel efficiency down to around 100 grid columns per MPI task.
  • The simulation length: there is a cost associated with "starting up" the model (reading initial conditions, allocating memory, initializing data structures, etc.), and longer simulations will allow that start-up cost to be amortized over more integration time steps.
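To illustrate the parallel-efficiency point, here is a rough wall-clock estimate for the 3-km example above. The ~70% efficiency and ~100 columns per MPI task are the figures quoted above; treating efficiency as a single flat factor (rather than a measured scaling curve) is an illustrative assumption:

```python
def wall_hours_per_sim_day(core_hours_per_day, n_tasks, efficiency=1.0):
    """Convert core-hours per simulated day into wall-clock hours,
    given an MPI task count and an assumed parallel efficiency."""
    return core_hours_per_day / (n_tasks * efficiency)

# Decompose the 6,488,066-column mesh to roughly 100 columns per MPI task,
# and assume a flat 70% parallel efficiency at that decomposition.
n_tasks = 6_488_066 // 100   # about 64,880 MPI tasks
hours = wall_hours_per_sim_day(6559.0, n_tasks, efficiency=0.70)

print(f"~{hours * 60:.1f} wall-clock minutes per simulated day")
```

In practice the efficiency depends on the machine and decomposition, so a short benchmark run at the intended task count is the only reliable way to pin this number down.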