
MPAS limited area 3x slower than WRF regional run

davidovens

New member
I have set up an MPAS 8.3.1 4-km limited-area run with 38 vertical levels and a 21,000 m model top, using the convection_permitting physics suite, and am comparing it to a WRF 36/12/4-km run whose 4-km domain has nearly the same boundary (I used 20 points from the WRF boundaries to cut the area out of the MPAS 4-km quasi-uniform mesh x1.36864002.static.nc).

I compiled MPAS on our Linux cluster with Intel 19.1.3.304 (2020) and OpenMPI 5.0.3, using "-O3 -xHost" optimizations just as I do for WRF. My WRF 4.1.3 runs are compiled and run with older Intel and OpenMPI versions. For the 4-km domain, each 24-s timestep takes 0.55 wallclock seconds in WRF but 1.58 wallclock seconds in MPAS, roughly 3x slower.

Has anyone else compared WRF and MPAS timing, and if so, what kind of results did you see? I have included extracts of log.atmosphere.0000.out and rsl.out.0000 for comparison.
 

Attachments

  • log.atmosphere.0000.out.txt (20.7 KB)
  • rsl.out.0000.txt (27.9 KB)
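For anyone repeating the comparison, below is a minimal sketch of how the average per-step WRF cost can be pulled from the log, assuming the standard "Timing for main:" lines in rsl.out.0000; the MPAS per-step average can be read in a similar spirit from the timer summary at the end of log.atmosphere.0000.out.

import re

# Average the per-step wallclock times that WRF reports in rsl.out.0000.
# Assumes the standard lines of the form:
#   Timing for main: time 2023-04-21_00:00:24 on domain   3:    0.55000 elapsed seconds
pattern = re.compile(r"Timing for main:.*on domain\s+(\d+):\s+([\d.]+) elapsed seconds")

times = {}  # domain id -> list of elapsed seconds per step
with open("rsl.out.0000") as f:
    for line in f:
        m = pattern.search(line)
        if m:
            times.setdefault(int(m.group(1)), []).append(float(m.group(2)))

for domain, vals in sorted(times.items()):
    print(f"domain {domain}: {len(vals)} steps, mean {sum(vals)/len(vals):.3f} s/step")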
Can you tell me the number of cells in your regional MPAS mesh? It seems that you have 38 vertical levels; please confirm this is correct.

How many grid points are in your WRF domains? Did you also run with 38 vertical levels?

Please upload your namelist.input (for WRF) and namelist.atmosphere (for MPAS) so I can take a look. I also need to know how many processors you used to run the two models.

Thanks.
 
There are 150,412 cells in the MPAS mesh; the WRF domain has 405 x 282 = 114,210 grid points. MPAS has 38 levels; WRF has bottom_top = 37 (bottom_top_stag = 38). I used all 32 processors on a 32-CPU machine. Here are the namelists.
 

Attachments

  • namelist.atmosphere.txt (2 KB)
  • namelist.input.txt (16.2 KB)
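For context, normalizing the reported step times by horizontal column count gives a rough per-column cost comparison. This is a back-of-the-envelope sketch using only the numbers quoted in this thread; it ignores differences in dynamics sub-stepping and per-step physics work.

# Rough per-column cost comparison from the figures reported above.
wrf_cols, mpas_cols = 114_210, 150_412      # horizontal columns (grid points / cells)
wrf_step, mpas_step = 0.55, 1.58            # wallclock seconds per 24-s model step

wrf_cost = wrf_step / wrf_cols * 1e6        # microseconds per column per step
mpas_cost = mpas_step / mpas_cols * 1e6

print(f"WRF : {wrf_cost:.1f} us per column per step")   # ~4.8
print(f"MPAS: {mpas_cost:.1f} us per column per step")  # ~10.5
print(f"ratio: {mpas_cost / wrf_cost:.2f}x")            # ~2.2

So even after accounting for the roughly 32% larger MPAS column count, each MPAS column costs about 2.2x as much per step as a WRF column in these runs.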