MPAS-A GPU vs. CPU discrepancies

nickheavens-cgg

New member
One thing I noticed when running the GPU-accelerated version of MPAS-A is some significant discrepancies between its results and those of the CPU version. I include one example from a 24-hour simulation at 60 km resolution, initialised from GFS at 0Z 16 May 2023. I ran the same test with a CPU version built by compiling the GPU 6.x branch without OpenACC directives. The GPU-accelerated version has far fewer low clouds (image labeled 3 in the lower right corner) and about twice the maximum cloud water in any layer, and there are many such differences.

I also had a 24 km resolution simulation with the main branch (version 7.3 at the time), so I decided to take a look at the same fields. To my surprise (having processed all results with convert_mpas at 0.5 degree resolution), its lowest-layer cloud water field looks nearly identical to the one from the 60 km version 6.x CPU run (image labeled 4 in the lower right corner).
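In case it is useful to anyone reproducing this, here is a minimal sketch of the kind of comparison I am describing: once both runs are remapped to the same lat-lon grid with convert_mpas, the CPU and GPU fields can be diffed directly. The function name and tolerance below are my own choices, and the arrays are synthetic stand-ins for the real cloud water output:

```python
import numpy as np

def compare_fields(cpu, gpu, rtol=1e-3):
    """Summarise CPU-vs-GPU differences for one 2-D diagnostic field."""
    diff = gpu - cpu
    return {
        "max_abs_diff": float(np.max(np.abs(diff))),
        "rms_diff": float(np.sqrt(np.mean(diff ** 2))),
        # Fraction of grid points whose difference exceeds the relative
        # tolerance (guarding against division by zero at empty points).
        "frac_exceeding": float(
            np.mean(np.abs(diff) > rtol * np.maximum(np.abs(cpu), 1e-30))
        ),
    }

# Synthetic stand-in for a 0.5-degree convert_mpas field (360 x 720 points).
rng = np.random.default_rng(0)
cpu_field = rng.random((360, 720)).astype(np.float32)
gpu_field = cpu_field * (1.0 + 1e-4)  # a uniform 0.01% perturbation

print(compare_fields(cpu_field, gpu_field))
```

With real output the two fields would instead be read from the convert_mpas NetCDF files; the summary statistics make it easier to say whether a difference is round-off-sized or physically significant (as the low-cloud differences here appear to be).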

I imagine that I am not the first person to notice this. I did find that Kim et al. (2020) noticed something similar with their single-parameterisation GPU-accelerated version of MPAS-A ("GPU acceleration of MPAS microphysics WSM6 using OpenACC directives: Performance and verification") and mentioned correcting some issues with exponential functions. But I tried their fix, and it did not change the results, suggesting that the underlying issue may have been addressed in more recent compilers. Lagged radiation, of course, could also play a role. But I'm curious whether anyone has studied the implications for tuning/validating the GPU version.
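For illustration, the kind of exponential-function sensitivity Kim et al. describe is easy to reproduce in miniature: a power a**b is typically evaluated as exp(b*log(a)), so small rounding differences in the inputs or in the maths library are amplified by large exponents. This sketch uses Python/NumPy single vs. double precision purely as an analogy for CPU vs. GPU maths libraries, not as the actual WSM6 code path:

```python
import numpy as np

# a ** b is generally computed as exp(b * log(a)), so rounding error in
# log(a) is multiplied by b before being exponentiated. Representing
# 1.0001 in float32 already carries ~1e-8 relative error; an exponent
# of 10000 amplifies that to roughly 1e-4 in the result.
f64 = np.float64(1.0001) ** np.float64(10000.0)
f32 = np.float32(1.0001) ** np.float32(10000.0)

rel_diff = abs(float(f32) - f64) / f64
print(f"float64: {f64:.6f}  float32: {float(f32):.6f}  rel diff: {rel_diff:.2e}")
```

Different compilers and GPU maths libraries also round exp/log slightly differently, so the same amplification can appear between CPU and GPU builds even at identical nominal precision, which is consistent with the fix being compiler-dependent.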

Any guidance in this area would be appreciated.

Best regards,

Nick


[Attachments: two screenshots comparing lowest-layer cloud water fields across the GPU, CPU, and version 7.3 runs]