This post is from a previous version of the WRF & MPAS-A Support Forum. Please do not add new replies here. If you would like the thread moved out of the Historical / Archive section, contact us and include a link to the thread to be moved.
I downloaded the file from https://www2.mmm.ucar.edu/wrf/src/benchmark_large.tar.gz. As it doesn't include the WPS files, I was wondering whether the domain coordinates are available (my assumption is that it uses a Lambert conformal projection). Thanks.
I'm mainly using IDV for postprocessing, and its metadata tab also provides those details (what was I thinking?). Regarding the benchmarks, I'm a bit confused. The link is taken from the BAMS article preprint, so I guess these are the new preferred benchmarks (better to run a few hours than 6 minutes). However, this raises two questions:
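For anyone who doesn't want to open IDV just to check the projection, the same information lives in the global attributes of the WRF NetCDF files themselves (MAP_PROJ, TRUELAT1/TRUELAT2, STAND_LON, CEN_LAT, CEN_LON are standard WRF attributes). Below is a minimal sketch; the filename `wrfinput_d01` and the use of the netCDF4 Python package are assumptions, not something specified in this thread:

```python
# Sketch: decoding WRF's map-projection metadata.
# WRF stores the projection as an integer in the MAP_PROJ global attribute;
# these code-to-name pairs are the standard WRF conventions.
MAP_PROJ_NAMES = {
    1: "Lambert conformal",
    2: "Polar stereographic",
    3: "Mercator",
    6: "Lat-lon (cylindrical equidistant)",
}


def projection_name(map_proj: int) -> str:
    """Translate WRF's MAP_PROJ integer code into a readable name."""
    return MAP_PROJ_NAMES.get(map_proj, f"unknown (code {map_proj})")


# With the netCDF4 package installed, the lookup can be driven by the
# file itself (filename is only an example):
#
#   from netCDF4 import Dataset
#   with Dataset("wrfinput_d01") as nc:
#       print(projection_name(nc.MAP_PROJ))
#       print(nc.TRUELAT1, nc.TRUELAT2, nc.STAND_LON, nc.CEN_LAT, nc.CEN_LON)

if __name__ == "__main__":
    print(projection_name(1))  # Lambert conformal
```

Equivalently, `ncdump -h wrfinput_d01 | grep -i proj` shows the same attributes without any Python.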
1 - Is there any more info about the hardware than "it was run on an HPC"? That alone doesn't help with evaluating other systems, because one doesn't really know what the comparison is against.
2 - On a minor note, this was run with GFS data (analysis), not NAM. Interesting: was there any specific reason, or was it simply a choice?
Thank you. I wanted to make sure that was the paper you were referring to, as others have likely done similar things. To get back to your questions:
1) The paper mentions that we used the NCAR HPC, named Cheyenne. You can find information about the specifics of that here.
2) GFS is the input data we typically use for any sort of teaching and/or testing. We have shown reasonable results with it in the past, and it's easy to obtain for the entire globe. The testing done here was not to check the validity of the output against observations, but simply to compare the timing of the NCAR HPC vs. AWS; therefore the input data type should not make a difference.