
compiling an ideal case with ifort/gcc combo

This post was from a previous version of the WRF&MPAS-A Support Forum. New replies have been disabled and if you have follow up questions related to this post, then please start a new thread from the forum home page.


Hi all,

I have gotten different results from the same experiment using two different computers (one at TACC, and one a more ordinary work computer) and I'm trying to track down why the outcomes are so different. The only difference I can think of besides the computers themselves are the compiler options. On the TACC computer I used the ifort/icc combo one can choose from the preset configuration options. On the work machine, the model was compiled with a custom configure.wrf file using ifort and gcc, which I don't think is one of the available configuration options normally, at least not on this version of WRF (a customized version of WRF 3.5). I'm currently trying to compile the model on one of our work machines with one of the preset configuration options, but I was wondering: could it be that somehow using ifort and gcc together would give problematic model behavior, or should this be fine?

Hi Ben,
Different compilers and architectures can certainly cause different results for identical cases. This is something we are aware of. As for your ifort/gcc case, are you having problems with that run, or is it just different? I've never heard of using that combination together, and I'm not a software engineer, but I can ask a colleague whether it could be problematic.
Hi, thanks for your reply. The model compiles and runs without any apparent issues using the ifort-gcc combination, so I thought things were fine until I ran the same experiment on the TACC machine. The models start to diverge not far into the run and ultimately give very different results. I trust the TACC version since it closely reproduces a previous experiment that was done by another group, so I was wondering if there could be some more subtle problem with using the ifort-gcc combination. If you can ask your colleague about it that would be great!

As Kelly said, even when starting from identical initial conditions, different machines (or compilers, or optimization levels) will start a cascading effect in the model solutions. An idealized model run (without any moisture or physics) tends to show relatively stable solutions: they look essentially the same on every machine and with every compiler, because there are few special-case IF tests in the dynamics or transport code.
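The cascade begins at the level of floating-point round-off. A minimal Python sketch (illustrative values only, not WRF code) shows that simply reordering a sum, which is exactly the kind of thing a different compiler or optimization level may do, changes the result in the low bits:

```python
# Floating-point addition is not associative: a different evaluation
# order, as a different compiler or optimization level may choose,
# yields a different result. (Illustrative values, not WRF code.)
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # cancel first, then add 1.0
right = a + (b + c)   # 1.0 is absorbed by 1e16 before the cancellation

print(left, right, left == right)
```

At 1e16 the spacing between adjacent doubles is about 2, so adding 1.0 to -1e16 rounds away entirely; the two orderings give 1.0 and 0.0. Differences of this size are where the machine-to-machine divergence starts.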

However, once moist physics is introduced, nonlinear IF tests (and there are many of them) cause major changes to the solution evolution (for example, it rains here or it does not). Each IF condition that evaluates differently sends the run down a different path through the physics portion of the source code.
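A toy illustration of this mechanism (a chaotic map standing in for the dynamics, with a threshold IF standing in for moist physics; none of this is WRF code): two runs whose initial states differ only at round-off level eventually take different branches, and the "rain" accounting then follows different code paths entirely.

```python
# Toy model: chaotic "dynamics" plus a threshold "physics" IF test.
# Two runs differing by one round-off-sized perturbation diverge to
# order-1 differences. (Illustrative only, not WRF code.)

def step(x, rain):
    x = 3.9 * x * (1.0 - x)      # logistic map stands in for the dynamics
    if x > 0.8:                  # nonlinear IF test: "does it rain here?"
        rain += x - 0.8          # accumulate precipitation
        x -= 0.01                # physics feedback alters the state
    return x, rain

def run(x0, nsteps=100):
    x, rain = x0, 0.0
    history = []
    for _ in range(nsteps):
        x, rain = step(x, rain)
        history.append(x)
    return history, rain

ha, ra = run(0.3)
hb, rb = run(0.3 + 1e-15)        # perturbation at round-off level
max_diff = max(abs(a - b) for a, b in zip(ha, hb))
print(max_diff, abs(ra - rb))    # state difference grows to order 1
```

The interesting part is that after the trajectories separate, the two runs do not just differ slightly; they execute different sequences of the IF branch, so their accumulated "rain" histories are structurally different, not merely offset.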

This introduces a fundamental question of "is this simulation the same as that simulation", and we are not prepared to address that issue in general. Here are some considerations that may help you ascertain whether each solution is a reasonable representation of a model end state.
1. If you use double precision, you may be able to delay the onset of solution departure.
2. Look at time periods close to the initialization time. Are you seeing expected evolutionary differences?
3. Turn off all compiler optimization on both machines and compare again.

We have some statistical methods that we have used in-house to look at model output that should be considered "the same". For example, we use them to determine the likelihood that benchmark results produced by various vendors are valid. This tends to be a bit of work to set up and only uses the first few time steps.
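The exact in-house procedure is not described here, but one plausible sketch of the idea (all names, thresholds, and the toy model are illustrative assumptions, not the actual method) is: build a control ensemble whose members differ only by round-off-level perturbations, and ask whether a candidate run stays within the ensemble's spread over the first few time steps.

```python
import numpy as np

def run(x0, nsteps=5):
    # First few steps of a toy chaotic map, returned as a trajectory.
    xs, x = [], x0
    for _ in range(nsteps):
        x = 3.9 * x * (1.0 - x)
        xs.append(x)
    return np.array(xs)

# Control ensemble: perturb the initial state near round-off level to
# emulate legitimate machine/compiler differences.
base = 0.3
ensemble = np.array([run(base + k * 1e-14) for k in range(-10, 11)])
mean = ensemble.mean(axis=0)
spread = ensemble.std(axis=0)    # per-step spread of the ensemble

def same_climate(candidate, tol=5.0):
    # "Same" if the candidate stays within tol ensemble spreads of the
    # ensemble mean at every early time step. tol is an assumed value.
    dev = np.abs(candidate - mean)
    return bool(np.all(dev <= tol * np.maximum(spread, 1e-15)))

print(same_climate(run(base + 3e-15)))   # round-off-level difference
print(same_climate(run(base + 0.05)))    # genuinely different experiment
```

Only the first few time steps are usable because, as discussed above, chaotic growth eventually makes even round-off-level perturbations diverge to order-1 differences, at which point pointwise comparison carries no information.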