Often, the differences arise from round-off error in floating-point operations, which may be handled differently by different compilers and machines. For example, compilers may use optimized math libraries or convert a division into a multiplication by the reciprocal, and different processors may provide different representations of floating-point numbers (e.g., extended precision) or use different rounding modes by default. Regardless of the source, differences that are initially very small (on the order of machine epsilon) can grow quickly over time in chaotic systems, leading to qualitative differences in the model results. Judt (JAS, 2018) nicely illustrates this error growth.

Below is a simple Fortran program that illustrates how the order of summation can change the result.

Code:

```
program assoc
   implicit none
   real :: x, y, z, w1, w2
   x = 1.0
   y = 2.0**(-24)   ! half an ulp of 1.0 in single precision
   z = 2.0**(-24)
   w1 = (x + y) + z ! each addition rounds back to 1.0
   w2 = x + (y + z) ! y + z = 2**(-23) is representable, so w2 = 1.0 + 2**(-23)
   write(6,*) (w1 - 1.0), (w2 - 1.0)
end program assoc
```

In exact arithmetic, (x + y) + z is identical to x + (y + z), yet this is not in general true in floating-point arithmetic. Compounding matters, numerical models like WRF contain parameterizations of complex physical processes with many conditional statements that depend on floating-point values. It is easy to imagine how these can amplify differences by causing entirely different code to be executed, depending on the compiler. For example, building on the Fortran above:

Code:

```
if (w1 > 1.0) then
   ! ... call some subroutine to handle the case where w1 > 1.0 ...
else
   ! ... call a different subroutine to handle the case where w1 <= 1.0 ...
end if
```

If w1 were instead computed in the same way as w2, an entirely different subroutine would be called.