LouisMarelle
Hello,
I found a bug related to the adaptive time step in WRFV4.0. I found very high spurious T2 differences (up to several kelvins) between sensitivity runs using nearly identical setups when the adaptive time step is enabled. These differences are not caused by normal differences in internal model variability between the runs. They are instead due to differences in the calculated incident solar radiation between the runs, which is obvious from the very different accumulated total downwelling radiation (ACSWDNT) at the end of the runs. This has a large direct influence on the calculated TSK and T2. When running "what if" scenarios, these spurious differences between runs can unfortunately be misinterpreted as a meaningful signal.
I found that this was due to the implicit assumption in the radiation driver that the radiative code is called exactly every radt (the zenith angle is calculated at time+radt/2, time being the time when the radiation code is called). However, when the adaptive time step is enabled, the call to the radiation code is not performed exactly every radt, but after a time greater than radt has elapsed since the last radiation call. This is a relatively minor difference, but it is apparently enough to cause these high temperature differences at some time steps.
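To illustrate the mechanism, here is a minimal toy sketch (not WRF code; the diurnal cosine-zenith function and all numbers are made up for illustration). The driver evaluates the zenith angle at the midpoint of the interval it *assumes* (time+radt/2), but with adaptive stepping the interval that actually elapses is longer, so the true midpoint is time+elapsed/2 and the computed downwelling shortwave is shifted in time:

```python
import math

def cos_zenith(t_hours):
    """Toy diurnal cycle: cosine of the solar zenith angle,
    peaking at local noon (t = 12 h). Purely illustrative."""
    return max(0.0, math.cos((t_hours - 12.0) * math.pi / 12.0))

radt = 0.5       # radiation timestep in hours (e.g. namelist radt = 30 min)
t_call = 9.0     # time of the current radiation call, hours
elapsed = 0.62   # actual time elapsed since the last call (> radt
                 # because the adaptive step overshot the boundary)

# Midpoint the driver assumes vs. the midpoint of the real interval
assumed_mid = t_call + radt / 2.0
actual_mid = t_call + elapsed / 2.0

# The zenith-angle (and hence SWDOWN) error introduced at this call
err = abs(cos_zenith(assumed_mid) - cos_zenith(actual_mid))
print(f"zenith-cosine error at this call: {err:.4f}")
```

Each call where elapsed exceeds radt contributes such an offset; because the overshoot differs between runs with slightly different setups, the accumulated radiation (ACSWDNT) diverges between them.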
I fixed this issue by syncing the adaptive time step to radt in dyn_em/adapt_timestep_em.F, so that the radiation code is called exactly every radt (similar to what is already done to sync the adaptive time step with the output timestep). This correction makes the spurious T2 and TSK differences disappear. I am happy to share the corrected code with the development team if they need it.
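The syncing logic can be sketched as follows (a hypothetical helper in Python, not the actual Fortran change in adapt_timestep_em.F): the adaptive step is simply clamped so that a step boundary lands exactly on the next radiation call time, mirroring the existing sync to the output time:

```python
def sync_dt_to_radt(time_since_radcall, dt, radt):
    """Clamp the adaptive time step dt (seconds) so that a step
    boundary falls exactly on the next radiation call, i.e. so that
    radiation is invoked exactly every radt seconds.
    Hypothetical illustration of the fix, not the WRF source."""
    remaining = radt - time_since_radcall
    if remaining <= 0.0:
        # Boundary reached: radiation is called now, dt is unconstrained
        return dt
    # Shorten the step if it would overshoot the radiation boundary
    return min(dt, remaining)

# Example: radt = 30 min; 100 s remain until the next radiation call,
# so a proposed 150 s step is trimmed to land exactly on the boundary
print(sync_dt_to_radt(1700.0, 150.0, 1800.0))  # 100.0
print(sync_dt_to_radt(1000.0, 150.0, 1800.0))  # 150.0 (no clamping needed)
```

With this clamping, the elapsed time between radiation calls is always exactly radt, so the time+radt/2 midpoint assumption in the radiation driver holds again.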
Louis Marelle
Postdoc at LATMOS/IPSL