
(RESOLVED) 7-day and 7x1-day runs give rather different results

This post is from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled; if you have follow-up questions related to this post, please start a new thread from the forum home page.

userwrfch

Hi,

I am using WRF 4.1.3 to downscale hourly ERA5 data with a 1:3 nesting ratio across four domains (30 km, 10 km, 3.33 km, 1.11 km). I did some test runs to analyze the sensitivity of WRF to the length of the simulation. The test runs use exactly the same configuration and physics options and differ only in their length:
- Test 1: one continuous 7-day run (with 12 hours of spin-up time)
- Test 2: seven separate 1-day runs (36 hours each, with 12 hours of spin-up time each)

Both tests ran with the same number of processors and nodes. The results show the same overall pattern, e.g. for temperature in degrees Celsius, but they are still quite different, with differences of up to 4 °C at individual hours. I also cannot see a clear pattern that would tell me my spin-up time is too short. I attached the time series for both Test 1 and Test 2, as well as the absolute temperature difference, for four weather-station locations. The deviation between Test 1 and Test 2 seems to grow over the simulation time, although this trend is not clear at all stations. Since I am driving WRF with reanalysis data, it should not be necessary to reinitialize the model every day, so in principle I should be able to run WRF for a month or a year in one go.
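For context, the station comparison could be done roughly like this (a minimal sketch in Python with netCDF4/numpy; the file names and station coordinates are placeholders, and Test 2 is assumed to have been concatenated into a single file with the spin-up hours already removed):

Code:
import numpy as np
from netCDF4 import Dataset

# Placeholder paths and station location -- adjust to the actual setup.
TEST1_FILE = "wrfout_d04_test1.nc"   # single 7-day run
TEST2_FILE = "wrfout_d04_test2.nc"   # seven 1-day runs, concatenated, spin-up removed
STATION_LAT, STATION_LON = 47.0, 8.0

def t2_series(path, lat, lon):
    """Hourly 2 m temperature (deg C) at the grid point nearest to (lat, lon)."""
    with Dataset(path) as nc:
        xlat = nc.variables["XLAT"][0]
        xlon = nc.variables["XLONG"][0]
        # Nearest-neighbour search is good enough on a ~1.1 km grid.
        j, i = np.unravel_index(
            np.argmin((xlat - lat) ** 2 + (xlon - lon) ** 2), xlat.shape)
        return nc.variables["T2"][:, j, i] - 273.15  # Kelvin -> Celsius

t1 = t2_series(TEST1_FILE, STATION_LAT, STATION_LON)
t2 = t2_series(TEST2_FILE, STATION_LAT, STATION_LON)

n = min(len(t1), len(t2))                 # compare only the overlapping hours
abs_diff = np.abs(t1[:n] - t2[:n])
print("max |T1 - T2| = %.2f degC, mean = %.2f degC" % (abs_diff.max(), abs_diff.mean()))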

Could you please let me know why there is such a relatively large difference between two test runs that differ only in the length of the simulation (excluding the spin-up time)? Is this behaviour to be expected, or am I missing something? Any help would be appreciated.

Thanks!
 

Attachments

  • Time_series.png (95.7 KB)
  • Absolute_difference.png (81 KB)
Hi,
I just want to make sure I understand the logistics of these two tests correctly. I understand your procedure as follows (as an example, I'll use the dates 2021-02-01 to 2021-02-08):

Test 1
One single model run:
2021-01-31_12:00:00 - 2021-02-08_00:00:00

Test 2
Individual model runs (see the sketch after this list for how these windows line up):
2021-01-31_12:00:00 - 2021-02-02_00:00:00
2021-02-01_12:00:00 - 2021-02-03_00:00:00
2021-02-02_12:00:00 - 2021-02-04_00:00:00
2021-02-03_12:00:00 - 2021-02-05_00:00:00
2021-02-04_12:00:00 - 2021-02-06_00:00:00
2021-02-05_12:00:00 - 2021-02-07_00:00:00
2021-02-06_12:00:00 - 2021-02-08_00:00:00
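For reference, a tiny sketch (plain Python; the dates are just the example used above) that generates these seven 36-hour windows, each starting 12 hours before the day of interest:

Code:
from datetime import datetime, timedelta

first_day = datetime(2021, 2, 1)    # first day whose output is kept
spin_up   = timedelta(hours=12)     # spin-up discarded at the start of each run

for d in range(7):
    day_start = first_day + timedelta(days=d)
    run_start = day_start - spin_up             # initialize 12 h early
    run_end   = day_start + timedelta(days=1)   # run through the end of that day
    print(run_start.strftime("%Y-%m-%d_%H:%M:%S"),
          "-", run_end.strftime("%Y-%m-%d_%H:%M:%S"))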

And then I assume you're comparing the output at each hour (or at whatever interval you also have output from Test 1)?

If this is the case, the reason you are seeing differences is that the initialization times of the simulations are not the same. When you initialize the model at a particular time, it uses the initial conditions as a starting point (a first guess). From there, the model physics and dynamics compute new solutions. In the scenario above, I would assume that the solution at 2021-02-02_00:00:00 is the same for both simulations. But for the next daily run, you are starting over with brand-new initial conditions at 2021-02-01_12:00:00, whereas in Test 1 the values at 2021-02-01_12:00:00 were created by WRF, not by ERA5.
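One way to see this concretely is to compare the state Test 1 carries at 2021-02-01_12:00:00 with the wrfinput file that real.exe produces for the second daily run of Test 2. A rough sketch (Python with netCDF4; the paths are hypothetical, and it assumes the 7-day run wrote its hourly output into a single wrfout file per domain):

Code:
import numpy as np
from netCDF4 import Dataset, chartostring

# Hypothetical paths -- adjust to the actual directory layout.
WRFOUT_TEST1  = "test1/wrfout_d01_2021-01-31_12:00:00"  # output of the single 7-day run
WRFINPUT_DAY2 = "test2/2021-02-01/wrfinput_d01"         # initial state of the 2nd daily run
TARGET_TIME   = "2021-02-01_12:00:00"

with Dataset(WRFOUT_TEST1) as nc:
    times = [str(t).strip() for t in chartostring(nc.variables["Times"][:])]
    it = times.index(TARGET_TIME)
    theta_wrf = nc.variables["T"][it]   # perturbation potential temperature carried by WRF

with Dataset(WRFINPUT_DAY2) as nc:
    theta_era = nc.variables["T"][0]    # same field, freshly interpolated from ERA5

diff = np.abs(theta_wrf - theta_era)
print("max difference = %.2f K, mean = %.3f K" % (diff.max(), diff.mean()))

Any nonzero difference here is exactly the fresh starting point that each daily run is handed, and it is from there that the two tests diverge.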

If I've misinterpreted, please provide more details to help me understand. Thanks!
 
Thank you very much for the reply! This is very helpful and answers my question!

Yes, this is exactly how I did it (except that for this test I ran the 7-day run for only 7×24 hours, and therefore could only compare 7×24 − 12 (spin-up) = 156 hours against the 1-day run results; that is also why my initialization times are never the same and the output differs even during the first 24 hours).

Thanks for pointing out the initialization time. I would not have thought that it makes such a difference, since I am feeding WRF with reanalysis data every hour. I would have expected the effect of the initialization to fade after a couple of time steps (i.e. some sort of spin-up period), because the boundary conditions would inform the model and “take over”.
 
Hello, WRF team

I found this resolved post because I have a similar question. Which type of run would you suggest to obtain more reliable results for a 7-day simulation: a single 7-day run or seven 1-day runs?

Thanks,

Chang
 