Dear WRF-help and WRF users,
I have a single-domain model run at a grid spacing of 25 km, and another single-domain model run at a grid spacing of 12 km. The two simulations use exactly the same model setup, boundary conditions, physics, and the same center_lat and center_lon for the domain. The only difference is the spatial resolution. However, the output precipitation from the two simulations at each time step differs not only in spatial resolution but also in geospatial distribution (e.g., one run produces rain over Colorado while the other has no rain over that same region).
I wonder how we should understand this. A couple of things I can think of to explain it: (1) although the domain centers are the same, the domain coverage could be slightly different, which could cause differences in the output (nudging is not used here; see the quick check sketched below); (2) because the grid spacings are different, the time steps are also different, which may also contribute to the difference; (3) what else do you think could explain the difference?
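For point (1), here is a minimal sketch of the kind of sanity check I have in mind: compare the lat/lon extents of the two domains directly from the wrfout files. The file names are hypothetical placeholders; XLAT and XLONG are the standard WRF coordinate variables.

import xarray as xr

# Hypothetical file names -- substitute your own wrfout paths
for path in ["wrfout_d01_25km.nc", "wrfout_d01_12km.nc"]:
    ds = xr.open_dataset(path)
    lat = ds["XLAT"].isel(Time=0)
    lon = ds["XLONG"].isel(Time=0)
    # Print the geographic extent of each domain for a side-by-side comparison
    print(path,
          "lat: %.3f to %.3f" % (float(lat.min()), float(lat.max())),
          "lon: %.3f to %.3f" % (float(lon.min()), float(lon.max())))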
I am asking because I am trying to frame this as a low-/high-resolution machine learning problem. In typical images, e.g., of cats or dogs, the low- and high-resolution versions differ only in sharpness: the eyes are in the same place, and so are the noses. But for the 25 km and 12 km precipitation, the features can be in different spatial locations (as if the eyes were not in the same place). Hopefully I am not confusing you.
I am attaching four random time steps from the 12 km and 25 km output; they are the same time steps from the two runs. Note that for these plots I regridded the WRF output to a regular grid and then plotted, but the same thing is seen when plotting on the original output grid.
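For reference, this is a rough sketch of the regridding-and-plotting step I described, not the exact script I used. It assumes the usual wrfout variable names (XLAT, XLONG, RAINNC, RAINC) and a hypothetical file name; it interpolates the curvilinear WRF grid to a regular lat/lon grid with scipy before plotting.

import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

ds = xr.open_dataset("wrfout_d01_example.nc")   # hypothetical file name
t = 0                                           # pick one output time step

# Accumulated precipitation = grid-scale (RAINNC) + convective (RAINC)
precip = (ds["RAINNC"] + ds["RAINC"]).isel(Time=t).values
lat = ds["XLAT"].isel(Time=t).values
lon = ds["XLONG"].isel(Time=t).values

# Build a regular lat/lon target grid covering the domain
lat_reg = np.linspace(lat.min(), lat.max(), 200)
lon_reg = np.linspace(lon.min(), lon.max(), 200)
lon2d, lat2d = np.meshgrid(lon_reg, lat_reg)

# Interpolate from the curvilinear WRF grid to the regular grid
precip_reg = griddata(
    (lon.ravel(), lat.ravel()), precip.ravel(),
    (lon2d, lat2d), method="linear",
)

plt.pcolormesh(lon_reg, lat_reg, precip_reg, shading="auto")
plt.colorbar(label="Accumulated precipitation (mm)")
plt.title("Regridded WRF precipitation, time step %d" % t)
plt.show()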
Thanks!