WRF Generating SST Anomalies Over Great Lakes

kaitlinkitch

New member
Hello everyone,

I recently ran a control simulation of Hurricane Sandy using WRF on Derecho. For my simulations, we will be perturbing the SSTs in the North Atlantic, so I am inputting my own SST fields into WRF. To do this, I run WPS, edit the SST in the metgrid files, and then use them to run WRF as normal. The model runs successfully, but I have been checking the SST fields in the wrfout files to verify that WRF is "seeing" the correct SSTs. In doing so, I found some subtle anomalies over the Great Lakes region; otherwise, the SST fields are identical. These differences are generated somewhere between the SST field in the wrflowinp files produced by real.exe and the SST field in the wrfout files. I attached a figure showing the differences; the magnitudes are on the order of 0.01-0.1 K. Has anyone experienced similar issues, or does WRF struggle to resolve the Great Lakes?
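For a perturbation of the kind described above — replacing SST only over water points in the met_em files — a minimal numpy sketch might look like the following. The array and mask names are assumptions for illustration (in WPS output the relevant fields are typically named SST and LANDMASK, read with a NetCDF library such as netCDF4):

```python
import numpy as np

def perturb_sst(sst, landmask, delta):
    """Add a perturbation delta (K) to SST at water points only.

    Assumes the WPS convention that landmask == 0 marks water;
    land points are left untouched.
    """
    return np.where(landmask == 0, sst + delta, sst)

# Toy example: a 2x2 field with one land point
sst = np.array([[300.0, 301.0],
                [302.0, 303.0]])
landmask = np.array([[0, 0],
                     [1, 0]])
perturbed = perturb_sst(sst, landmask, 2.0)
# The land point at [1, 0] keeps its original value;
# the three water points are shifted by +2 K.
```

In the actual workflow, the perturbed array would then be written back into the SST variable of each met_em file for all three domains before running real.exe.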

This figure shows the differences between the SST in the wrfout file at one timestep compared to the SST in the wrflowinp file. This is on my innermost domain of a 3 nest domain simulation. It is run at 3km resolution. I found that when the SSTs across the simulation are held constant, the anomalies are no longer present. They only exist over the Great Lakes when the SSTs are allowed to vary.
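A comparison like the one in the figure — differencing the two SST fields at one output time and summarizing the magnitude of the anomaly — can be sketched with numpy. The arrays stand in for SST fields already read from the wrfout and wrflowinp files (e.g. with netCDF4); all names here are hypothetical:

```python
import numpy as np

def sst_anomaly_stats(wrfout_sst, wrflowinp_sst):
    """Difference two SST fields at one output time.

    Both inputs are 2-D (south_north, west_east) arrays in K.
    Returns the difference field and the maximum absolute difference.
    """
    diff = wrfout_sst - wrflowinp_sst
    return diff, float(np.abs(diff).max())

# Toy fields: identical except one grid cell differing by 0.05 K,
# comparable to the 0.01-0.1 K anomalies described above
a = np.full((3, 3), 280.0)
b = a.copy()
a[1, 1] += 0.05
diff, max_abs = sst_anomaly_stats(a, b)
```

Mapping where `diff` is nonzero at each output hour is one way to confirm whether the anomalies are confined to the Great Lakes.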

Any insight is greatly appreciated!
 

Attachments

  • SST_Differences.png (725.9 KB)
Apologies for the delay in response while our team tended to time-sensitive obligations. Thank you for your patience.

Just to verify - if you were to run this without inputting your own SST values, there is no difference in values in the Great Lakes (or anywhere) between the initial and the next timestep, correct?

Can you attach your namelist.wps and namelist.input files? Please also let me know which version of WRF you're running. Thanks!
 
Yes! I am running WRF v4.5.2, and I attached my namelist files here. I use a resubmit script for running wrf, which only updates the start date and restart. Everything else is untouched for the simulation.
 

Attachments

  • namelist.input (4.2 KB)
  • namelist.wps (833 bytes)
Thanks for sending those, and apologies again for the delay. I've had to take more leave than usual over the past few weeks, but I will make sure to stay on top of this thread, from here out!

Just to verify - if you were to run this without inputting your own SST values, there is no difference in values in the Great Lakes (or anywhere) between the initial and the next timestep, correct?
I'll start looking at your files; in the meantime, can you run this test (if you haven't already) and answer this question? When you mention that it runs okay without varying SST, I'm not sure whether you mean running without your own SST input, or holding the initial SST constant rather than updating the SST throughout the simulation. Thanks!
 
Okay, in addition to the question above, I have a few more questions:

(1) How did you modify the SST? Since this is a nested case, did you modify all met-em files (for all domains)?
(2) Just for verification purposes, the differences you're seeing are between the SST in the wrflowinp file at the initial time, and the SST in the wrfout file after a single time-step, right? Did you compare SST from the initial time in wrflowinp with SST in wrfinput and wrfout at 00 integration time?
(3) Did you compare SST in wrflowinp_d03 with SST in met_em.d03?

Thanks!
 
Thanks for getting back to me! First, yes: the SSTs at the initial timestep (00z) are identical to the SSTs at 01z. I had to rerun the control slightly differently, so I have multiple simulations in which these same anomalies occur. For the first attempt at the control, I used the ERA5 GRIB files on Derecho, ran WPS, and then ran WRF without modifying the SST fields myself. In that simulation, the SSTs were allowed to vary throughout the run, and the anomalies over just the Great Lakes persist throughout the 9-day simulation. For the second attempt at a control, I modified the SSTs by taking the same met_em files generated by WPS in the first simulation and replacing the SST data with my own (on the same grid, of course); yes, I did this for all 3 domains.

To answer your second question: my simulation begins at 00z and outputs hourly for 9 days. We allow the SSTs to vary from 00z to 12z and then hold them constant for the remainder of the simulation. The anomalies are present from 01z to 12z, then they disappear for the rest of the simulation (the SST fields are identical after that). The differences I am seeing are between the wrfout SST and the wrflowinp SST at the same timestep (e.g., wrfout SST (t=1) minus wrflowinp SST (t=1), and so on).

And yes, when I compare wrflowinp_d03 with the met_em.d03 files, they are the same except for differences on the order of 10^-5 across the entire ocean, which we attributed to noise.
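A field comparison like this one — treating differences below the ~10^-5 K level as noise — can be sketched with numpy's tolerance-based comparison. The arrays are placeholders for SST fields read from wrflowinp_d03 and met_em.d03, and the tolerance value is an assumption chosen to sit above the noise level described above:

```python
import numpy as np

def fields_match(a, b, atol=1e-4):
    """Return True if two SST fields agree to within atol (K).

    atol is set above the ~1e-5 K noise level so that only
    physically meaningful discrepancies are flagged.
    """
    return bool(np.allclose(a, b, atol=atol, rtol=0.0))

# Toy example: 1e-5-level noise passes, a 0.05 K anomaly does not
base = np.full((4, 4), 285.0)
noisy = base + 1e-5
anomalous = base.copy()
anomalous[2, 2] += 0.05
```

Using an absolute tolerance (rtol=0.0) keeps the test independent of the ~300 K magnitude of the SST values themselves.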


Sorry for the lengthy reply, but if any of this is not clear, I would be happy to clarify!
 
Thank you so much for that information. You mention Derecho. Are you also running this on Derecho, and if so, would you mind sharing your running directory with me so I can take a look at the files? Thanks!
 
Thanks for sharing that. I was speaking to a colleague, who informed me that SST is not updated at the initial time, so to compare the two, you should compare, for example, the SST from wrflowinp* at 08hr with SST from wrfout* at 09hr. If you haven't already done that, can you check to see if your files make sense if you try that type of comparison?
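The suggested check — wrflowinp at hour t against wrfout at hour t+1 — can be sketched in numpy as a shifted comparison. The 3-D arrays stand in for SST time series read from the two files; the one-interval lag follows the explanation above, and the names and tolerance are assumptions:

```python
import numpy as np

def lagged_sst_match(wrflowinp_sst, wrfout_sst, lag=1, atol=1e-4):
    """Check wrflowinp SST at time t against wrfout SST at time t+lag.

    Both inputs are (time, south_north, west_east) arrays in K;
    SST in wrfout trails the wrflowinp update by `lag` output steps.
    """
    return bool(np.allclose(wrflowinp_sst[:-lag], wrfout_sst[lag:],
                            atol=atol, rtol=0.0))

# Toy series: wrfout reproduces wrflowinp shifted by one step
t = np.arange(5, dtype=float).reshape(5, 1, 1)
wrflowinp = 280.0 + t + np.zeros((5, 2, 2))
wrfout = np.empty_like(wrflowinp)
wrfout[0] = wrflowinp[0]      # SST not updated at the initial time
wrfout[1:] = wrflowinp[:-1]   # one-interval lag thereafter
```

With the lag applied, a comparison like this should show no differences, matching the resolution reported in the reply below.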
 
Yes! I did not realize there was a 1 hour lag between the wrflowinp files and the wrfout files. I checked using this method, and there are no differences! Thank you so much for the clarification, this was extremely helpful!
 
Hi Kaitlin,

I repeated your case and found the discrepancy between SST from wrflowinp and wrfout (Kelly has explained the details to you). Now I have a few questions regarding your case, as we would like to gain some experience with big cases like yours:

(1) I suppose you compiled WPS in parallel mode, then ran geogrid.exe and metgrid.exe using multiple processors. Would you please share your job scripts?

(2) When you ran real.exe and wrf.exe, how many nodes (and processors) did you use? Did you have any trouble running real.exe and wrf.exe? Please let me know where your job script is located on Derecho.

A few users have reported issues when running big cases with WRF, and it is hard for us to reproduce these cases ourselves. Therefore, we would like to gather some information from our users. Thank you in advance.

By the way, WRF development is being frozen, and we are developing a new model, MPAS, a cutting-edge numerical model that can do both global and regional simulations. This is just for your information. If you would like more information about MPAS, please see

 