
Why Does the Same WRF Run Produce Different Results?

Shak
Dear WRF Community,


I am working on a moist convective case from July 5-6, 2018, which produced heavy precipitation near the Chicago area. I am using WRFV4.4 to simulate this case. I was able to simulate it successfully, and the results compared well with observations (Test1). However, when I reran the same simulation two months later (Test2), I found completely different and strange results, especially in the reflectivity. I expected to see convection near the Chicago area at 1900 UTC on 5 July, but it is completely missing in the second simulation. Please see the attached image. Note that Test2 was set up to be identical to Test1.

I have done several tests to track down the problem, including re-compiling WRFV4.4 and trying different WRF versions, initialization times, input datasets, and physics schemes. Unfortunately, all of those tests produced results without convection near the Chicago area.

I am now at a loss for ideas. Could anyone please help me resolve this issue?

My working directories on Cheyenne are /glade/scratch/skarim/CROCUS_Dr_Kaplan/WRFV4.4/run/Test1 and Test2.
I have also attached the namelist.input files that I used for Test1 and Test2.

I would be really grateful if someone could help me.
Thanks in advance.

With kind regards,
Shak
 

Attachments

  • Reflectivity_Test1 and Test2.jpg
  • namelist.input
Hi Shak,
I looked at your namelist.input files for each of the cases, and I see that for Test1 you ran with 3 domains and had feedback turned on. Although you don't have the domain 01 and domain 02 model output in the Test1 directory, I assume this is the namelist that was actually used for that simulation. If so, then because feedback was turned on, the solutions computed on the nested domains were fed back onto the parent domains each time the model integrated forward, and this likely produced more accurate results on d01. If you ran Test2 with only a single domain, there was no nest feedback, and that alone could explain the difference.
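For anyone who hits the same problem, these are the relevant entries in the &domains record of namelist.input (a minimal sketch; every other entry in the record is omitted, and the values shown are just the ones described in this thread):

&domains
 max_dom  = 3,    ! number of domains: Test1 ran three nested domains
 feedback = 1,    ! two-way nesting: nest solutions feed back onto the parent grid
/

With feedback = 0 (one-way nesting), the parent domain evolves as if the nests were not there, so a parent-only run and a nested run should then agree on d01.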
 
Thank you, Ms. Werner, for your response.

I have rerun Test2 with 3 domains instead of a single domain, with feedback turned on, and it worked. The results are consistent now.
I appreciate your help.
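For future reference, a quick way to confirm that two runs really share the same configuration before rerunning is to diff the namelists (using the directory layout above):

diff Test1/namelist.input Test2/namelist.input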
 
Shak,
That is great news! I remember having a very similar issue when I was in grad school and it was so frustrating to not be able to reproduce the results!
 