
Slow run with different physics options

janac4

Member
Hello everyone,
I have been running a WRF model for a while now with the following physics options:
Code:
&physics
 mp_physics                          = 3
 ra_lw_physics                       = 4
 ra_sw_physics                       = 4
 radt                                = 1
 sf_sfclay_physics                   = 1
 sf_surface_physics                  = 2
 bl_pbl_physics                      = 1
 bldt                                = 0
 cu_physics                          = 0
 cudt                                = 0
 isfflx                              = 1
 ideal_xland                         = 0
 num_land_cat                        = 20
 /

Now I need to run the same domain with different physics, and I have chosen the CONUS physics suite, as follows:
Code:
&physics
 physics_suite                       = 'CONUS'
 mp_physics                          = -1
 ra_lw_physics                       = -1
 ra_sw_physics                       = -1
 radt                                = 1
 sf_sfclay_physics                   = -1
 sf_surface_physics                  = -1
 bl_pbl_physics                      = -1
 bldt                                = 0
 cu_physics                          = 0
 cudt                                = 0
 isfflx                              = 1
 ideal_xland                         = 0
 num_land_cat                        = 20
 /

However, this second run is noticeably slower than the first one. Is that possible? Is there a way to fix it?
Thank you.
 
When you use more expensive physics options, the run can take longer. Is the run time significantly longer? You could try increasing the number of processors to make the simulation faster, but just make sure you don't use too many. See Choosing an Appropriate Number of Processors.
Under the same conditions (machine, processors, initial conditions, etc.), the second run takes an extra hour to complete the simulation. Is there any option in the namelist that could reduce computational time? Maybe reducing the number of vertical levels?
 
Yes, reducing the number of vertical levels could potentially help, but you don't want to go too low. The default is 45, and you shouldn't go below that. If you want to attach your namelist.input file, I can take a look and see if I notice anything that can be modified.
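For reference, the number of vertical levels is set with e_vert in the &domains record of namelist.input, one value per domain; a minimal sketch (the values shown are illustrative, matching the 45-level default mentioned above):
Code:
&domains
 e_vert                              = 45, 45,
 /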
 
I'm already at 48 vertical levels, so I guess I shouldn't change this. I've attached my namelist in case there is a parameter that could be modified. Thanks a lot.
 

Attachments

  • namelist.input.txt
    5.7 KB
I would increase radt significantly; that will markedly improve performance. You don't gain anything by calling the radiation schemes every minute, especially since RRTMG is a computationally expensive scheme. Feel free to use roughly 15 minutes or even more for any real case scenario, no matter what your dx is. Compare the final results and you will see there is no reason to ever go under 10 minutes, probably even more.
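Applied to the &physics record above, that suggestion would look roughly like this (a sketch; radt is in minutes, and 15 is the value suggested here, not an official default):
Code:
&physics
 radt                                = 15,
 /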

You can also try a larger time step, here:
max_time_step = 7

A maximum of 7 s for dx = 1 km is too conservative in most cases. Feel free to go to 12 and see whether your runs are still stable. If you get crashes, try increasing epssm to around 0.5. If it still crashes, reduce the maximum value until you're stable; I think you can use a larger max time step than your current value and still be stable.

You might get some speed increase with numtiles > 1...
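Assuming adaptive time stepping is enabled, those settings would sit in the namelist roughly like this (a sketch; the numtiles value is illustrative, and epssm belongs to the &dynamics record):
Code:
&domains
 use_adaptive_time_step              = .true.
 max_time_step                       = 12
 numtiles                            = 4
 /

&dynamics
 epssm                               = 0.5
 /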
 
Thanks for your advice, I'm going to try changing the radt value. I was following the User's Guide recommendation, "Recommended 1 minute per km of dx (e.g. 10 for 10 km grid); use the same value for all nests," which is why I was using radt=5 for my 5 km domain and radt=1 for the smaller one.
 
Hello,
I know, but that rule of thumb is a little misleading. It makes some sense at large dx values, but as dx decreases, at some point you reach diminishing returns.
 