
Unstable run with ERA-I data

This post was from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled, and if you have follow-up questions related to this post, please start a new thread from the forum home page.

littleho_song

New member
Dear WRF support,

I am running WRF at 16-km resolution with ERA-Interim (ERA-I) as input. I have tested Versions 4.0.3 and 4.1; both are unstable with the domain set to Europe and the date 12 Dec 2008.
Here are my physics settings; you are welcome to test with them.

mp_physics = 8,
ra_lw_physics = 4,
ra_sw_physics = 4,
radt = 30,
sf_sfclay_physics = 1,
sf_surface_physics = 2,
bl_pbl_physics = 1,
sf_urban_physics = 0,
bldt = 0,
cu_physics = 1,
cudt = 0,
fractional_seaice = 1,
seaice_threshold = 0.0,
topo_wind = 0,
isfflx = 1,
ifsnow = 0,
icloud = 1,
surface_input_source = 1,
num_soil_layers = 4,
mp_zero_out = 0,
sst_update = 1,
tmn_update = 1,
sst_skin = 1,
prec_acc_dt = 60,
bucket_mm = -1,
bucket_j = -1,
 
Hi,
Can you provide more explanation regarding the model instability? Is the model stopping before completion? If you have rsl.* (or running/error log files) that you can attach, that would be helpful (if you have several rsl.* files, you can package them into one *.tar file and attach that). Please also attach your full namelist.input file.
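For the packaging step, a generic sketch looks like the following (the rsl.out.0000 / rsl.error.0000 names are the usual WRF log filenames; the touch lines below only create placeholders so the commands can be tried anywhere):

```shell
# Placeholders standing in for the logs a real WRF run produces:
touch rsl.out.0000 rsl.out.0001 rsl.error.0000 rsl.error.0001

# Bundle every rsl log into a single archive that can be attached to a post:
tar -cf rsl.tar rsl.out.* rsl.error.*

# List the archive contents to confirm everything was captured:
tar -tf rsl.tar
```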

Thanks,
Kelly
 
kwerner said:
Hi,
Can you provide more explanation regarding the model instability? Is the model stopping before completion? If you have rsl.* (or running/error log files) that you can attach, that would be helpful (if you have several rsl.* files, you can package them into one *.tar file and attach that). Please also attach your full namelist.input file.

Thanks,
Kelly

Sure ... here it is.
View attachment rsl.tar
 
Hi,
Thanks for sending those. I have a few suggestions, which may not be related to the problem, but we can start with these:

1) I would recommend setting debug_level = 0. This is an option that we have removed from the default namelist because it rarely does anything more than print out a lot of junk that makes your rsl* files very large and difficult to read through.

2) You should not set time_step larger than 6xDX (with DX in km). Since you are using a 16-km resolution, a time_step of 120 is too large. Try a smaller value and see if that makes any difference.

3) 24 processors should probably be enough for this run, but if you have access to more, you could try bumping that number up a little - perhaps somewhere between 40 and 100, just to see if that makes a difference.
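The time-step guideline in point 2 is simple arithmetic; as a quick check (using the usual 6xDX rule of thumb, with DX in km):

```shell
# WRF rule of thumb: time_step (seconds) should not exceed 6 * DX (km).
DX_KM=16
MAX_STEP=$((6 * DX_KM))
echo "dx=${DX_KM} km -> max recommended time_step = ${MAX_STEP} s"
# A namelist value of 120 s exceeds this 96 s ceiling, which risks instability.
```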

After making those modifications, try running again. If it fails again, please attach the new namelist.input file, the new rsl* files (including rsl.out.* files), and if possible, attach your met_em.d01* files (only the first 2 time periods).

Thanks,
Kelly
 
Thanks for sending those. And yes, you would likely see a CFL error if the time_step were causing the problem. As I said, I didn't know whether any of those modifications would fix your problem, but since we know certain "rules," it's best to start by making sure everything is set properly.

Can you also attach your met_em.d01* files (the first 2 time periods)? If they are too large to attach, and you don't have another method to link them in, take a look at the home page of this forum for instructions on sending large files.

Thanks,
Kelly
 
I apologize. When I unpacked the tar file, I overlooked the new directory that was created that contained the met* files.
I was able to run a case with Version 4.1 using your namelist.input file and your met_em* files. I tried this with a basic compile and with a compilation with no optimization, using both GNU and Intel compilers. wrf.exe runs without any problem. Did you modify the code in any way, or are you using "out-of-the-box" code? Could it be possible that you are low on disk space in the directory where you are trying to write the wrfout* files?
 
Hmm ... I didn't modify anything. Just dmpar with pgfortran + pgcc (54), and I have enough disk space.

Did you test with the PGI compiler?

What does this line in the metgrid.log file mean?
2019-04-18 13:51:04.486 --- INFORM: Field LANDSEA.mask does not have a valid mask and will not be checked for missing values
 
I just compiled and tested with PGI as well, and it runs without any problems. Unfortunately we are not able to reproduce this, which means it is most likely a problem with your particular system/environment. You'll need to discuss the problem with your systems administrator to see whether they are able to help solve it.
 
Thanks,
Kelly

What does this line in the metgrid.log file mean?
2019-04-18 13:51:04.486 --- INFORM: Field LANDSEA.mask does not have a valid mask and will not be checked for missing values
 
This question is specific to metgrid and is being answered here:
http://forum.mmm.ucar.edu/phpBB3/viewtopic.php?t=5323&f=31#p9863
 