
No output from real.exe when using CORINE land cover

Dear all,
When I run WRF (WRF 4.2.2 and WPS 4.2) using the standard land cover, the execution goes well.

When I change the land cover to CORINE 250 m as suggested at the link:

geogrid.exe, ungrib.exe, and metgrid.exe finish successfully, but real.exe does not produce any output (such as wrfinput).

I attach some files that may be useful to better understand my difficulty:
1) namelist.wps
2) namelist.input
3) GEOGRID.TBL
4) geogrid outputs
5) error file from wrf.exe

Thank you for your help
 

Attachments

  • Nuova cartella compressa.zip
    3.9 MB
I would add to what has already been said:

For the working run (WITHOUT CORINE land cover), the metgrid outputs have:

....
z-dimension0012 = 12 ;
z-dimension0016 = 16 ;
z-dimension0024 = 24 ;
....
:NUM_LAND_CAT = 24 ;

For the non-working run (WITH CORINE land cover), the metgrid outputs have:

.........
z-dimension0012 = 12 ;
z-dimension0016 = 16 ;
z-dimension0028 = 28 ;
.........
:NUM_LAND_CAT = 28 ;

I would like to ask whether I have to change some parameters in the namelist.input file to indicate 28 instead of 24.
 
Hi,
If no output is created when running real.exe, then an error occurred during real, and you should look at the rsl files for real.exe and fix those issues before moving on to run wrf.

Since you changed the number of land categories, you should add num_land_cat = 28 in the &physics section of namelist.input. There is typically a line with the default value (=21) in the namelist, so it must have been removed from your namelist.
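For reference, the relevant &physics entry would look something like the following minimal sketch (the rest of the section is whatever you already have in your namelist):

Code:
&physics
 num_land_cat = 28,    ! must match NUM_LAND_CAT in the met_em* files (28 for this CORINE dataset)
 ...
/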
 
Hi Kwerner,

thanks for your reply.
1)
I ran real.exe with both commands, "real.exe > real.log" and "real.exe >& real.log", but after the run the real.log file is empty; it contains no text.

2)
I added num_land_cat = 28 to the &physics section of namelist.input, but nothing changed.

3)
I have also corrected my GEOGRID.TBL file.
I replaced some rows in the text:
FROM:
name=LANDUSEF
priority=2
dest_type=categorical
z_dim_name=land_cat
landmask_water = corine:16,28 # Calculate a landmask from this field
landmask_water = default:16,28 # Calculate a landmask from this field
landmask_water = corine_usgs_250m:16,28 # Calculate a landmask from this field
interp_option = corine_250m:nearest_neighbor
interp_option = corine_usgs_250m:nearest_neighbor
interp_option = default:nearest_neighbor
rel_path = corine_250m:corine_2012v2020_250m/
rel_path = corine_usgs_250m:corine2usgs_2012v2020_250m/
rel_path = corine_500m:corine_2012v2020_500m/
rel_path = corine_usgs_500m:corine2usgs_2012v2020_500m/
rel_path = default:landuse_30s_with_lakes/

TO:
name=LANDUSEF
priority=2
dest_type=categorical
z_dim_name=land_cat
landmask_water = default:16 # Calculate a landmask from this field
interp_option = default:nearest_neighbor
rel_path = default:corine2usgs_2012v2020_250m/

In any case, it still doesn't work.

I attach the two error and output file 0000.

Thanks for your support.
 

Attachments

  • rsl.error.0000
    946 bytes
  • rsl.out.0000
    849 bytes
In addition:

1) as indicated in the instructions for use: "link the supplied *.TBL files to their default ones"
I have replaced the standard tables (in the wrf/run directory) with:
a) LANDUSE_corine.TBL
b) MPTABLE_corine.TBL
c) VEGPARM_corine.TBL

2) as indicated in the instructions for use: "CURRENTLY ONLY WORKS WITH URBAN_PHYS=0, AND NOAH-MP (urban inclusion is on its way)"
a) I have used sf_urban_physics = 0


But I'm not sure what is necessary to do in order to use Noah-MP.
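If I understand the WRF documentation correctly, Noah-MP is selected through sf_surface_physics in the &physics section of namelist.input; a minimal sketch, assuming that value 4 still corresponds to Noah-MP in this version:

Code:
&physics
 sf_surface_physics = 4, 4,    ! 4 = Noah-MP land surface model (one value per domain)
 sf_urban_physics   = 0, 0,    ! urban physics disabled, as required by the CORINE instructions
/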

I sincerely hope someone can help me.

Thanks
 
If your log files are empty after running real, and you're not getting the wrfinput* and wrfbdy* files, then real.exe did not run properly and you will not be able to run wrf.exe. I see that you are running wrf.exe with parallel processing - your rsl.* files indicate that you're using 40 processors. Since you built the model with the distributed-memory option for parallel processing, you need to run real.exe with the same type of command. For example, if for wrf.exe you used something like

mpiexec -np 40 ./wrf.exe

you should use something similar for real, for example,

mpiexec -np 40 ./real.exe

(or you can probably use fewer processors to run real)
 
Dear Kwerner,
thanks for your reply.

As you suggested, I ran real.exe in parallel mode, but nothing changed. The execution was unsuccessful.
The real log file reports:


starting wrf task 19 of 40
.........................................................
starting wrf task 29 of 40

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= EXIT CODE: 1
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================

This result does not surprise me, since I have been running the WRF model for more than 5 years and have always run wrf.exe in parallel mode and real.exe in sequential mode.

I really don't know how to proceed to fix this problem.

Andrea
 
Andrea,
When you ran real.exe in parallel mode, you should have gotten some rsl.* files. Can you package those files together in a single *.tar file and attach that so I can take a look? Thanks!
 
At the top of the rsl.* files you sent, it shows

Code:
taskid: 0 hostname: hpc-gpu-8-2-18.recas.ba.infn.it
 module_io_quilt_old.F        2931 F
Quilting with   1 groups of   0 I/O tasks.
 Ntasks in X            5 , ntasks in Y            8
  Domain # 1: dx = 16000.000 m
  Domain # 2: dx =  4000.000 m
WRF V4.2.2 MODEL

That lets me know it pertains to the wrf.exe simulation. But I need to see all of the rsl* files for the real.exe simulation. Can you please send those rsl files? Thanks!
 
As recommended, I ran real.exe in parallel.
In the working directory, the only rsl files present are those sent as attachments.

I'm sure I ran real.exe, because in the real.log file I find the following:

starting wrf task 1 of 40
starting wrf task 18 of 40
............................
starting wrf task 35 of 40
starting wrf task 39 of 40

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= EXIT CODE: 1
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================

What else can I do?
 
1) Can you remove all rsl* files from your wrf running directory?
Code:
  rm rsl*

2) Then re-run real.exe using parallel processing (with the mpirun command). You shouldn't need to send anything to an additional log file such as real.log, because the model SHOULD put all output in the rsl files. The command should be something like:
Code:
mpirun -np 8 ./real.exe

3) Please send me a screenshot of the command you're issuing to run real.exe, or if you're using a batch script, please send that.
After that, please see if you have any new rsl* files in your running directory. If so, package those and attach them to your response here.

4) Please also issue the following in the wrf running directory, after running real.exe.
Code:
ls -ls >& ls.txt

and attach that file as well.
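For reference, steps 1, 2, and 4 could be combined in a short shell sketch like the one below; the mpirun command, processor count, and shell are assumptions, so adapt them to your build and scheduler:

Code:
#!/bin/bash
# 1) remove old rsl files so only output from this real.exe run remains
rm -f rsl*

# 2) run real.exe with distributed-memory parallelism (adjust -np as needed)
mpirun -np 8 ./real.exe

# 4) capture a directory listing after the run
ls -ls > ls.txt 2>&1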
 
Dear Kwerner,

I am very grateful to you for your work and your interest.

I apologize for the delay, but we had problems with the farm.
I did what was requested (during the execution of wrf.exe, the rsl files produced by real are overwritten).

I am sending the batch script, in which you can see all the namelist settings and the procedure used for the execution.
 

Attachments

  • CORINE_problem.zip
    85.8 KB
Thank you for sending that. Your rsl.error.0000 file for real.exe shows the following error:

Code:
----- ERROR: topo_wind          = 1 AND flag_var_sso    = 0
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:     333
Either modify the namelist settings, or rebuild the geogrid/metgrid data
-------------------------------------------

This indicates that you may be missing the VAR_SSO field in your met_em* files. That variable should originate in your geo_em* files as part of the geogrid.exe process. Can you check your geo_em* files to see whether that variable is present? If not, you will need to obtain the varsso static field and reprocess geogrid and metgrid to include that data as well. When running geogrid.exe, make sure that, in addition to whatever data you're trying to use, you also use the default data. So, for example, if you have your CORINE data in the WPS_GEOG/corine directory and you want to use it, you should set

Code:
geog_data_res = 'corine+default', 'corine+default'
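Alternatively, as the error message says, the namelist settings can be modified instead of the static data being rebuilt; a minimal sketch, assuming the subgrid topographic wind correction is not needed for this run (VAR_SSO is only required when topo_wind is enabled):

Code:
&physics
 topo_wind = 0, 0,    ! disable the topographic wind correction so flag_var_sso is no longer checked
/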
 
Thank you so much for your reply and your suggestions.

The variable VAR_SSO is present in both the geogrid and metgrid files.

GEOGRID:

COMMAND: ncdump -h geo_em.d02.nc | grep VAR_SSO

float VAR_SSO(Time, south_north, west_east) ;
VAR_SSO:FieldType = 104 ;
VAR_SSO:MemoryOrder = "XY " ;
VAR_SSO:units = "meters2 MSL" ;
VAR_SSO:description = "Variance of Subgrid Scale Orography" ;
VAR_SSO:stagger = "M" ;
VAR_SSO:sr_x = 1 ;
VAR_SSO:sr_y = 1 ;


METGRID:
COMMAND: ncdump -h met_em.d02.2023-06-05_00\:00\:00.nc | grep VAR_SSO


float VAR_SSO(Time, south_north, west_east) ;
VAR_SSO:FieldType = 104 ;
VAR_SSO:MemoryOrder = "XY " ;
VAR_SSO:units = "meters2 MSL" ;
VAR_SSO:description = "Variance of Subgrid Scale Orography" ;
VAR_SSO:stagger = "M" ;
VAR_SSO:sr_x = 1 ;
VAR_SSO:sr_y = 1 ;

In any case, I'll try modifying the "geog_data_res" parameters and update you on the results.
 
The problem is resolved.
I am running WRF 4.2, but I was using the GEOGRID.TBL for WRF 3.9.
The newer GEOGRID.TBL includes an additional row:
flag_in_output=FLAG_LAKE_DEPTH

So I added this row and everything works well.
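For anyone running into the same mismatch, comparing the modified table against the stock table shipped with WPS 4.x shows which rows are missing; the stock path below is an assumption, so adjust it to your installation:

Code:
diff GEOGRID.TBL geogrid/GEOGRID.TBL.ARW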

Thank you for your valuable support.
Andrea
 