Confusion while running the WRF-UCM model

Hello everyone,

I am currently encountering issues when using the CGLC-MODIS-LCZ-global dataset and the slucm data for my WRF-UCM experiment. According to the geo_em file viewed in ncview, there are 61 land-use categories. My GEOGRID.TBL settings for URB_PARAM and FRC_URB2D are as follows:

===============================
name=URB_PARAM
priority = 1
optional = yes
dest_type = continuous
fill_missing = 0.
z_dim_name = num_urb_params
interp_option = default:nearest_neighbor
rel_path = default:slucm/urban_params/2010/H_avg/
flag_in_output = FLAG_URB_PARAM
===============================
name=FRC_URB2D
priority = 1
optional = yes
dest_type = continuous
fill_missing = 0.
interp_option = default: average_gcell(2.0)+four_pt
rel_path = default:slucm/urban_params/2010/lambda_p/
flag_in_output = FLAG_FRC_URB2D
===============================

The WPS computation steps subsequently completed successfully.

However, when I proceed to run WRF itself (with sf_urban_physics = 1, 1, 1, 1), I frequently encounter segmentation faults. Why does this happen? In earlier runs I confirmed that the model operates correctly when the urban canopy module is not invoked, so my troubleshooting has stalled.

Attached below for reference: namelist.input and namelist.wps.
 


Your namelist.input looks fine except that the time step is too small. Please change it to 72.

Another issue is that you run with NoahMP but didn't set any options related to this scheme, which means all of NoahMP's default options are used.

Can you switch to Noah, or run NoahMP with explicitly specified options?
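For reference, a minimal sketch of the relevant namelist.input entries implied by the advice above. The specific NoahMP option values shown are illustrative placeholders, not recommendations; consult the WRF Users' Guide for the options appropriate to your case.

```fortran
&domains
 time_step = 72,                    ! suggested above; a common rule of thumb is ~6*dx (in km) seconds
/

&physics
 sf_surface_physics = 4, 4, 4, 4,   ! 4 = NoahMP LSM
 sf_urban_physics   = 1, 1, 1, 1,   ! 1 = SLUCM
/

&noahmp
 dveg    = 4,                       ! dynamic-vegetation option (illustrative value)
 opt_crs = 1,                       ! stomatal-resistance option (illustrative value)
 opt_sfc = 1,                       ! surface-layer option (illustrative value)
/
```

In recent WRF versions (4.4 and later), NoahMP options are set in the dedicated &noahmp namelist record rather than in &physics.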
 
Regarding the time step and the Noah scheme you mentioned, I made the modifications but saw no change; the error persisted. I then analyzed the information in these two posts (forrtl: error (78) and Timestep) and concluded that it might be impossible to run SLUCM successfully under WRF 4.5.1. I therefore switched to sf_urban_physics = 2 or 3, which eventually succeeded.

However, I now have a new doubt: my geographic data input set uses CGLC-MODIS-LCZ, while my URB_PARAM uses the slucm-distributed-drag package from Static Data Downloads. In theory, this should not be compatible with BEM and BEP calculations. So why was I able to run the simulation successfully?
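For clarity, the switch referred to above is the urban-physics option in namelist.input, whose values select the urban scheme:

```fortran
&physics
 ! sf_urban_physics: 0 = no urban scheme, 1 = SLUCM, 2 = BEP, 3 = BEP+BEM
 sf_urban_physics = 2, 2, 2, 2,
/
```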
 
Can you clarify what you mean by stating that "URB_PARAM uses the slucm-distributed-drag package from Static Data Downloads", and why do you think it is not compatible with BEP/BEM?
I used the "slucm-distributed-drag" data file provided on the Static Data Downloads webpage as my URB_PARAM input for the simulation, but according to its own documentation, this dataset does not support BEP+BEM calculations.
 
I am very sorry about this. Recently, I generated my own LCZ map using LCZ_Generator and then realized my mistake. URB_PARAM should be a three-dimensional array of size (num_urb_params, south_north, west_east), but during my experiment I mistakenly assumed that only the H_avg dataset was needed (because I encountered no errors in the subsequent steps and completed the entire WRF run successfully). Now, through the post Best practice for adding high-resolution urban morphology data into WRF (UCM/BEP) via WPS, I have recognized and corrected the URB_PARAM issue. However, I am now facing a problem in the subsequent metgrid step:
Processing domain 3 of 4
ERROR: In read_next_field(), problems with ext_pkg_get_var_info()
application called MPI_Abort(MPI_COMM_WORLD, 23953) - process 0

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= EXIT CODE: 145
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
Furthermore, I have noticed that the metgrid computation takes a very long time, yet the log does not contain much information, so I don't know where the problem lies. I suspect there might be an error in my METGRID.TBL settings, or perhaps an issue with my NetCDF file, but I am not sure how to proceed with troubleshooting. Thanks for your help.
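To illustrate the dimensionality mistake described above, here is a small numpy sketch. The grid sizes are hypothetical, the 132-level num_urb_params reflects the value used by recent WRF versions, and the level index used for H_avg is purely illustrative (the real slot is defined by the URB_PARAM parameter table):

```python
import numpy as np

# Hypothetical domain size (south_north x west_east); adjust to your grid.
sn, we = 100, 120
NUM_URB_PARAMS = 132  # number of URB_PARAM levels in recent WRF versions

# What was originally supplied: a single 2-D field (H_avg only).
h_avg = np.zeros((sn, we))

# What WRF expects: a 3-D stack holding all urban parameters.
urb_param = np.zeros((NUM_URB_PARAMS, sn, we))

# Each parameter occupies its designated level(s) of the stack;
# index 0 here is illustrative, not the actual slot for H_avg.
urb_param[0, :, :] = h_avg

print(h_avg.shape)      # (100, 120): 2-D, not a valid URB_PARAM on its own
print(urb_param.shape)  # (132, 100, 120): matches (num_urb_params, south_north, west_east)
```

The point is simply that a single 2-D parameter field cannot stand in for the full 3-D URB_PARAM array that the model reads.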
 


This looks more like a data issue. I suspect that your input data is not correct. Please recompile WPS in debug mode (i.e., ./clean -a, ./configure -D, then recompile WPS). With debug mode, the log file will tell you exactly when and where your case crashes, which should give you some hints about what is wrong.

I wonder why you need to generate your own LCZ dataset. The data we provide for WPS is already at high resolution, and I see no reason to create a new one.
 
  1. Because the default land-use classification is not accurate, the derived FRC_URB2D and URB_PARAM fields are also inaccurate. Therefore, I produced an LCZ map suited to my own study area.
  2. I know this is a data issue: when I used the previous slucm dataset, I did not configure URB_PARAM correctly. At that time, I simply used H_avg as a substitute for the full num_urb_params dimension of URB_PARAM. So now I would like to know how to generate and use a correct URB_PARAM field.
  3. Thanks for your help.
 