
wrf.exe error on ERA-5 data

Arseny

New member
Hi everybody! I was trying to run WRF on ERA-5 data with a 1-hour step. WPS ran successfully, but I still can't run wrf.exe at all; it may be connected to launching real.exe, where I see some strange logs. I have attached my namelist.input, namelist.wps, and the error. Can you help me with this problem? Thank you in advance.
The error during launching wrf.exe is the following:

DYNAMICS OPTION: Eulerian Mass Coordinate
alloc_space_field: domain 1 , 116495852 bytes allocated
RESTART run: opening wrfrst_d01_2022-01-01_00_00_00 for reading


The logs from real.exe are the following:
HDF5-DIAG: Error detected in HDF5 (1.10.5) thread 0:
#000: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5D.c line 906 in H5Dset_extent(): unable to set extend dataset
major: Dataset
minor: Unable to initialize object
#001: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5Dint.c line 2896 in H5D__set_extent(): unable to mark dataspace as dirty
major: Dataset
minor: Can't set value
#002: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5Dint.c line 3183 in H5D__mark(): unable to pin dataset object header
major: Dataset
minor: Unable to pin cache entry
#003: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5Oint.c line 1216 in H5O_pin(): unable to protect object header
major: Object header
minor: Unable to protect metadata
#004: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5Oint.c line 1097 in H5O_protect(): unable to load object header chunk
major: Object header
minor: Unable to protect metadata
#005: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5AC.c line 1352 in H5AC_protect(): H5C_protect() failed
major: Object cache
minor: Unable to protect metadata
#006: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5C.c line 2345 in H5C_protect(): can't load entry
major: Object cache
minor: Unable to load metadata into cache
#007: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5C.c line 6685 in H5C_load_entry(): incorrect metadata checksum after all read attempts
major: Object cache
minor: Read failed
HDF5-DIAG: Error detected in HDF5 (1.10.5) thread 0:
#000: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5D.c line 906 in H5Dset_extent(): unable to set extend dataset
major: Dataset
minor: Unable to initialize object
#001: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5Dint.c line 2896 in H5D__set_extent(): unable to mark dataspace as dirty
major: Dataset
minor: Can't set value
#002: /tmp/CMake-hdf5-1.10.5/hdf5-1.10.5/src/H5Dint.c line 3183 in H5D__mark(): unable to pin dataset object header
major: Dataset
minor: Unable to pin cache entry
 

Attachments

  • namelist.wps (565 bytes)
  • namelist.input (1.8 KB)
I used the WRF 4.1.4-mpi version and WPS 4.0. I tried different versions, but it did not help.
I also used Vtable.ECMWF and Vtable.ERA-interim.ml, and it did not help.
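For reference, the WPS sequence I ran is the equivalent of the standard command-line steps, roughly as below (the GRIB path is just a placeholder; my actual file names differ):
Code:
# run from the WPS directory after editing namelist.wps
ln -sf ungrib/Variable_Tables/Vtable.ECMWF Vtable   # Vtable used for the ERA-5 GRIB files
./link_grib.csh /path/to/era5_grib/*                # link the downloaded GRIB files
./geogrid.exe
./ungrib.exe
./metgrid.exe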
 
Hi,
The title of this post refers to an error while running wrf.exe, but you mention that it may actually be real.exe. For whichever one you're getting the error, can you please attach the full output file (e.g., rsl.error.0000)? Thanks!
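(For reference, those rsl.* files are written in the directory where real.exe/wrf.exe was run; a quick listing such as the one below will show them.)
Code:
# list the per-processor log files in the WRF running directory
ls -l rsl.out.* rsl.error.*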
 
Thank you! I could not attach the files due to their size, but I can give a link to all the output:
wrf_output - Google Drive
When I set the time range to 1 day, the wrf.exe error was the following:
MPASPECT: UNABLE TO GENERATE PROCESSOR MESH. STOPPING.
PROCMIN_M 1
PROCMIN_N 1
P 0
MINM 1
MINN 0
-------------- FATAL CALLED ---------------
module_dm: mpaspect
-------------------------------------------
 
Hi,
The real.exe error log provides the error:
Code:
error opening met_em.d01.2022-02-20_13_00_00.nc for input; bad date in namelist or file not in directory
Can you check that you have the file "met_em.d01.2022-02-20_13_00_00.nc" in your WRF running directory? If you're still questioning the issue, from the WRF running directory, please issue
Code:
ls -ls met_em* >& met.txt
and attach that met.txt file. Thanks!
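For context, the met_em file names that real.exe looks for come from the dates and interval_seconds in the &time_control section of namelist.input. A rough illustration (these values are only an example built around the file name in your error, not taken from your namelist):
Code:
! Example only: with these settings real.exe expects
! met_em.d01.2022-02-20_13_00_00.nc, met_em.d01.2022-02-20_14_00_00.nc, ...
! (one file per hour) in the running directory.
&time_control
 start_year  = 2022, start_month = 02, start_day = 20, start_hour = 13,
 end_year    = 2022, end_month   = 02, end_day   = 21, end_hour   = 13,
 interval_seconds = 3600,
/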
 
Thank you!
I provide the file.
I also ran the program for one day and got the following logs from wrf.exe:
starting wrf task 0 of 3
starting wrf task 1 of 3
starting wrf task 2 of 3
../frame/module_io_quilt_old.F 2933 F
MPASPECT: UNABLE TO GENERATE PROCESSOR MESH. STOPPING.
PROCMIN_M 1
PROCMIN_N 1
P 0
MINM 1
MINN 0
-------------- FATAL CALLED ---------------
module_dm: mpaspect
-------------------------------------------
job aborted:
[ranks] message
[0] application aborted
aborting MPI_COMM_WORLD (comm=0x44000000), error 1, comm rank 0
[1-2] terminated
---- error analysis -----
[0] on DESKTOP-SJ6FUKP
C:/Users/nikwa/Documents/gis4wrf/dist/WRF-4.2.2-mpi\main\wrf.exe aborted the job. abort code 1
---- error analysis -----
Exit code: 1
Runtime: 0
After I ran it for 1 day I got the logs above; I provide an archive of the whole wrf_run directory.
 
Hi,
Instead of sending the met_em* file itself, can you issue the command I mentioned in my previous post and attach the "met.txt" file? As for the new error, can you provide the file as a *.TAR package instead of *.rar? Unfortunately we are unable to open .rar packages. Thanks!
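In case it's helpful, a plain tar command like the one below should produce a .tar package (replace wrf_run/ with whatever directory you want to send):
Code:
# create an uncompressed .tar archive of the run directory
tar -cvf wrf_run.tar wrf_run/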
 
When I try to run a single day (assuming that the problem is in the data), I also get an error:

starting wrf task 2 of 3
starting wrf task 0 of 3
starting wrf task 1 of 3
../frame/module_io_quilt_old.F 2933 F
MPASPECT: UNABLE TO GENERATE PROCESSOR MESH. STOPPING.
PROCMIN_M 1
PROCMIN_N 1
P 0
MINM 1
MINN 0
-------------- FATAL CALLED ---------------
module_dm: mpaspect
 
I attached the requested files, and I also tried 1 day and it did not work as well.
 
Thank you for providing those files, and I apologize for the delay. I've used all the files you provided in the run_wrf/ directory to try to repeat your issue, but I'm not able to do so. Everything runs to completion for me. I did have to set restart = .false. since there were no wrfrst* files available to use. I ran real.exe and wrf.exe, using 4 processors and it worked out okay.

I'm not very familiar with the error message you're seeing, but I believe it has something to do with your MPI application. It may be worth requesting the help of a systems administrator (IT) at your institution to see if they have any suggestions. Let us know if you figure anything out, as it may be helpful to someone else in the future.
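For reference, this is roughly the setup I used here (the launcher syntax will depend on your MPI installation, so treat it as a sketch):
Code:
# restart = .false. was set in the &time_control section of namelist.input
mpiexec -np 4 ./real.exe
mpiexec -np 4 ./wrf.exe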
 
Hi! Thank you! I'd like to mention that I use GIS4WRF with precompiled versions of WPS and WRF.
 
Ah, that could potentially have something to do with it. We don't manage or support those versions of WPS or WRF. You may need to contact someone who works with GIS4WRF to see if they're able to help.
 