Invalid DateTime string in x1.655362.static.nc

This post is from a previous version of the WRF&MPAS-A Support Forum. New replies have been disabled. If you have follow-up questions related to this post, please start a new thread from the forum home page.

Sylvia

New member
Hello,

I am trying to run MPAS on a 30 km resolution mesh. The x1.655362.static.nc file seems fine, since I get no error message during its generation, until I use it to make initial conditions.

I get a segmentation fault from the system, and the init_atmosphere log says "ERROR: Invalid DateTime String (invalid date substring)".

I recompiled the model with 'DEBUG=true' and reran init_atmosphere; the log now says "forrtl: severe (408): fort: (7): Attempt to use pointer CALENDAR when it is not associated with a target".

I checked x1.655362.static.nc with "ncdump -v xtime x1.655362.static.nc" and found that xtime is not a date string, but something like "\000\346\207\001\000\000\000\000\001\346\207\001\...".

In addition, I previously ran the same case with the same namelist* and streams* files but at a different resolution (60 km), and everything went well. I have no idea why this happened.

I'm using Intel Compilers (icc, ifort, icpc, mpiicc, mpiicpc, and mpiifort) from 2018.

Any suggestions are appreciated!

Sylvia
 
The problem was solved by upgrading the Intel Compilers (icc, ifort, icpc, mpiicc, mpiicpc, and mpiifort) from the 2018 release to the 2019 or 2020 release.
 
Sylvia said:
The problem was solved by upgrading the Intel Compilers (icc, ifort, icpc, mpiicc, mpiicpc, and mpiifort) from the 2018 release to the 2019 or 2020 release.

Same problem, and could you tell me how to download Intel Compilers 2019 or 2020?
Thanks!
 
Geess_321 said:
Same problem, and could you tell me how to download Intel Compilers 2019 or 2020?
Thanks!
Out of curiosity, which versions of the PIO, PnetCDF, and netCDF libraries are you using? I'm wondering whether there may be library issues that are leading to a corrupted 'xtime' field, which then causes problems in the MPAS timekeeping module.
 
mgduda said:
Geess_321 said:
Same problem, and could you tell me how to download Intel Compilers 2019 or 2020?
Thanks!
Out of curiosity, which versions of the PIO, PnetCDF, and netCDF libraries are you using? I'm wondering whether there may be library issues that are leading to a corrupted 'xtime' field, which then causes problems in the MPAS timekeeping module.

Thanks for your reply!
I downloaded the libraries from https://www2.mmm.ucar.edu/people/duda/files/mpas/sources and am using PIO2, pnetcdf-1.11.2, netcdf-c-4.6.3, and netcdf-fortran-4.4.5.
This issue can be easily solved by setting io_type="netcdf4" in streams.init_atmosphere when creating the static file.
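For reference, a minimal sketch of that change, assuming the default stream name "output" in streams.init_atmosphere (the filename_template here is just the static file from this thread; adjust it to your own setup):

```xml
<immutable_stream name="output"
                  type="output"
                  io_type="netcdf4"
                  filename_template="x1.655362.static.nc"
                  output_interval="initial_only" />
```

If I remember the MPAS User's Guide correctly, the other supported io_type values are "pnetcdf" (the default), "pnetcdf,cdf5", and "netcdf", so switching to "netcdf4" here routes the write around the pnetcdf library entirely.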
 
@Geess_321 - Thanks very much for pointing out the io_type="netcdf4" solution! I have a suspicion that there may be an issue in the parallel-NetCDF (pnetcdf) library; we've run into problems in writing character-type variables via pnetcdf in the past (though I'm struggling to recall the specific circumstances).

We've found the parallel performance of the netCDF4 library to be rather bad, which is why we've been using the pnetcdf library as our default I/O option.
 
Geess_321 said:
Sylvia said:
The problem was solved by changing the Intel Compilers (icc, ifort, icpc, mpiicc, mpiicpc, and mpiifort) from 2018 to Intel Compilers (icc, ifort, icpc, mpiicc, mpiicpc, and mpiifort) from 2019 or 2020.

Same problem, and could you tell me how to download Intel Compilers 2019 or 2020 ?
Thanks!

Sorry, I'm not sure where to download Intel Parallel Studio 2019 or 2020. The cluster I use already provides these packages.
 
thomasuh said:
@Sylvia @Geess_321

What is your underlying cluster file system? Lustre, IME, GPFS or BeeGFS?

Our cluster file system is none of these. In fact, our cluster doesn't have a specific cluster file system.
 