
MPAS-A stops and raises an error with the Noah-MP land surface model option on the global variable-resolution 15km-3km mesh

jkukulies

I have tried running the latest version of MPAS (v8.2.2) with the new option to choose Noah-MP instead of the Noah land surface model: config_lsm_scheme = 'sf_noahmp'
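
For reference, this option goes in the &physics group of namelist.atmosphere (a minimal fragment; all other physics settings are omitted here):

    &physics
        config_lsm_scheme = 'sf_noahmp'
    /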

The model stops and raises the following error: CRITICAL ERROR: NaN detected in 'w' field.

Note that I have run the exact same setup but with the default config_lsm_scheme = 'sf_noah' without any problems.

I have tried this with both the convection_permitting and mesoscale_reference suites. With both physics suites the model stops, but with a difference: it stops after the first model time step (18 seconds) for convection_permitting, and after 30 minutes and 18 seconds for mesoscale_reference. The error about NaN values in the vertical velocity field only appears in the log.atmosphere.????.err files when choosing the mesoscale_reference suite. For the convection_permitting suite there are also a couple of log*err files, but they do not include any error message.

I have also tried running MPAS v8.2.3, which includes the hotfix (GitHub - MPAS-Dev/MPAS-Model at hotfix-v8.2.3), but with the same result.

Attached are my namelist and the log files. My working directories on Derecho are:

/glade/work/kukulies/MPAS-Model and

/glade/work/kukulies/MPAS_v8.2.3/MPAS-Model

Thanks!
Julia
 

Attachments

  • log.atmosphere.0667.err.txt (344 bytes)
  • log.atmosphere.0000.out.txt (89.2 KB)
  • namelist.atmosphere.txt (2.2 KB)
Hi Julia,

I took a look at your case. What I found is that some variables required for running Noah-MP are missing from your input. This error can be traced back to your static data, i.e., /glade/derecho/scratch/kukulies/mpas/variable_res_15k3k/CONUS_static.nc.

Would you please rerun mpas_init to create a new CONUS_static.nc, then try to run this case again? Please let us know if you still have problems.
 
Hi Ming,

Thank you so much for looking into this!

That is a very helpful hint already. Sure, I will rerun mpas_init and create a new CONUS_static.nc. Could you let me know which variables are missing? Then I could also check whether the problem actually occurred when I ran mpas_init, or whether the issue is related to my intermediate files, which are located in /glade/derecho/scratch/kukulies/era5.

Thanks again,
Julia
 
Hi Julia,
I apologize for the mistake. Please discard my previous message; I mistakenly took another user's file for yours. I just double-checked and found that your static data is correct.
I will continue to look at your case and get back to you once I find something.
 
Hi Julia,
Would you please let me know which ERA5 data you used to produce the initial conditions for your case? I would like to repeat this case from the beginning and hopefully figure out what is wrong. Thanks.
 
I used the ERA5 files from the RDA and the code located at /glade/work/kukulies/wpsv4.6.0
I used get_era5_yyyy.csh to link the ERA5 data from /glade/campaign/collections/rda/data/
 
I also want to mention again that the Noah-MP option worked fine on the global uniform 15km mesh, using the files and namelist located in

/glade/work/kukulies/MPAS-Model/mpas_c404_comparison/global_uniform_15k

I only get the error for the 15km-3km mesh.

Thank you, Ming!!
 
I had the same issue with the MPAS variable-resolution 15-3km mesh running with sf_noahmp. The model would stop, without an error message, after the first few timesteps. I tried different timesteps (9 s, 15 s, 18 s, 24 s), and the model stopped at different integration times. This issue doesn't occur with sf_noah.

I further tested with the 60-3km mesh, and this time the model ran successfully for a one-day integration with both sf_noahmp and sf_noah (at least it didn't stop right after a few timesteps). So it seems this issue occurs only on the 15-3km mesh and not on the 60-3km mesh; I would be curious to find out the cause.

Thank you, Ming and Julia, for your input and discussion. I am also glad to do some tests from my end.
 
Hi Julia and Zhang Zhe,

I think I know what is wrong in your cases. This is related to the land-sea mask issue in the ERA5 data: the ERA5 land-sea mask is a fraction between 0 and 1 rather than a binary land/water flag, which can lead to inconsistent land points when the data are interpolated for initialization. Please see: Section 2.1.3.1 Land-Sea mask - Forecast User Guide - ECMWF Confluence Wiki

I used a Python package to process the ECMWF data and avoid WPS ungrib. The new initial data produced by this package work fine, and I can successfully run Julia's case for 6 hours. Due to limits on computational resources, I didn't run the full 24 hours; however, I believe it should work.
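
For illustration only: the essence of the fix is to turn ERA5's fractional land-sea mask into the binary land/water field carried by the intermediate files. A minimal sketch in Python; the 0.5 threshold and the field handling are assumptions on my part, and the package may treat the mask differently:

    import numpy as np

    def binarize_landmask(lsm_fraction, threshold=0.5):
        # ERA5 'lsm' is a land *fraction* in [0, 1]; the LANDSEA field in
        # the intermediate files is expected to be binary (1 = land,
        # 0 = water). Cells at or above the threshold become land.
        return np.where(lsm_fraction >= threshold, 1.0, 0.0)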

Please copy the files saved at /glade/derecho/scratch/chenming/MPAS-Forum/Data4Julia, then follow the steps below to create initial data:

(1) module load conda
(2) conda activate npl
(3) module load gcc
(4) f2py -c -m WPSUtils intermediate.F90
(5) python convert_sfc.py
(6) python convert_pl.py
(7) cat ERA5_SFC:2020-01-01_00 ERA5_PL:2020-01-01_00 > ERA5:2020-01-01_00
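
After step (4), you can quickly check that the compiled module is importable (assuming intermediate.F90 defines a Fortran module named intermediate, as the calls quoted later in this thread suggest):

    # sanity check for the f2py build; run inside the npl environment
    import WPSUtils
    print(WPSUtils.intermediate.__doc__)  # should list the wrapped routines, e.g. write_met_init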

Let me know if you still have issues running MPAS.
 
Hi Ming,

Thanks for the update! I will regenerate the intermediate files, try it for one day first, see if it works, and let you know. It is reassuring that you were able to run the first 6 hours without any issues.

Regarding the processing of the ERA5 data: I plan to run this case for the entire year of 2020 and wonder what the best strategy for that is. Maybe just add all hours in a day to stat = WPSUtils.intermediate.write_met_init('ERA5_PL', '2020-01-10_00') and run the script for each day of the year, given that the ERA5 pressure-level files are available as daily rather than monthly files?
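
Something like the following is what I have in mind (a sketch only: it assumes write_met_init takes a file prefix and a YYYY-MM-DD_HH date string, as in the call above, and the reading of the ERA5 fields in convert_pl.py/convert_sfc.py would of course have to move inside the loop as well):

    from datetime import datetime, timedelta

    import WPSUtils  # the f2py-compiled module from intermediate.F90

    start = datetime(2020, 1, 1, 0)
    end = datetime(2020, 12, 31, 21)
    step = timedelta(hours=3)  # or hours=1 for hourly boundary updates

    t = start
    while t <= end:
        stamp = t.strftime('%Y-%m-%d_%H')
        # one pressure-level and one surface intermediate file per valid time
        stat = WPSUtils.intermediate.write_met_init('ERA5_PL', stamp)
        stat = WPSUtils.intermediate.write_met_init('ERA5_SFC', stamp)
        t += step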

I am also wondering if I have to do all this today, since I see a note that read access to the ERA5 GRIB files on the RDA will cease on February 20th, 2025 (NCAR RDA Dataset d633000).

Thank you so much for your help!
Julia
 
Or is there a way to simply create the new land-sea mask with Python and add/replace only that variable in the intermediate files that I have already created for the entire year in /glade/derecho/scratch/kukulies/era5/?
 
Hi Julia,
Would you please clarify how you intend to run MPAS over the 1-year period? How frequently do you want to reinitialize MPAS, and how long will you run after each initialization?
 
Hi Ming,

Yes, of course. This will be part of a bigger project and its most costly run. We intend to do a global free run with no re-initialization. We are aware that this will most likely lead to a simulation that drifts quite far from the realistic state, but we want to understand how this drift in the large-scale circulation would affect regional weather statistics.

Thanks,
Julia
 
This is great! Thank you very much, Ming, for looking into this issue.
I can run your code to generate the ERA5 intermediate files from the RDA ds633 ERA5 netCDF files, and I am able to run MPAS-NoahMP on the 15-3km mesh; it is now integrating over 1 hour with no crash. Hurray!

Julia - I thought the RDA will no longer store the ERA5 GRIB files but will still keep the netCDF files, so this code, which uses the netCDF files, shouldn't be affected, right?

Thanks again and much appreciated!
Zhe
 
Julia,

In this case, I suppose you will only initialize MPAS at 2020-01-01_00 and then run MPAS continuously over the 1-year period. Please let me know if I am wrong.

If my understanding is correct, then you will only need an intermediate file for that single time, for the initialization. I don't think you need to process data for the entire year.

Please let me know if you have more questions.
 
Hurray - thanks for the update, Zhe! That is great news :) And you are completely right: I can use Ming's code with the netCDF files. I did not look carefully enough before. Thanks, Ming, your code works well and is very helpful!!
 
Sorry, Ming, for the confusion. You are right that I only need the initialization file for 2020-01-01_00 for the global run. One of the runs we will compare it to, however, will be a limited-area run with either hourly or 3-hourly updates of the boundary conditions. That was the reason I wanted to process the intermediate files for an entire year. But I think I figured out what to change in your Python code to make it work for multiple files. Thank you so much again! I think this solves all the problems :)
 
Hi Julia and Zhe,
Thank you for the update. I am glad it works for you. Please let us know if you have more questions. We appreciate that you are switching from WRF to MPAS, and we will do our best to help make the switch as smooth as possible.
Ming
 