I'm running 1 km simulations (soon moving down to 250 m) and I'm running into memory issues when using 'getvar' to extract variables.
I can get T2, rh2, td2, uvmet10, uvmet10_wspd_wdir, and slp for all times, but the remaining variables (excluding lat/lon/ter) require a single time index. I've tried creating a loop that goes through each time index and combines the individual variables into one dataset, but it crashes my Jupyter kernel due to memory issues. For example, I have 181 time steps, so for 'p' I loop through all 181 wrfout files, use 'getvar' to extract 'p', and then combine the results into one new dataset containing all times for 'p'. I've also tried running this through SLURM on my university's HPC, but that crashes as well.
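For reference, this is roughly what the per-variable loop looks like (simplified; the wrfout file pattern is just a placeholder for my actual output names):

import glob
import xarray as xr
from netCDF4 import Dataset
from wrf import getvar

# one wrfout file per output time, 181 in total
files = sorted(glob.glob("wrfout_d03_*"))

p_slices = []
for f in files:
    nc = Dataset(f)
    # pressure for the single time index in this file
    p_slices.append(getvar(nc, "p", timeidx=0))
    nc.close()

# stitching the 181 slices together is where the kernel dies
p_all = xr.concat(p_slices, dim="Time")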
Is there a way to set this up so the processing happens after WRF finishes running, using something like a .csh or .sh script on Linux, so the variables are pre-processed and ready for visualization rather than having to process them in a Jupyter notebook?
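What I have in mind is something like the sketch below (extract_one_var.py and the file pattern are just placeholder names), a standalone script I could call from a .sh or SLURM wrapper once per variable, writing each time step straight to disk so nothing large sits in memory:

#!/usr/bin/env python
# usage (from a shell or SLURM script):  python extract_one_var.py p wrfout_d03_*
import sys
import os
from netCDF4 import Dataset
from wrf import getvar

varname = sys.argv[1]          # e.g. "p"
files = sorted(sys.argv[2:])   # wrfout files expanded by the shell

for f in files:
    nc = Dataset(f)
    da = getvar(nc, varname, timeidx=0)   # single time index per file
    nc.close()
    # the projection attribute is a Python object and can't be serialized to netCDF
    da.attrs.pop("projection", None)
    # one small netCDF per time step; these can be reopened later with
    # xarray.open_mfdataset() for plotting
    da.to_netcdf(varname + "_" + os.path.basename(f) + ".nc")

That way each variable gets its own set of files and memory use stays at roughly one time step per variable.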
I appreciate any help!