
Extracting timeseries data from idealized LES


sudheer

Hello,

I am trying to extract timeseries data for velocity profiles during an idealized LES run. I looked at module_initialize_les.F and found that nl_set_cen_lat(1,40) and nl_set_cen_lon(1,-105) are called, so I used 40 and -105 as the lat and lon in my tslist file, but got nothing: no timeseries files were created. Also, when I checked the XLAT and XLONG variables in the wrfinput_d01 file created by ideal.exe, they were set to 0.

Is there a namelist option I am missing or does it require any code modifications?

Below are the contents of my tslist file:
Code:
#-----------------------------------------------#
# 24 characters for name | pfx |  LAT  |   LON  |
#-----------------------------------------------#
LES Center location        test    40.    -105.

Thank you
 
Hi,
This option for idealized cases was actually just added to our code repository, to be released in our upcoming v4.1 version. In the meantime, you can read about it here and make the modifications yourself:
https://github.com/wrf-model/WRF/commit/43ac4e93fb29a753893152d8e228f3b8c8b96f42
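
If I remember right, once that code is in place the new behavior is controlled by a namelist flag (the grid%tslist_ij flag you'll see in the diff), which I believe lives in &domains. Something like the sketch below; please check the commit and the README.tslist file in the run directory for the exact name and placement, as I'm writing this from memory:

Code:
&domains
 max_ts_locs = 5,      ! maximum number of tslist entries (the value here is just an example)
 tslist_ij   = .true., ! treat the two tslist columns as (i,j) grid indices instead of (lat,lon)
/

With that flag set, the two location columns in tslist are read as grid indices rather than latitude and longitude.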

If you are familiar with GitHub, you can clone the wrf-model/WRF repository and then check out the 'develop' branch, which should have the modified code to allow you to do this.
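
If it helps, the steps are roughly the following (just a sketch, assuming you want the development code rather than a released tag):

Code:
git clone https://github.com/wrf-model/WRF.git
cd WRF
git checkout develop
./configure                      # pick your compiler and nesting options as usual
./compile em_les >& log.compile

The configure and compile steps are the same as for a released version; only the source you start from is different.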
 
Hi,

Interestingly, I modified the share/wrf_timeseries.F file myself to do a similar task.

For now, I have hard-coded it to output time series at a fixed (i,j) that I specify inside wrf_timeseries.F.

I just had to bypass this check, which never activates in ideal LES cases:

Code:
grid%next_ts_time = 1

         grid%ntsloc_domain = 1 !ntsloc_temp <- Making sure to enter the loop

         DO k=1,grid%ntsloc_domain

and then hard-coding the locations.

Code:
DO i=1,grid%ntsloc_domain
! SR: Test: Hard-coding i,j locations - Feb-12-2019 - 
      ix = 20  !grid%itsloc(i)
      iy = 20 !grid%jtsloc(i)

This worked for me when I ran a test case overnight.

Thanks for pointing out the new build; I will get it from GitHub. Yesterday I checked the updates/changes for v4.0 on the WRF users' page and didn't find anything that could solve my problem. This is an interesting development; I am sure many others running high-resolution idealized simulations (like me) will be relieved.

Thank you
Sudheer
 
Great! I'm glad this was helpful for you, and yes, I hope it will be helpful to others in the future. Thanks for updating!
 
Hi Kelly,

I got the WRF release-4.1 code from GitHub and compiled it successfully, and I can see the new changes made to wrf_timeseries.F. However, when I try to run a test case in parallel using the em_les example, it gets stuck at some point in the code. Everything works fine when I run the same case in serial.

I added some test messages to debug the timeseries code, and it looks like, when running in parallel, the solver never gets into the last IF block in the snippet below.

Code:
         ! Ideal case (which has a cartesian coordinate) or specified (i,j) in tslist
            IF (config_flags%map_proj == 0 .OR. grid%tslist_ij) THEN
               ts_rx = grid%itsloc(k)
               ts_ry = grid%jtsloc(k)
                 WRITE(message, '(A43,I3)') 'Test 1 ', grid%itsloc(k)
                 CALL wrf_message(message)
            ! Real-data case with input locations provided as (lat,lon)
            ELSE
               CALL latlon_to_ij(ts_proj, grid%lattsloc(k), grid%lontsloc(k), ts_rx, ts_ry)
            END IF

            WRITE(message, '(A43,I3)') 'Test 2 ', grid%jtsloc(k)
            CALL wrf_message(message)

            ntsloc_temp = ntsloc_temp + 1
            grid%itsloc(ntsloc_temp) = NINT(ts_rx)
            grid%jtsloc(ntsloc_temp) = NINT(ts_ry)
            grid%id_tsloc(ntsloc_temp) = k

            ! Is point outside of domain (or on the edge of domain)?
            IF (grid%itsloc(ntsloc_temp) < ids .OR. grid%itsloc(ntsloc_temp) > ide .OR. &
                grid%jtsloc(ntsloc_temp) < jds .OR. grid%jtsloc(ntsloc_temp) > jde) THEN
               ntsloc_temp = ntsloc_temp - 1
               WRITE(message, '(A43,I3)') 'Test 3 ', grid%id
               CALL wrf_message(message)
            END IF

This is the output from the rsl output file written by WRF when running in parallel. It gets stuck here and does not proceed any further.

Code:
Timing for processing wrfinput file (stream 0) for domain        1:    0.46871 elapsed seconds
Max map factor in domain 1 =  0.00. Scale the dt in the model accordingly.
start_em: initializing avgflx on domain   1
Computing time series locations for domain   1
                                    Test 1  10
                                    Test 2  10

This is the output when the same case is run in serial.
Code:
Timing for processing wrfinput file (stream 0) for domain        1:    0.47054 elapsed seconds
Max map factor in domain 1 =  0.00. Scale the dt in the model accordingly.
start_em: initializing avgflx on domain   1
Computing time series locations for domain   1
                                    Test 1  10
                                    Test 2  10
Timing for Writing wrfout_d01_0001-01-01_00:00:00 for domain        1:    0.85289 elapsed seconds
 Tile Strategy is not specified. Assuming 1D-Y
WRF TILE   1 IS      1 IE     65 JS      1 JE     65
WRF NUMBER OF TILES =   1
 No land surface physics option is used: sf_surface_physics =            0
solve_em: initializing avgflx at time 0001-01-01_00:00:00 on domain   1
Timing for main: time 0001-01-01_00:00:00 on domain   1:    5.48752 elapsed seconds
Timing for main: time 0001-01-01_00:00:00 on domain   1:    4.16881 elapsed seconds

I am not sure what is wrong. The same test case runs fine in parallel when there is no tslist file present in the directory, so I guess the issue is with the timeseries code.

Thank you
Sudheer
 
Hi,
I'd like to apologize for the delay. We've been busy preparing for our upcoming V4.1 release and have gotten a bit behind with the forum posts. Thank you for your patience. Can you attach the namelist.input file you're using, along with the tslist?

Thanks,
Kelly
 
Hi Kelly,

Attached is the namelist.input file I am trying to run. For some reason, I am unable to upload the tslist file, so here is its exact content:

Code:
#-----------------------------------------------#
# 24 characters for name | pfx |   i   |    j   |
#-----------------------------------------------#
tower0001                 t0001    10       10

The same namelist file and tslist work fine in serial mode.

I compiled WRF with the sm+dmpar option and am using the "ibrun" command to run on multiple cores.

I am not sure if this information is useful, but I am currently running WRF on the Texas Advanced Computing Center (TACC) supercomputer Stampede2 at the University of Texas at Austin.
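
In case it helps with reproducing this, the run itself is launched roughly like this (simplified from my actual job script; ibrun takes the number of MPI tasks from the Slurm job allocation on Stampede2):

Code:
# inside a Slurm batch job on Stampede2
ibrun ./wrf.exe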

Thanks,
Sudheer
 

Attachments

  • namelist.input
Hi,
I just want to let you know that I've been able to reproduce the problem, and it happens with real-data cases too. We are looking into it. In the meantime, I found that even if you compile the em_les case in parallel mode, it will work if you run it on only 1 processor. If you are planning to use the namelist that you sent me, you should be able to run it on 1 processor with no problems, as the domain is very small. I'll let you know when we figure out a fix for the multiple-processor problem.
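
In other words, something like this should work for now (I believe ibrun on Stampede2 also accepts a -n flag to limit the task count, but requesting a single task in your job script does the same thing):

Code:
# run the parallel-compiled em_les case on a single MPI task as a workaround
ibrun -n 1 ./wrf.exe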
 
Hi,
We have found a solution to the bug. Take a look at this pull request on GitHub that addresses this issue:
https://github.com/wrf-model/WRF/pull/836

If you click on the 'Files changed' tab, you can see the file that was changed and the modifications. If you make these changes to your code, you'll need to recompile afterward (but you don't need to issue a 'clean -a' or reconfigure; simply recompile).
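
For reference, the recompile is just the usual compile command for your case, for example:

Code:
# no './clean -a' and no re-run of ./configure needed after editing the file
./compile em_les >& log.recompile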

Kelly
 