
Spatially averaging data

This post was from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled, and if you have follow-up questions related to this post, please start a new thread from the forum home page.


New member

I am wondering if you could give me some direction on how to compute a horizontal spatial average of data during a WRF simulation. I was able to write code that works perfectly in serial, but in parallel I run into issues, presumably due to the distributed memory. Below are the steps that I implemented.

1- First I created a variable in Registry.EM_COMMON as shown below
state real sp_avg_u1 k misc 1 - rh "sp_avg_u1" "Spatial average of u" "m s-1"

2- Second I created an averaging function and added it to dyn_em/solve_em.F as shown below

# include ""
! Calculate Spatial average
!$OMP PARALLEL DO &
!$OMP PRIVATE ( ij )
DO ij = 1 , grid%num_tiles
   CALL calculate_horizontal_spatial_average ( grid%u_2, grid%sp_avg_u1,  &
                                 ids, ide, jds, jde, kds, kde,            &
                                 ims, ime, jms, jme, kms, kme,            &
                                 grid%i_start(ij), grid%i_end(ij),        &
                                 grid%j_start(ij), grid%j_end(ij),        &
                                 k_start, k_end )
END DO
!$OMP END PARALLEL DO

The code takes the x-velocity as input, interpolates it to the mass (pressure) points, and then sums it into a 1D array sp_avg_u1 by model level.
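The per-level summation described above can be sketched as follows. This is a plain-Python illustration of the logic, not the WRF Fortran itself; the array shapes and the function name are hypothetical:

```python
# Sketch (hypothetical sizes): average the staggered x-velocity u[k][j][i]
# to the mass points, then sum each level k into a 1-D vertical array.

def horizontal_level_sums(u, nk, nj, ni):
    """u is staggered in i (ni+1 points per row); returns the per-level sums
    of u interpolated to the ni mass-grid points of each row."""
    sums = [0.0] * nk
    for k in range(nk):
        for j in range(nj):
            for i in range(ni):
                # linear interpolation from the two surrounding u-points
                u_mass = 0.5 * (u[k][j][i] + u[k][j][i + 1])
                sums[k] += u_mass
    return sums

nk, nj, ni = 3, 4, 5
u = [[[1.0] * (ni + 1) for _ in range(nj)] for _ in range(nk)]
sums = horizontal_level_sums(u, nk, nj, ni)
# with u == 1 everywhere, each level sums nj*ni mass points of value 1.0
```

In serial this is the whole job; in parallel each rank only sees its own patch of (i, j), which is where the trouble described below comes from.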

This code works perfectly in serial, but when I run it in parallel (e.g., mpirun -np 4 ./wrf.exe) I get errors.

Is there a module I can use that allows me to send data across different nodes, like the HALO packages? Or do you know of a module that does MPI messaging that I could look at and adapt?
Take a look at this post:
This isn't the exact issue you are having, but there may be some useful information there to help guide you regarding coding for distributed memory.
Hi Kwerner,

Thanks a lot for your help, but due to the nature of the arrays used in the above post, I don't think it works for me.

In the above post they are talking about arrays that are horizontal in nature (i, j), while my array is vertical (k). During a parallel run, the code allocates different parts of a horizontal array to different nodes. For example, if we had an array called U22 with indices i:0->100 and j:0->80 and we ran on 4 nodes, then the first node would hold U22(i:0->49, j:0->39), the second U22(i:50->100, j:0->39), the third U22(i:0->49, j:40->80), and the fourth U22(i:50->100, j:40->80). The original horizontal array U22 thus gets divided across the 4 nodes, and if someone wants the boundary values exchanged, all they have to do is call the proper HALO package.
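The 2x2 tiling described above can be sketched as follows (a plain-Python illustration of the decomposition only; the function name and the even-split rule are assumptions, not how WRF actually assigns patches):

```python
# Sketch: split an (ni x nj) horizontal index space across a px x py
# process grid, returning inclusive (i0, i1, j0, j1) bounds per rank.

def patch_bounds(ni, nj, px, py):
    bounds = {}
    rank = 0
    for pj in range(py):
        for pi in range(px):
            i0 = pi * ni // px
            i1 = (pi + 1) * ni // px - 1
            j0 = pj * nj // py
            j1 = (pj + 1) * nj // py - 1
            bounds[rank] = (i0, i1, j0, j1)
            rank += 1
    return bounds

b = patch_bounds(101, 81, 2, 2)   # i:0->100, j:0->80 as in the example
# b[0] is the lower-left patch, b[3] the upper-right patch
```

Each rank then only ever loops over its own bounds, which is why a halo exchange is needed at patch edges but a plain loop suffices inside a patch.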

My problem is that I have a vertical array (k), and during a parallel run each node has its own copy of this vertical array. The elements of a vertical array do not get distributed the way a horizontal array's do. What I want is for all nodes to send their version of the vertical array to the master node so it can sum them up.
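The operation being asked for here is exactly an elementwise sum reduction across ranks (what MPI_Reduce with MPI_SUM does on an array). A serial simulation of it, with hypothetical per-rank data:

```python
# Sketch: each rank holds its own 1-D vertical array of partial per-level
# sums; the global array is the elementwise sum across ranks. This is what
# MPI_Reduce(..., MPI_SUM, ...) computes; simulated serially here.

def reduce_sum(per_rank_arrays):
    nk = len(per_rank_arrays[0])
    total = [0.0] * nk
    for partial in per_rank_arrays:
        for k in range(nk):
            total[k] += partial[k]
    return total

# four ranks, three levels each (made-up values)
partials = [[1.0, 2.0, 3.0]] * 4
total = reduce_sum(partials)
```

If every rank (not just the master) needs the result, the allreduce variant of the same operation applies instead.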

Appreciate any help you can provide Kwerner and thank you for your time again.

Take a look at the online presentation:

Specifically, example #4 is "Compute a Diagnostic". In this case, a global horizontal sum (and therefore a horizontal average) is computed.
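Once the cross-rank per-level sums exist, the averaging step is just a division by the total number of horizontal points in the domain. In WRF the cross-rank sum itself would come from one of the dm communication utilities (a routine such as wrf_dm_sum_reals in frame/module_dm.F; check the name against your WRF version). A minimal sketch of the final step:

```python
# Sketch: turn per-level global sums into per-level horizontal averages
# by dividing by the total number of horizontal (mass) points.

def horizontal_average(global_sums, n_points):
    """global_sums[k] is the domain-wide sum at level k over n_points columns."""
    return [s / n_points for s in global_sums]

avg = horizontal_average([20.0, 40.0], 20)
```

Note that n_points must be the domain-wide count (ide-ids by jde-jds on the mass grid), not a single patch's count, or the average comes out wrong on every rank.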
Okay, I had missed that part of the instructions; it's good to see it can be done. Thanks for reminding us...