
Translate bottom boundary condition in WRF

This post was from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled; if you have follow-up questions related to this post, please start a new thread from the forum home page.

nikhil003

New member
I am trying to work out how to translate the bottom boundary condition in WRF. I have been digging through the code used for the moving nest, since my case has some similarities with it. As I understand it, WRF compiled with DM+SM decomposes each domain into patches, which live in distributed memory, while the tiles within those patches are handled in shared memory.

In order to have a translating bottom boundary, I will need to copy data between adjacent patches without worrying about the tiles. I am trying to locate code that could perhaps be used as an example. I first thought that I could do it on individual patches, which is not difficult, since I can simply assign the data at (i-1,j) to (i,j). However, while this works out fine for the rightmost grid points, I am not sure how to get the data for the leftmost grid point, as it has to be obtained from the patch to the left of the current patch.
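
To make the indexing concrete, this is the kind of shift I have in mind for a single 2-D field (just a minimal sketch to illustrate the problem; the routine name and bounds handling are placeholders, not my actual code):
Code:
      ! minimal sketch (illustrative only): shift one 2-D field one grid
      ! point to the right, using this patch's memory plus its halo
      subroutine shift_field_east(fld, ids, ide, jds, jde,  &
                                  ims, ime, jms, jme,       &
                                  ips, ipe, jps, jpe)
        implicit none
        integer, intent(in) :: ids, ide, jds, jde   ! domain bounds
        integer, intent(in) :: ims, ime, jms, jme   ! memory (patch + halo) bounds
        integer, intent(in) :: ips, ipe, jps, jpe   ! patch bounds
        real, dimension(ims:ime, jms:jme), intent(inout) :: fld
        integer :: i, j

        do j = jps, min(jpe, jde-1)
          ! sweep right to left so fld(i-1,j) still holds the old value
          do i = min(ipe, ide-1), max(ips, ids+1), -1
            ! when this patch is not at the west edge of the domain, the
            ! source point at i-1 = ips-1 lies in the halo and is owned by
            ! the patch to the left; it is only valid if a halo exchange
            ! has been done before this routine is called
            fld(i, j) = fld(i-1, j)
          enddo
        enddo
      end subroutine shift_field_east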

I would appreciate any suggestion on which code within the WRF code base to look at for this type of operation. I also looked at the code used in wrf_bdy, where periodic boundaries use this idea to some extent; however, it checks whether the boundary indices are located on the same processor. This is just my train of thought, and I would welcome any correction to these ideas, or pointers to the right way to approach this problem.
 
I have tried to put together a subroutine to get the translation working. It seems that I can get what I want if I don't have patches; however, when I have patches, I am unable to assign data from the left column to the right column, i.e., I want to assign values as tsk(i,j) = tsk(i-1,j).

This is the main part of the subroutine, where I assign the values.
Code:
      ! extend the tile bounds by one point, but stay inside the domain
      its = max(i_start-1, ids)
      ite = min(i_end+1, ide)
      jts = max(j_start-1, jds)
      jte = min(j_end+1, jde)

      ! sweep from right to left so the source column (i-1) is not yet overwritten
      do j = jte, jts, -1
        do i = ite, its, -1
          i_grid = max(i-1, 1)
          ! only translate over water points
          if (grid%xland(i, j) .gt. 1.5) then
            if (grid%sf_ocean_physics .eq. 1) then
              grid%TML(i, j)  = grid%TML(i_grid, j)
              grid%HML(i, j)  = grid%HML(i_grid, j)
              grid%HUML(i, j) = grid%HUML(i_grid, j)
              grid%HVML(i, j) = grid%HVML(i_grid, j)
              grid%TSK(i, j)  = grid%TSK(i_grid, j)
            else if (grid%sf_ocean_physics .eq. 2) then
              grid%TSK(i, j)    = grid%TSK(i_grid, j)
              grid%om_ml(i, j)  = grid%om_ml(i_grid, j)
              grid%om_lat(i, j) = grid%om_lat(i_grid, j)
              grid%om_lon(i, j) = grid%om_lon(i_grid, j)
              do k = okms, okme, 1
                grid%om_tmp(i, k, j)   = grid%om_tmp(i_grid, k, j)
                grid%om_u(i, k, j)     = grid%om_u(i_grid, k, j)
                grid%om_v(i, k, j)     = grid%om_v(i_grid, k, j)
                grid%om_w(i, k, j)     = grid%om_w(i_grid, k, j)
                grid%om_s(i, k, j)     = grid%om_s(i_grid, k, j)
                grid%om_depth(i, k, j) = grid%om_depth(i_grid, k, j)
              enddo
            endif
          endif
        enddo
      enddo

The following code calls the above subroutine.

Code:
    !$OMP PARALLEL DO   &
    !$OMP PRIVATE ( ij )
      DO ij = 1 , grid%num_tiles
         CALL move_bottom(grid                                               &
     &        ,IDS=ids,IDE=ide, JDS=jds,JDE=jde, KDS=kds,KDE=kde             &
     &        ,IMS=ims,IME=ime, JMS=jms,JME=jme, KMS=kms,KME=kme             &
     &        ,IPS=ips,IPE=ipe, JPS=jps,JPE=jpe, KPS=kps,KPE=kpe             &
     &        ,I_START=grid%i_start(ij), I_END=min(grid%i_end(ij), ide-1)    &
     &        ,J_START=grid%j_start(ij), J_END=min(grid%j_end(ij), jde-1) )
      ENDDO
    !$OMP END PARALLEL DO

In order to make the above code work, I have defined a HALO as well; however, I am not sure what is going wrong in this whole process. It is a rather trivial modification, yet at the ends of the tiles I get a step change, which should not be the case. I would appreciate suggestions on how to correctly modify the code.
 
Hi,
I would like to first apologize for the long delay in response since your initial email. It seems as though it was overlooked by the person responsible for that section at the time.

As for the question, I have passed this along to someone who will likely know this better than me. They will either respond soon, or I'll update the post when I receive a reply from them. Thank you so much for your patience.
 
There are a few questions:
1. I assume that this moving domain is a nest. Where do you get the upstream information for the nest? From the parent domain?
2. You mention that you have defined a HALO, I assume in the Registry. Do you force this communication prior to using the values of the variables that are off of the patch? (See the sketch after these questions for the usual pattern.)
3. Have you printed out the values of the variables that are off of the patch? For example, when you say "i want to assign values at tsk(i,j) = tsk(i-1,j)", have you been able to print out reasonable values for the RHS when on the patch edge?
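
Regarding (2), the usual pattern in WRF is a halo entry in the Registry, something along the lines of "halo HALO_MY_SHIFT dyn_em 8:tsk,tml,hml,huml,hvml" (the halo name, stencil, and field list here are only placeholders, not the exact ones in your code), and then forcing the generated communication in the solver before the tile loop that uses the off-patch values. A rough sketch, not your exact code:
Code:
! force the halo exchange before the OpenMP tile loop, so that column ips-1
! of the shifted fields already holds valid values from the patch to the left
#ifdef DM_PARALLEL
#  include "HALO_MY_SHIFT.inc"
#endif

      !$OMP PARALLEL DO   &
      !$OMP PRIVATE ( ij )
      DO ij = 1 , grid%num_tiles
         ! ... existing call to move_bottom for this tile ...
      ENDDO
      !$OMP END PARALLEL DO

Regarding (3), a quick check would be to print, for example, grid%TSK(ips-1, jps) from a patch that is not at the west edge of the domain, right after the exchange, to see whether the halo actually contains sensible values at the patch edge.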
 