WRF output precision

This post was from a previous version of the WRF&MPAS-A Support Forum. New replies have been disabled. If you have follow-up questions related to this post, please start a new thread from the forum home page.

sapostal

New member
Hi,

Is there a Fortran source file or module where the WRF output file (the wrfout file) is written? Some variables in wrfout are truncated to one digit to the right of the decimal point, while others are truncated to five or six digits to the right of the decimal point. I do not know how the WRF source code is structured, but I need access to as much precision as possible. Is there a file I can edit to change the precision of the variables written to wrfout?

Thanks,
Sara
 
Sara,

Can you try building WRF in double precision? You can type

./clean -a

then

./configure -r8

This will give you higher-precision output.
 
Hi Ming,

Thanks for your response. I will try your suggestion. However, I think the problem is that values are truncated by the Fortran write routines when they are written to wrfout. What I want is non-truncated output.

Thank you,
Sara
 
Hi Ming,

There are some variables in wrfout that are truncated. I have even tried building WRF in double precision (./configure -r8), but with both the single-precision and double-precision builds, variables are truncated to a specific number of digits to the right of the decimal point (please see a small part of each output below, produced with the NCL print command).

I think the problem is that values are truncated by the Fortran write functions when they are written to wrfout.

The reason I need non-truncated values is that, for my dissertation, I have implemented code that pre-rounds values so that they can be added in any order and always give the same result. For example, when doing a sum reduction with MPI_Allreduce, it will not matter how many nodes are used or in which order the values on the different nodes are added together. This implementation can be inserted into scientific software wherever there is a sum-reduction computation, and it improves reproducibility.
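For illustration only (this is a generic sketch, not the actual implementation described above), one simple way to make a sum order-independent is to pre-round every value onto a fixed absolute grid so that every subsequent addition is exact. A minimal Fortran sketch, assuming all values and partial sums stay within a range that remains exactly representable after scaling, might look like this:

program prerounded_sum
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer, parameter :: keep_bits = 30        ! assumed number of bits kept right of the binary point
  real(dp), parameter :: grid = 2.0_dp**keep_bits
  real(dp) :: vals(5), s_forward, s_reverse
  integer :: i

  vals = [ 1.0e-8_dp, 3.14159_dp, -2.71828_dp, 1.0e6_dp, 0.5_dp ]

  ! Pre-round each value to the nearest multiple of 2**(-keep_bits).
  ! After this step all values sit on the same absolute grid, so every
  ! addition below is exact, provided the running sum stays small enough
  ! to be represented without rounding.
  do i = 1, size(vals)
     vals(i) = anint(vals(i) * grid) / grid
  end do

  s_forward = 0.0_dp
  do i = 1, size(vals)
     s_forward = s_forward + vals(i)
  end do

  s_reverse = 0.0_dp
  do i = size(vals), 1, -1
     s_reverse = s_reverse + vals(i)
  end do

  ! Both orderings give bitwise-identical sums.
  print *, 's_forward = ', s_forward
  print *, 's_reverse = ', s_reverse
end program prerounded_sum

The trade-off is that accuracy is limited by the chosen grid spacing, which is why a real implementation would pick the scaling from the data range rather than hard-coding it.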

However, the truncation of output values currently in WRF hides the reproducibility problem that occurs in a global sum reduction. I have seen no difference between WRF output files when running with one MPI task or with two, four, eight, or sixteen MPI tasks.

What I want is non-truncated output so that the algorithm can show how effective it is (that is, I want to see differences in wrfout when running with different numbers of MPI tasks when there is a global sum reduction in the code). I do not know how the WRF source code is structured, but I need access to as much precision as possible. Is there a file I can edit to change the precision of the variables in wrfout?

On a separate note, I do not have a background in atmospheric science. My major is scientific computing, and WRF is the test case in my research. So I apologize if my questions are confusing.

This output is from a single precision build of WRF:
Variable: ZNU
Type: float
Total Size: 480 bytes
120 values
Number of Dimensions: 2
Dimensions and sizes: [Time | 3] x [bottom_top | 40]
Coordinates:
Number Of Attributes: 5
FieldType : 104
MemoryOrder : Z
description : eta values on half (mass) levels
units :
stagger :
.....
.....
(0,15) 0.3242757
(0,16) 0.2992108
(0,17) 0.2756645
(0,18) 0.2535449
(0,19) 0.2327653
(0,20) 0.2132448
(0,21) 0.1949069
(0,22) 0.1776801
(0,23) 0.161497
(0,24) 0.1462944
(0,28) 0.09415331
(0,29) 0.08303083
(0,30) 0.07258224
(0,31) 0.06276669
(0,32) 0.05354582
(0,33) 0.04488363
(1,20) 0.2132448
(1,21) 0.1949069
(1,22) 0.1776801
(1,23) 0.161497
(1,24) 0.1462944
(1,25) 0.1320128
(1,37) 0.01517455
(1,38) 0.008837154
(1,39) 0.002883723
......
......


This output is from a double precision (./configure -r8) build of WRF:
Variable: ZNU
Type: double
Total Size: 960 bytes
120 values
Number of Dimensions: 2
Dimensions and sizes: [Time | 3] x [bottom_top | 40]
Coordinates:
Number Of Attributes: 5
FieldType : 105
MemoryOrder : Z
description : eta values on half (mass) levels
units :
stagger :
....
....

(0,10) 0.4760370012103428
(0,11) 0.4417773607841126
(0,12) 0.4095934070404194
(0,13) 0.3793593804806093
(0,14) 0.3509571409888741
(0,15) 0.3242757061971813
(0,16) 0.2992108178192592
(0,17) 0.2756645342590774
(0,18) 0.2535448479019325
(0,19) 0.2327653255926935
(0,31) 0.06276667497400183
(0,32) 0.05354581784450729
(0,33) 0.0448836242067233
(0,34) 0.03674624635076922
(0,35) 0.02910188729583686
(1,25) 0.1320128033337337
(1,26) 0.1185965353748981
(1,27) 0.105993118000162
(1,28) 0.09415330308224457
.....
.....

Thank you,
Sara
 
There is no truncation of values that happens in WRF's write functions. Likely, what you're seeing is rounding that happens in NCL's print function. Have you tried examining the binary values in the wrfout netCDF files using, e.g., your own C or Fortran code, rather than looking at the decimal representation of values produced by NCL?
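For example, a minimal Fortran sketch using the netCDF Fortran 90 interface could read ZNU directly from a wrfout file and print it with an explicit wide format, so nothing is hidden by default print rounding. The file name below is just a placeholder, and error handling is kept to a minimum:

program dump_znu
  use netcdf
  implicit none
  integer :: ncid, varid, status
  integer :: dimids(2), ntime, nlev, i, k
  real(8), allocatable :: znu(:,:)
  character(len=*), parameter :: fname = 'wrfout_d01_example.nc'   ! placeholder file name

  status = nf90_open(fname, nf90_nowrite, ncid)
  if (status /= nf90_noerr) stop 'error opening file'

  status = nf90_inq_varid(ncid, 'ZNU', varid)
  if (status /= nf90_noerr) stop 'ZNU not found'

  ! In the Fortran interface the fastest-varying dimension comes first,
  ! so dimids(1) is bottom_top and dimids(2) is Time for ZNU.
  status = nf90_inquire_variable(ncid, varid, dimids=dimids)
  status = nf90_inquire_dimension(ncid, dimids(1), len=nlev)
  status = nf90_inquire_dimension(ncid, dimids(2), len=ntime)

  allocate(znu(nlev, ntime))
  status = nf90_get_var(ncid, varid, znu)   ! values are converted to real(8) on read
  if (status /= nf90_noerr) stop 'error reading ZNU'

  ! 17 significant digits are enough to round-trip a 64-bit value exactly.
  do i = 1, ntime
     do k = 1, nlev
        write(*,'(2i4,1x,es24.17)') i, k, znu(k,i)
     end do
  end do

  status = nf90_close(ncid)
end program dump_znu

In NCL itself, formatting values explicitly (for example with sprintf and a wide format string) should also reveal more digits than the default print output.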

Regarding differences due to order of summation in WRF, it may be difficult to find any examples of this. By design, WRF should give the same answers (in a bitwise-exact sense) regardless of the MPI task count. From memory, I can't think of any place in the solver where values at grid cells depend on global reductions that happen independently for each grid cell (though, for example, we may multiply a field by a global value that is reduced on MPI task 0 and then broadcast to all other MPI tasks, ensuring that all tasks use the same value in the calculation). Just about every other numerical operator has been written in a way that ensures values from neighboring grid cells are used in the same order regardless of the position of the grid cell within the piece of the horizontal domain assigned to an MPI task.
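As a generic illustration of the reduce-on-one-task-then-broadcast pattern mentioned above (this is a sketch, not code taken from the WRF source), the idea is that the global value is computed once on task 0 and then broadcast, so every task in a given run uses exactly the same bits in subsequent calculations:

program reduce_then_bcast
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  real(8) :: local_val, global_sum

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  local_val = 1.0d0 / real(rank + 1, 8)   ! example local contribution

  ! Reduce onto task 0 only ...
  call MPI_Reduce(local_val, global_sum, 1, MPI_DOUBLE_PRECISION, &
                  MPI_SUM, 0, MPI_COMM_WORLD, ierr)

  ! ... then broadcast the single result so that all tasks hold identical bits.
  call MPI_Bcast(global_sum, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

  if (rank == 0) print *, 'global_sum = ', global_sum

  call MPI_Finalize(ierr)
end program reduce_then_bcast

Note that this pattern guarantees consistency among tasks within a single run; the value of the sum itself can still depend on how the reduction orders its partial sums across different task counts.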
 
Thanks for clarifying for me that there is no global reduction in WRF; that is what I suspected. I will need to find another case study for my research that really has a global reduction in the code.

Thanks again,
Sara
 