Runtime error with v3.1.1

This post is from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled; if you have follow-up questions related to this post, please start a new thread from the forum home page.


New member
I get this error message when I try to compile WRF Version 3 with OpenMP parallelism, at the final linking stage when the executables are linked to create wrf.exe:

libwrflib.a(module_tiles.o): In function `module_tiles_mp_set_tiles2_':
module_tiles.f90:(.text+0x2532): undefined reference to `omp_get_max_threads_'

Any ideas how to fix this?

It seems the Intel compiler does not want to do any parallelism...


  • compilation.log (32.4 KB)
  • configure.wrf (21.2 KB)
  • module_tiles.F (16.2 KB)
Can you let me know which version 3 (e.g., 3.8) you are compiling?
Do you know whether the OpenMP library was built with the version of Intel you are using to compile the WRF code?

I do not know for sure.

Intel® Parallel Studio XE was the package I bought. I assumed OpenMP was included in the package.

So do I need to download a separate GNU OpenMP library?
There seems to be a problem with OpenMP generally in my Intel compiler.

When I run a test program (attached) this is what happens:


[vaughanp@anders-lindroth Desktop]$ gfortran -fopenmp omp_hello.f
[vaughanp@anders-lindroth Desktop]$ ./a.out
Hello World from thread = 1
Hello World from thread = 4
Hello World from thread = 2
Hello World from thread = 3
Hello World from thread = 0
Number of threads = 5
[vaughanp@anders-lindroth Desktop]$ ifort -fopenmp omp_hello.f
[vaughanp@anders-lindroth Desktop]$ ./a.out
./a.out: error while loading shared libraries: cannot open shared object file: No such file or directory

So you see there is a problem with the Intel Fortran compiler.


  • omp_hello.f (1.2 KB)
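The "cannot open shared object file" failure at run time typically means the Intel OpenMP runtime (libiomp5.so) is not on the loader's search path, even though the compile and link succeeded. A hedged sketch of the usual diagnosis and fix; the install path below is an assumption and varies between Parallel Studio releases:

```shell
# See which shared library the loader cannot resolve
# (for ifort -fopenmp this is typically libiomp5.so):
ldd ./a.out | grep "not found"

# Parallel Studio ships an environment script that sets LD_LIBRARY_PATH
# for its runtime libraries; the path here is an assumption -- adjust it
# to your install location.
source /opt/intel/bin/compilervars.sh intel64
./a.out
```

Sourcing the vendor environment script in the shell (or in ~/.bashrc) before running is usually enough; a separate GNU OpenMP library is not needed for ifort-built executables.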
I am compiling version 3.1.1 of WRF.

Now I am trying to compile with gfortran instead of the Intel compiler, in the hope that it works with OpenMP.

I now have WRF version 3.1.1 running with OpenMP using the Intel compiler, but there are run-time crashes in places like module_big_step_utilities_em.f90:

-3 85 1 41 -3 85
forrtl: severe (408): fort: (2): Subscript #2 of the array TENDENCY has value 16 which is greater than the upper bound of 1

Image      PC                Routine            Line     Source
wrf.exe    0000000002725D73  module_big_step_u  2834     module_big_step_utilities_em.f90
wrf.exe    00000000027EF9A2  module_em_mp_init  1018     module_em.f90
wrf.exe    00000000028BFA75  module_first_rk_s  194      module_first_rk_step_part1.f90
           00002B31E098CED3  Unknown            Unknown  Unknown

I have moved this question to the wrf.exe section of the forum, as this topic refers to running WRF rather than to HPC. Someone should answer your question soon.
I have a few suggestions about this case:
(1) Is it possible for you to update to a newer version of WRF?
(2) If for some reason you need to stay with WRFV3.1.1, can you rebuild WRF in dmpar mode, then try again?

If the model still crashes, please send me your namelist.input and namelist.wps files so I can take a look. It is also helpful to tell me what data you use as the forcing data.
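For reference, switching a WRF build from smpar (OpenMP) to dmpar (MPI) follows the standard rebuild cycle with the top-level WRF scripts; this is a sketch of the usual sequence, assuming the stock WRFV3 source tree:

```shell
cd WRFV3
./clean -a        # wipe the previous smpar build completely, including configure.wrf
./configure       # choose a (dmpar) option for the Intel compiler from the menu
./compile em_real >& compile.log   # rebuild and capture the log for inspection
```

The `./clean -a` step matters: it removes the old configure.wrf and compiled objects, so the smpar and dmpar builds cannot get mixed together.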
Sorry, it will take too much time to upgrade right now to a newer version of WRF. Maybe next year sometime.

Anyway, I solved the problem I mentioned earlier: I realised there are actually two OMP PARALLEL loops, one nested inside the other, around the microphysics driver. One OMP loop is in solve_em.F around the call to the microphysics driver, and the other is inside the driver itself, around the call to the microphysics routine (#ifndef RUN_ON_GPU).

I guess this might have been a bug (?).

Anyway, when I try to correct this by eliminating the innermost OMP PARALLEL loop inside the microphysics driver, another problem emerges: the remaining OMP loop in solve_em.F runs perfectly, with "ij" looping across the tiles of the thread-wise decomposed domain. The only problem is that this loop does not run in parallel: it executes serially on a single thread.

Attached is the configure.wrf file and the solve_em.F file.

Ideas welcome.


  • configure.wrf (21.2 KB)
  • solve_em.F (260.4 KB)
In fact, when I inspect all OMP loops throughout the file solve_em.F, with commands like this inside each OMP PARALLEL loop:

PRINT *, 'solve World from thread = ', TID

I see the model is not running in parallel at all in the solve_em layer anywhere.

It seems the model can only run in parallel in the files called from the solve layer (e.g. in *part1.F, etc.).
Please recompile WRFV3.1.1 in dmpar mode, then rerun the case. I expect WRF will work fine once compiled in dmpar mode.