WRF compilation not finding librsl_lite.a


Hello,
I'm new to WRF and my goal is to build an MPI-enabled (dm) executable. The WRF version is 4.1, the OS is Ubuntu 18.04, and the GNU compilers are version 7.4.0. However, the compilation (em_real) produces the error:
Code:
gfortran: error: /home/ubuntu/WRF/external/RSL_LITE/librsl_lite.a: No such file or directory
It is not building that library, and scrolling up shows another error message:
Code:
f951: Fatal Error: Reading module ‘duplicate_of_driver_constants’ at line 1 column 2: Unexpected EOF
That file is in /home/ubuntu/WRF/external/RSL_LITE. I don't see any other errors.
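For completeness, this is roughly how I am confirming that the library really was not produced (the paths reflect my install location and are only an example):
Code:
ls -l /home/ubuntu/WRF/external/RSL_LITE/librsl_lite.a   # the archive the linker is asking for
ls -l /home/ubuntu/WRF/external/RSL_LITE/*.mod           # module files produced in that directory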
Thanks.
 
Hi,
Can you attach your configure.wrf file, along with the full compile log? To attach files, while the text box is open, click on the tab below with the 3 horizontal lines and follow the directions for attaching files.
 
Hi @kwerner,
I have two compilation logs: the regular output file (from ./compile_em > output) and another that I created from the screen output (showing the error). Both files are attached. Thanks.
 

Attachments

  • configure.wrf (20.1 KB)
  • output_GNUcompilers.txt (49.7 KB)
  • outputscreen_GNUcompilers.txt (8 KB)
Thanks for sending those. I'd like you to go through this compiling tutorial:
https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php

Follow the steps exactly - even installing the libraries as mentioned (regardless of whether you have them installed elsewhere) and setting the paths as it states. Make sure you pass all of the environment tests, and then the library/compiler tests, before moving on to compile. Before compiling again, set the build to use only 1 processor - something like:
setenv J "-j 1" (or, if using bash: export J="-j 1"). Then make sure to issue a './clean -a' first, then reconfigure, and then recompile, sending the standard error and output to a log file, using this syntax:
Code:
./compile em_real >& log.compile
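For reference, the whole sequence would look roughly like this (a sketch assuming tcsh, as in the tutorial, and a source tree at ~/WRF; adjust the shell syntax and paths for your setup):
Code:
cd ~/WRF                           # your WRF source directory
./clean -a                         # wipe the previous build completely
./configure                        # re-select the GNU dmpar option
setenv J "-j 1"                    # restrict the build to one processor (bash: export J="-j 1")
./compile em_real >& log.compile   # capture standard output and error in one log file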
If it fails again, please send the new configure.wrf and log.compile file.
 
Hello @kwerner,
It seems that the WRF compilation with the GNU compilers finally went through. Completing the full tutorial would probably have taken a big chunk of the day, so I just cleaned the previous attempt and sourced the environment variables (that was probably the key to fixing the earlier problems). I also had to remove -DBUILD_RRTMG_FAST=1 from the configure.wrf file; after that the executables were built. One note: it has been a while since I have seen MPICH recommended as the MPI wrapper; I'm using OpenMPI as the main wrapper and Intel MPI as the secondary one.
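In case it helps anyone else, this is roughly how I dropped the flag before recompiling (just a sketch; back up configure.wrf first, since the exact line the flag sits on may differ between configurations):
Code:
cd ~/WRF
cp configure.wrf configure.wrf.bak                   # keep a backup of the generated file
sed -i 's/-DBUILD_RRTMG_FAST=1//g' configure.wrf     # remove the fast-RRTMG define wherever it appears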
Even though the executables were built and the code starts running, it fails with the following error:
Code:
PMIX ERROR: UNREACHABLE in file server/pmix_server.c at line 2193
Does PMIx need to be installed on the system?
[Just for clarification, the run producing this error is the CONUS benchmark (downloaded from Globus), so I'm also wondering whether the error might be specific to that particular case.]
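In case it is relevant, this is how I have been checking what PMIx support my OpenMPI build actually has (just a diagnostic sketch; whether pmix_info exists depends on whether a standalone PMIx is installed):
Code:
mpirun --version                       # confirm which Open MPI launcher is being picked up
ompi_info | grep -i pmix               # list the PMIx components this Open MPI was built with
which pmix_info && pmix_info | head    # if a standalone PMIx is installed, show its version info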
Thanks.
 
Hi,
I'm glad that you were finally able to get it compiled. That is great news!
As for the PMIX problem, I'm not exactly sure what that is. That is not part of WRF, and unfortunately may be something you'll need to discuss with a systems administrator at your institution.
 
Hi @kwerner,
PMIx (Process Management Interface for Exascale) is a process-management standard/library for HPC, particularly targeting exascale infrastructure. It is still under development, but release 3.1.4 is contemporary with the WRF version found on Globus, which led me to think it might have been included to improve the benchmark's performance. Whether or not that is intentional, the WRF build seems to be triggering OpenMPI to look for PMIx-enabled infrastructure. I can begin troubleshooting it from here and will contact the OpenMPI team if necessary.
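My first troubleshooting step will be to take WRF out of the picture and launch something trivial through the same mpirun (a minimal sanity check, nothing WRF-specific):
Code:
mpirun -np 2 hostname   # if this also triggers the PMIX error, the problem is in the MPI runtime itself, not in WRF
# if it is clean, rerun wrf.exe from the benchmark directory and compare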
Thanks.
 