
WRF V4.0 - compilation errors

This post is from a previous version of the WRF & MPAS-A Support Forum. New replies have been disabled. If you have follow-up questions related to this post, please start a new thread from the forum home page.

hristova

New member
Hello,

I am having trouble compiling WRF V4.0 for the em_real case.

I used option 24 (dmpar) for architecture and 1 for nesting. See attached configuration and compilation log file.

Would you please help me find the problem?

Thank you!!
---------------------------------------------------------------------------
Dr. Svetla Hristova-Veleva Jet Propulsion Laboratory
Svetla.Hristova@jpl.nasa.gov Pasadena, CA 91109-8099
---------------------------------------------------------------------------
 

Attachments

  • configure.wrf
    22.2 KB
  • compile_em_real.log
    1 MB
The error message below:

module_io_int_read.f(105): error #5102: Cannot open include file 'mpif.h'
include "mpif.h"
------------^

indicates that the issue is not specific to WRF. The problem may be in the installation of MPI, or it may be related to your shell environment. If possible, it may be easiest to consult with your IT staff to work through these issues.
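As a first quick check (a sketch only; MPICH-style wrappers support `-show`, while other MPI stacks use a different flag), you can ask the MPI compiler wrapper where it looks for headers:

```shell
# Locate the MPI Fortran wrapper on PATH and, if present, print the
# full compile line it generates; the -I flag in that output is the
# directory where mpif.h is expected to live.
command -v mpif90 && mpif90 -show || echo "mpif90 not found on PATH"
```

If `mpif90` is missing or the `-I` path it reports does not contain `mpif.h`, the wrapper is not pointing at a working MPI installation.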
 
Thank you very much for your response!

I am now using a bash shell. Would you recommend switching to another shell? Which one?

Thank you again!

Svetla
 
We use both the bash shell and the C shell. Have you talked to your IT staff? It is probably best to talk to them first and make sure MPI is installed correctly.
 
Thank you for information!

I spoke with the IT support but they have not been able to help me yet.

Following the instructions on how to install WRF (http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php#STEP4), I installed the MPICH library (mpich-3.0.4) myself. The library is in a subdirectory called LIBRARIES, parallel to the WRF subdirectory. I've successfully run all the compatibility tests listed on the WRF tutorial site.

What do I need to check to confirm that MPI is installed correctly? I am trying to compile on the TACC system (the Texas Advanced Computing Center). The module that is loaded is cray_mpich/7.7.3.
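In case it helps, here is roughly what I can check from my side (the MPICH path below is a guess at my install location; LIBRARIES actually sits next to the WRF directory):

```shell
# Assumed install prefix; adjust to wherever LIBRARIES actually sits.
MPICH_DIR=$HOME/LIBRARIES/mpich
# 1) Which mpif90 is first on PATH - my own MPICH build or cray_mpich?
command -v mpif90 || echo "mpif90 not on PATH"
# 2) Does that installation actually ship mpif.h?
[ -f "$MPICH_DIR/include/mpif.h" ] && echo "mpif.h found" || echo "mpif.h not found under $MPICH_DIR/include"
```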

I would really appreciate any advice you could provide!

Best regards,

Svetla
 
Can you try compiling the attached test program with
Code:
mpif90 -o mpi_test.o -c mpi_test.f90
and see whether that works? This test program includes the "mpif.h" file, the path to which should be set by your mpif90 compiler wrapper.
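The attached mpi_test.f90 is not reproduced here, but a minimal program along these lines (an illustrative stand-in, not necessarily the actual attached file) would exercise the include path in the same way:

```fortran
program mpi_test
  implicit none
  include 'mpif.h'   ! resolved via the path supplied by the mpif90 wrapper
  integer :: ierr, rank
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  print *, 'Hello from rank', rank
  call MPI_Finalize(ierr)
end program mpi_test
```

If the compile fails on the include line with error #5102, the wrapper's include path does not contain a usable mpif.h.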
 
This is not a WRF issue, I think; it is related to your system. I am sorry that I don't know how to fix it.
 
Thank you for trying!

I'll try some more options and might get back to you ...:)

One question: I see that I can load either the serial version of netCDF or a parallel one. Which should I load if I want to compile with dmpar (option 24)?

Thank you again for all your support!
 
Please load the serial version of netCDF.
Please keep me updated about the progress. I do hope you can find a solution, which might be helpful to others who have the same issue.
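On a module-based system like TACC, loading the serial build would look roughly like this (module names are assumptions; check `module avail netcdf` on your system):

```shell
# Load the serial netCDF module (not a parallel-netcdf module)
module load netcdf
# Point WRF's configure at it, as the TACC startup files suggest
export NETCDF=$TACC_NETCDF_DIR
```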
 
Dear Ming Chen,

Thank you for your support and willingness to help me!

I believe I was able to overcome the majority of the problems - I found the proper way to load some of the needed modules, and I no longer see the "catastrophic" error messages saying that "mpif.h" was not found.

However, I am still unable to compile WRF and create an executable. I think the new error messages are still related to MPI, but this time something else is missing. I am attaching two files: the configure.wrf and the log file with the error messages from trying to compile em_real. At the bottom of this message is my .bashrc file, included to show how I am loading the modules and setting some environment variables.

Please take a look at these files and let me know what you think I am still missing in terms of libraries, paths, and environment variables.

I really appreciate all your help!!!

Best regards,

Svetla
--------------------------------------------------------
bash-3.2$ more .bashrc
# -*- shell-script -*-
# TACC startup script: ~/.bashrc version 2.1 -- 12/17/2013


# This file is NOT automatically sourced for login shells.
# Your ~/.profile can and should "source" this file.

# Note neither ~/.profile nor ~/.bashrc are sourced automatically by
# bash scripts. However, a script inherits the environment variables
# from its parent shell. Both of these facts are standard bash
# behavior.
#
# In a parallel mpi job, this file (~/.bashrc) is sourced on every
# node so it is important that actions here not tax the file system.
# Each nodes' environment during an MPI job has ENVIRONMENT set to
# "BATCH" and the prompt variable PS1 empty.

#################################################################
# Optional Startup Script tracking. Normally DBG_ECHO does nothing
if [ -n "$SHELL_STARTUP_DEBUG" ]; then
DBG_ECHO "${DBG_INDENT}~/.bashrc{"
fi

############
# SECTION 1
#
# There are three independent and safe ways to modify the standard
# module setup. Below are three ways from the simplest to hardest.
# a) Use "module save" (see "module help" for details).
# b) Place module commands in ~/.modules
# c) Place module commands in this file inside the if block below.
#
# Note that you should only do one of the above. You do not want
# to override the inherited module environment by having module
# commands outside of the if block.

if [ -z "$__BASHRC_SOURCED__" -a "$ENVIRONMENT" != BATCH ]; then
export __BASHRC_SOURCED__=1

##################################################################
# **** PLACE MODULE COMMANDS HERE and ONLY HERE. ****
##################################################################

# module load git
#module purge
module load TACC


module load gcc/6.3.0
module load gcc/7.1.0
module load intel/17.0.4
module load intel/18.0.0
module load intel/18.0.2
# module load parallel-netcdf/4.3.3.1
module load netcdf/4.3.3.1
module load impi/18.0.2

fi

############
# SECTION 2
#
# Please set or modify any environment variables inside the if block
# below. For example, modifying PATH or other path like variables
# (e.g LD_LIBRARY_PATH), the guard variable (__PERSONAL_PATH___)
# prevents your PATH from having duplicate directories on sub-shells.

if [ -z "$__PERSONAL_PATH__" ]; then
export __PERSONAL_PATH__=1

###################################################################
# **** PLACE Environment Variables including PATH here. ****
###################################################################

# export PATH=$HOME/bin:$PATH
export NETCDF=$TACC_NETCDF_DIR
export PATH=$TACC_IMPI_BIN:$PATH
export LD_LIBRARY_PATH=$TACC_IMPI_LIB:$LD_LIBRARY_PATH
export CPATH=$TACC_IMPI_INC:$CPATH



fi

########################
# SECTION 3
#
# Controling the prompt: Suppose you want stampede1(14)$ instead of
# login1.stampede(14)$
#
#if [ -n "$PS1" ]; then
# myhost=$(hostname -f) # get the full hostname
# myhost=${myhost%.tacc.utexas.edu} # remove .tacc.utexas.edu
# first=${myhost%%.*} # get the 1st name (e.g. login1)
# SYSHOST=${myhost#*.} # get the 2nd name (e.g. stampede)
# first5=$(expr substr $first 1 5) # get first 5 character from $first
# if [ "$first5" = "login" ]; then
# num=$(expr $first : '[^0-9]*\([0-9]*\)') # get the number
# HOST=${SYSHOST}$num # HOST -> stampede1
# else
# # first is not login1 so take first letter of system name
# L=$(expr substr $SYSHOST 1 1 | tr '[:lower:]' '[:upper:]')
#
# # If host is c521-101.stampeded then
# HOST=$L$first # HOST -> Sc521-101
# fi
# PS1='$HOST(\#)\$ ' # Prompt either stampede1(14)$ or Sc521-101(14)$
#fi
#####################################################################
# **** Place any else below. ****
#####################################################################

# alias m="more"
# alias bls='/bin/ls' # handy alias for listing a large directory.

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

##########
# Umask
#
# If you are in a group that wishes to share files you can use
# "umask". to make your files be group readable. Placing umask here
# is the only reliable place for bash and will insure that it is set
# in all types of bash shells.

# umask 022

###################################
# Optional Startup Script tracking

if [ -n "$SHELL_STARTUP_DEBUG" ]; then
DBG_ECHO "${DBG_INDENT}}"
fi
 

Attachments

  • configure.wrf
    22.2 KB
  • compile_em_real.05.Ritu.SHV.log
    630.8 KB