ejspeiro
03-19-2013 03:48 PM
Advantages of using mpicc versus gcc with explicit linkage
This thread consists of two questions:
Question number 1:
This is a makefile I just wrote:
Code:
IMPI = /opt/intel/impi/3.2.0.011/include64/
ABLAS = $(HOME)/libraries/BLAS/libblas.a
ALAPACK = $(HOME)/libraries/lapack-3.4.1/liblapack.a
ASCALAPACK = $(HOME)/libraries/scalapack-2.0.2/libscalapack.a
LMPI = -L/opt/intel/impi/3.2.0.011/lib64 -lmpi -lmpigf -lmpigi
HOSTF = mpd.hosts
default: blogs

# Linking stage:
blogs: blogs.o
	gfortran -g -o blogs blogs.o $(ASCALAPACK) $(ALAPACK) $(ABLAS) \
	$(LMPI) -lrt -lpthread -ldl

# Compiling stage:
blogs.o: blogs.c
	gcc -I$(IMPI) -c -DAdd_ -g -Wall -Werror blogs.c

# Utilities:
clean:
	rm -f *.o blogs
	clear

run:
	mpirun -r ssh -f $(HOSTF) -np $(np) `pwd`/blogs

debug:
	mpirun -r ssh -f $(HOSTF) -l -np 1 xterm -sb -wf -e gdb -x blogs.gdbc \
	`pwd`/blogs
What is wrong with it?!? It won't rebuild after I modify my source code.
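For what it's worth, make decides whether to rebuild purely by comparing modification times of a target and its prerequisites. Here is a minimal sketch of that mechanism in a scratch directory (the file names `src` and `out` are illustrative only, not from the Makefile above), which can help rule out a timestamp problem:

```shell
# Sketch: make rebuilds a target only when a prerequisite is newer than it.
dir=$(mktemp -d)
# A one-rule Makefile; note the recipe line must start with a literal tab.
printf 'out: src\n\tcp src out\n' > "$dir/Makefile"
echo v1 > "$dir/src"
make -C "$dir"            # first build: out is created from src
sleep 1                   # ensure the next write gets a newer timestamp
echo v2 > "$dir/src"      # src is now newer than out
make -C "$dir"            # make sees the newer mtime and rebuilds out
result=$(cat "$dir/out")
echo "$result"
rm -rf "$dir"
```

If the equivalent experiment with `blogs.c` and `blogs.o` does not trigger a rebuild, the problem is likely in the rule structure rather than in the commands themselves.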
Question number 2: general knowledge!
Is there any advantage or disadvantage to compiling MPI code as in the Makefile above, that is, invoking the gcc compilers directly with all of the linkage information spelled out, rather than using the mpicc wrapper?
I know that:
Code:
[ejspeiro@node01 scalapack-ex07]$ mpicc -show
gcc -I/opt/intel/impi/3.2.0.011/include64 -L/opt/intel/impi/3.2.0.011/lib64 -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker $libdir -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/3.2 -lmpi -lmpigf -lmpigi -lrt -lpthread -ldl
So I parsed that output and eliminated everything that I THOUGHT was redundant. Was that OK?
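For comparison, the two rules above could be collapsed onto the wrappers, letting them supply the include paths and MPI libraries. This is only a sketch: `mpif90` is my assumption for the Fortran-aware link step (it is not in the original post), and wrapper names vary between MPI installations.

```make
# Sketch only: the same targets using MPI compiler wrappers instead of
# explicit -I/-L/-l flags. mpif90 is an assumed wrapper name here.
blogs: blogs.o
	mpif90 -g -o blogs blogs.o $(ASCALAPACK) $(ALAPACK) $(ABLAS)

blogs.o: blogs.c
	mpicc -c -DAdd_ -g -Wall -Werror blogs.c
```

A practical upside of the wrappers is that the Makefile keeps working when the MPI installation moves or is upgraded, since the paths live in the wrapper rather than in your build files.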
This is the kind of question that I expect will help people (myself included) learn more! :)
Thanks! :)