Which MPI implementation?
MVAPICH2 (MPI-3 over InfiniBand) is an MPI-3 implementation based on the MPICH ADI3 layer.
wget http://mvapich.cse.ohio-state.edu/download/mvapich/mv2/mvapich2-2.3.2.tar.gz
tar -xzf mvapich2-2.3.2.tar.gz
module load conda/py37mvapichSupp
cd mvapich2-2.3.2
./autogen.sh   # regenerates the autotools files; needs the autoconf/automake installed in the supporting environment below
I. install MVAPICH2 + GCC (USC)
1. Supporting tools (conda environment):
module load conda/conda3
conda create -n py37mvapichSupp python=3.7
source activate py37mvapichSupp
conda install autoconf automake
conda install -c sas-institute libnuma
conda install -c conda-forge lld=9.0.1 binutils # lld & gold linker
#-- add to the modulefile of this environment, so pkg-config can find its libraries:
prepend-path PKG_CONFIG_PATH $topdir/lib/pkgconfig
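A minimal sketch of what that modulefile might look like, written as a shell snippet that creates it (the environment location under /home1/p001cao/local/app/anaconda3 and the ~/modulefiles directory are assumptions; adjust them to where your conda3 actually installs environments):

mkdir -p ~/modulefiles/conda
cat > ~/modulefiles/conda/py37mvapichSupp << 'EOF'
#%Module1.0
## conda/py37mvapichSupp -- build tools for MVAPICH2 (paths assumed, not verified)
set topdir /home1/p001cao/local/app/anaconda3/envs/py37mvapichSupp
prepend-path PATH            $topdir/bin
prepend-path LD_LIBRARY_PATH $topdir/lib
prepend-path PKG_CONFIG_PATH $topdir/lib/pkgconfig
EOF
module use ~/modulefiles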
2. Configuration
#2.2. USC 2:
module load compiler/gcc-9.2.0
module load conda/py37mvapichSupp # to use gold linker or lld linker
./configure CC=gcc CXX=g++ FC=gfortran F77=gfortran LDFLAGS="-fuse-ld=gold" \
    --with-device=ch3:mrail --with-rdma=gen2 --enable-hybrid \
    --prefix=/home1/p001cao/local/app/mvapich2/2.3.2-gcc9.2.0
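After configure succeeds, the build and install steps are the usual MPICH-style ones; a sketch (the -j count is arbitrary, and mpiname is the version utility shipped with MVAPICH2):

make -j 8 2>&1 | tee make.log
make install
# sanity-check the freshly installed wrappers under the --prefix above
/home1/p001cao/local/app/mvapich2/2.3.2-gcc9.2.0/bin/mpiname -a    # prints version and configure options
/home1/p001cao/local/app/mvapich2/2.3.2-gcc9.2.0/bin/mpicc -show   # shows the underlying gcc command line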
3. Grid Engine (SGE) tight integration:
http://gridscheduler.sourceforge.net/howto/mvapich/MVAPICH_Integration.html
The example job script 'mvapich.sh' starts the 'xhpl' program. Note that an MPI job has to start 'mpirun_rsh' with the option "-np $NSLOTS" so that it runs with the correct number of slots ($NSLOTS is set by Grid Engine).
To tell it where to start the MPI tasks, pass "-hostfile $TMPDIR/machines" as the second argument.
Additionally, for tight integration remember to use "-rsh", and optionally "-nowd" to prevent MVAPICH2 from doing 'cd $wd' on the remote hosts.
This leaves SGE in charge of the working directory.
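A minimal sketch of such a 'mvapich.sh' job script following these rules (the parallel environment name, slot count, and location of the xhpl binary are assumptions, not taken from the howto):

#!/bin/bash
#$ -N xhpl_mvapich
#$ -pe mvapich2 16      # hypothetical PE name and slot count
#$ -cwd
# -rsh   : launch through rsh, which SGE wraps for tight integration
# -nowd  : do not 'cd $wd' on the remote hosts; SGE keeps control of the working directory
# -np    : number of tasks, taken from the slot count granted by Grid Engine
mpirun_rsh -rsh -nowd -np $NSLOTS -hostfile $TMPDIR/machines ./xhpl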