Monday, September 6, 2021

install Python from source

 Use this to avoid conflicts with the libgcc packages shipped by conda


Download 

export PYTHON_VERSION=3.7.5

curl -O https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz
tar -xvzf Python-${PYTHON_VERSION}.tgz
cd Python-${PYTHON_VERSION}

Build and install Python

Configure, make, and install Python:

./configure \
  --prefix=/uhome/p001cao/local/app/python/${PYTHON_VERSION} \
  --enable-shared \
  --enable-ipv6 \
  LDFLAGS=-Wl,-rpath=/uhome/p001cao/local/app/python/${PYTHON_VERSION}/lib,--disable-new-dtags

make -j 8 
make install 
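
To use the freshly built interpreter afterwards, something like the following should work (a sketch assuming the prefix above; PYTHON_ROOT is just a convenience variable):

export PYTHON_ROOT=/uhome/p001cao/local/app/python/${PYTHON_VERSION}
export PATH=${PYTHON_ROOT}/bin:$PATH
# the rpath set via LDFLAGS should make the shared libpython findable,
# but LD_LIBRARY_PATH can be set explicitly as a fallback
export LD_LIBRARY_PATH=${PYTHON_ROOT}/lib:$LD_LIBRARY_PATH
python3 --version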



Ref: https://docs.rstudio.com/resources/install-python-source/

Monday, November 9, 2020

Job Scheduler for cluster

The software systems responsible for making these clusters of computers work together are called Distributed Resource Management Systems (DRMS). The most commonly used ones are SGE, PBS/TORQUE, and SLURM. 

PBS command vs SGE commands

http://www.softpanorama.org/HPC/PBS_and_derivatives/Reference/pbs_command_vs_sge_commands.shtml

SGE to SLURM conversion

I. Sun Grid Engine installation on CentOS Server

http://biowiki.org/wiki/index.php/Sun_Grid_Engine

STEP 0: CREATE THE sgeadmin USER
Create a user account named sgeadmin on the head node and all the exec nodes, with the group name also being sgeadmin (the group name itself is probably not important, as long as it is the same on all the nodes). Make sure the user IDs and group IDs for this account are the same across all those nodes (this consistency is very important).
## add group
sudo groupadd sgeadmin
## add user (useradd -p expects an already-hashed password, so set it with passwd instead)
sudo useradd -m sgeadmin
sudo passwd sgeadmin
## add user to the group
sudo usermod -a -G sgeadmin sgeadmin

STEP 1: PREPARE THE FILES
Download https://arc.liv.ac.uk/trac/SGE/
sge-8.0.0a-common.tar.gz   and  sge-8.0.0a-bin-lx-amd64.tar.gz
We will do a local installation on each node, that is, each node with SGE will have its own copy of the SGE binaries and its own local spool directory. This minimizes NFS traffic, since NFS will probably already be used heavily for writing SGE job output to the RAID node and for other things.
Use the same $SGE_ROOT=/opt/sge  on each node


STEP 2: PREPARE THE MASTER/SUBMIT/ADMINISTRATION HOST (master node)
read README.BUILD
# Install libhwloc-dev deb package:
change to root: sudo su -
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install libhwloc-dev 

2.1. Extract the files and build SGE: 
change to root, then untar the 2 files into /opt/sge:
sudo mkdir /opt/sge
tar xvf sge-8.0.0a-common.tar.gz --directory /opt/sge
tar xvf sge-8.0.0a-bin-lx-amd64.tar.gz --directory /opt/sge

cd /home/canlab/wSourceCode/sge-8.1.9/source
sh scripts/bootstrap.sh -no-java -no-jni 
./aimk -no-java -no-jni 


2.2 The Configuration File: SGE provides automated installation scripts that read the options set in your configuration file and perform the installation using them. We are going to use a configuration file based on the template in wSourceCode/sge-8.1.9/source/dist/util/install_modules/inst_template.conf
Make a copy of the template (e.g. tha_configuration.conf) and fill out the options:
SGE_ROOT="/opt/sge"
SGE_JMX_SSL_CLIENT="false"
CELL_NAME="default"
ADMIN_USER=canlab
QMASTER_SPOOL_DIR=$SGE_ROOT/$CELL_NAME/spool/qmaster
EXECD_SPOOL_DIR=$SGE_ROOT/$CELL_NAME/spool
ADMIN_HOST_LIST="canHead"
SUBMIT_HOST_LIST="canHead"
EXEC_HOST_LIST="canHead"
EXECD_SPOOL_DIR_LOCAL="$SGE_ROOT/$CELL_NAME/spool/execd"
ADMIN_MAIL="none"


Install:
Log into the head node as root and add SGE_QMASTER_PORT and SGE_EXECD_PORT (two different ports, matching those set in the configuration file) to your /etc/services file:
 # SUN GRID ENGINE
  sge_qmaster	  6444/tcp	 # for Sun Grid Engine (SGE) qmaster daemon
  sge_execd	  6445/tcp	 # for Sun Grid Engine (SGE) exec daemon
execute the inst_sge script on the head node with the parameters -m (install Master Host, which is also the implied Submit and Administration Host), -x (install Execution Hosts), and -auto (read settings from the configuration file). In our case, this will be:

export SGE_ROOT=/opt/sge
cd /opt/sge
./inst_sge -m -x -auto /opt/sge/util/install_modules/tha_configuration.conf
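
To verify that the qmaster and execd came up, a few standard SGE commands can be used (a quick sanity check, not part of the automated script):

source /opt/sge/default/common/settings.sh   # sets SGE_ROOT, PATH, MANPATH
qhost          # should list the head node as an execution host
qstat -f       # shows the queues and their states
qconf -sconf   # dumps the global cluster configuration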






II. Sun Grid Engine installation on Ubuntu Server

II.1. Try this for SGE: https://tkainrad.dev/posts/copy-paste-ready-instructions-to-set-up-1-node-clusters/
* Gain root permissions on Ubuntu: sudo -i (or just prefix commands with sudo)
* Install Dependencies:
sudo apt-get update -y \
&& sudo apt-get install -y sudo bsd-mailx tcsh db5.3-util libhwloc5 libmunge2 libxm4 libjemalloc1 xterm openjdk-8-jre-headless \
&& sudo apt-get clean \
&& sudo rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

1. create a new folder & download source-code:
sudo mkdir -p /opt/sge/installfolder
export INSTALLFOLDER=/opt/sge/installfolder
##--
cd $INSTALLFOLDER 
sudo wget https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/sge-common_8.1.9_all.deb
sudo wget https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/sge-doc_8.1.9_all.deb
sudo wget https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/sge_8.1.9_amd64.deb
sudo dpkg -i --force-all  ./*.deb

2. Setup files:
download the following 4 files (sge_init.sh, sge_auto_install.conf, sge_hostgrp.conf, sge_exec_host.conf) and place them also into /opt/sge/installfolder:
sudo wget https://tkainrad.dev/other/sge_init.sh

These scripts and configuration files perform the setup automatically. 
# Edit file sge_auto_install.conf
SGE_ROOT="/opt/sge"
SGE_CLUSTER_NAME="docker-sge"
CELL_NAME="default"
# File sge_init.sh sets SGE_HOST and restarts the daemons:
export SGE_HOST=$(cat /opt/sge/default/common/act_qmaster)
/etc/init.d/sgemaster.docker-sge restart
/etc/init.d/sgeexecd.docker-sge restart

#After the download, we need to set some environment variables in the current shell:
export SGE_ROOT=/opt/sge 
export SGE_CELL=default

#We also need to set a new profile.d config via
sudo ln -s $SGE_ROOT/$SGE_CELL/common/settings.sh /etc/profile.d/sge_settings.sh

3. Install
# execute the following to install SGE and perform setup operations:
useradd -r -m -U -G sudo -d /home/sgeuser -s /bin/bash -c "Docker SGE user" sgeuser
cd $SGE_ROOT 
##--
sudo ./inst_sge -m -x -s -auto $INSTALLFOLDER/sge_auto_install.conf \
&& sleep 10 \
&& /etc/init.d/sgemaster.docker-sge restart \
&& /etc/init.d/sgeexecd.docker-sge restart \
&& sed -i "s/HOSTNAME/`hostname`/" $INSTALLFOLDER/sge_exec_host.conf \
&& sed -i "s/HOSTNAME/`hostname`/" $INSTALLFOLDER/sge_hostgrp.conf \
&& /opt/sge/bin/lx-amd64/qconf -Me $INSTALLFOLDER/sge_exec_host.conf 


## Note: to reinstall, we need to delete these files in:  /etc/init.d
sudo rm -r -f sgemaster.docker-sge
sudo rm -r -f sgeexecd.docker-sge
sudo rm -r -f /opt/sge/default 

4. Add users
# we still need to add users to the sgeusers group, which was defined in the sge_hostgrp.conf file you just applied. Only users from this group are allowed to submit jobs. Therefore, we run the following:
/opt/sge/bin/lx-amd64/qconf -au <USER> sgeusers
sudo /opt/sge/bin/lx-amd64/qconf -au canlab sgeusers
/opt/sge/bin/lx-amd64/qconf -au hung sgeusers
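
To confirm that submission works end to end, a trivial test job can be submitted (hypothetical example; -b y submits a binary command directly):

source /opt/sge/default/common/settings.sh
qsub -b y -cwd -N testjob sleep 60     # submit as a user in the sgeusers list
qstat                                  # the job should show up as queued/running
qacct -j testjob                       # accounting record after it finishes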



II.2 Another way:

https://www.socher.org/index.php/Main/HowToInstallSunGridEngineOnUbuntu
https://peteris.rocks/blog/sun-grid-engine-installation-on-ubuntu-server/

1. On Master Node
(install Master Host, which is also the implied Submit and Administration Host)
(install Execution Hosts)
https://gist.github.com/asadharis/9d14da97d9ad1f8eccc36dc14390e4e0
git clone https://gist.github.com/9d14da97d9ad1f8eccc36dc14390e4e0.git sgeSetup/
cd sgeSetup
sudo chmod +x install_sge.sh loop.sh sleep.sh
./install_sge.sh
./loop.sh

2. On worker nodes


3. Uninstall SGE
https://howtoinstall.co/en/ubuntu/xenial/gridengine-master?action=remove
sudo apt-get autoremove --purge gridengine-master


II.3 Configure SGE

https://southgreenplatform.github.io/trainings/hpc/sgeinstallation/



B. PBS / Torque

# RHEL, CentOS, and Scientific Linux: yum install
# Ubuntu: sudo apt-get install

I. PBS on Ubuntu

http://docs.adaptivecomputing.com/torque/5-0-0/Content/topics/torque/1-installConfig/installing.htm

https://pmateusz.github.io/linux/torque/2017/03/25/torque-installation-on-ubuntu.html

http://docs.adaptivecomputing.com/torque/5-1-3/Content/topics/hpcSuiteInstall/manual/1-installing/installingTorque.htm

https://tkainrad.dev/posts/copy-paste-ready-instructions-to-set-up-1-node-clusters/#pbs--torque

1. Install the packages required to build and run TORQUE:
sudo apt-get install libboost-all-dev libssl-dev libxml2-dev tcl8.6-dev tk8.6-dev libhwloc-dev cpuset 

2. Download & install TORQUE 
https://ubuntuforums.org/showthread.php?t=289767
## from github  (in use)
git clone https://github.com/adaptivecomputing/torque.git -b 6.1.1 torque-6.1.1 
cd torque-6.1.1
./autogen.sh

## without github (download the release tarball)
wget --quiet http://wpfilebase.s3.amazonaws.com/torque/torque-6.1.0.tar.gz 
tar -xvzf torque-6.1.0.tar.gz
#############

./configure --prefix=/opt/torque --disable-werror
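
The remaining build and setup steps are not in the original notes; a typical sequence (hedged, following the Adaptive Computing docs linked above) would be:

make -j 4
sudo make install
# register the libraries installed under the non-standard prefix
echo /opt/torque/lib | sudo tee /etc/ld.so.conf.d/torque.conf && sudo ldconfig
# start the authorization daemon, then let torque.setup (in the source tree) create the serverdb and a default queue
sudo /opt/torque/sbin/trqauthd
sudo ./torque.setup root
/opt/torque/bin/qstat -q       # confirm pbs_server is responding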



Thursday, September 24, 2020

Ubuntu cluster

I. Ubuntu server

change to root: sudo su -
sudo apt-get update && sudo apt-get upgrade -y

1a. Create user
https://askubuntu.com/questions/410244/a-command-to-list-all-users-and-how-to-add-delete-modify-users
https://www.cyberciti.biz/faq/create-a-user-account-on-ubuntu-linux/

## add user
  • -s /bin/bash – Set /bin/bash as login shell of the new account
  • -d /home/vivek/ – Set /home/vivek/ as home directory of the new Ubuntu account
  • -m – Create the user’s home directory
  • -G sudo – Make sure vivek user can sudo i.e. give admin access to the new account
sudo useradd -m hung -s /bin/bash
sudo passwd hung
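
Combining all of the flags described above (vivek is the example user from the cited article):

sudo useradd -s /bin/bash -d /home/vivek/ -m -G sudo vivek
sudo passwd vivek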

## add group
sudo groupadd sgeadmin 

## add user to a group
sudo usermod -a -G sgeadmin sgeadmin 

## delete user
kill -15 1358                           # kill the user's running processes first (example PID)
sudo deluser --remove-home sgeadmin
sudo userdel -r hung

##Delete folder/file
sudo rm -r -f /path/

1b. Uninstall app
https://howtoinstall.co/en/ubuntu/xenial/gridengine-master?action=remove
sudo apt-get autoremove --purge environment-modules

1c. Install gcc
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update 
sudo apt install  gcc-10  g++-10  gfortran


1. Install Environment Modules
./configure --prefix=/opt/app/Modules --modulefilesdir=/etc/modulefiles
# or install with sudo apt; the default modulefiles path is /usr/share/modules/modulefiles.  
# https://modules.readthedocs.io/en/latest/INSTALL.html#requirements
sudo apt-get install tcl environment-modules
sudo reboot

## copy the folder containing modulefiles to a public location (as root)
cp -R <source_folder> <destination_folder>
sudo cp -R    /home/canlab/wSourceCode/1moduleFiles/lammps    /usr/share/modules/modulefiles 
sudo cp -R    /home/canlab/wSourceCode/1moduleFiles/ompi   /usr/share/modules/modulefiles 
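
After copying, the modulefiles should be visible and loadable (quick check; the module name here matches the ompi folder above and is reused in the LAMMPS build below):

module avail                              # the lammps and ompi modulefiles should be listed
module load ompi/4.1.0-gcc7.5-cuda10.2
module list
which mpirun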



2. install cuda
https://medium.com/@exesse/cuda-10-1-installation-on-ubuntu-18-04-lts-d04f89287130
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
(default path when installing with sudo apt: /usr/local)
sudo apt install cuda-10-2
# use
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH

#test
nvcc --version                 # cuda version
nvidia-smi                     # nvidia driver version


3. OpenMPI + gcc + gpu
#GCC
sudo add-apt-repository ppa:ubuntu-toolchain-r/test 
sudo apt-get update 
sudo apt install gcc-10 g++-10 gfortran
##
tar xvf openmpi-4.1.0rc3.tar.gz
cd openmpi-4.1.0rc3
mkdir buildGCC-cuda && cd buildGCC-cuda
##
export myCUDA=/usr/local/cuda-10.2
../configure CC=gcc CXX=g++ FC=gfortran F77=gfortran \
--with-sge --without-verbs --without-ucx --with-cuda=${myCUDA} \
--prefix=/opt/app/openmpi/4.1.0-gcc7.5-cuda10.2
sudo make install

## use 
export PATH=/opt/app/openmpi/4.1.0-gcc7.5-cuda10.2/bin:$PATH
export LD_LIBRARY_PATH=/opt/app/openmpi/4.1.0-gcc7.5-cuda10.2/lib:$LD_LIBRARY_PATH
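
A quick sanity check that the build picked up SGE and CUDA support (hedged; the component and parameter names are those reported by ompi_info):

mpirun --version
ompi_info | grep gridengine                                           # SGE (gridengine) support
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value   # should report true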

4. Install cmake (apt only provides 3.10, so build from source)
sudo apt-get install libssl-dev
wget https://cmake.org/files/v3.18/cmake-3.18.3.tar.gz
tar zxvf cmake-3.18.3.tar.gz
cd cmake-3.18.3
./configure --prefix=/opt/app/cmake-3.18.3
make 
sudo make install
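
To use this cmake instead of the system one:

export PATH=/opt/app/cmake-3.18.3/bin:$PATH
cmake --version      # should report 3.18.3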

5. Compile lammps
## GPU arch: https://en.wikipedia.org/wiki/CUDA#GPUs_supported
### we have TITAN RTX with Turing architecture: https://www.nvidia.com/en-us/deep-learning-ai/products/titan-rtx/
## Libs
sudo apt install zlib1g-dev ocl-icd-opencl-dev  pkg-config
sudo apt-get install libblas-dev liblapack-dev libgsl-dev    #for plumed

##
tar -xvf lammps-patch_18Sep2020.tar.gz
cd lammps-patch_18Sep2020
mkdir build-cuda && cd build-cuda
##
module load cmake-3.18.3      
module load ompi/4.1.0-gcc7.5-cuda10.2
module load cuda-10.2
#--
export PATH=$PATH:/opt/app/openmpi/4.1.0-gcc7.5-cuda10.2/bin
export CC=mpicc
export CXX=mpic++
export FC=mpifort
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/anaconda3/lib
cmake ../cmake -C ../cmake/presets/all_on.cmake \
-DLAMMPS_EXCEPTIONS=yes -DBUILD_MPI=yes -DBUILD_OMP=yes -DLAMMPS_MACHINE=mpi \
-DPKG_USER-OMP=yes -DPKG_USER-INTEL=no -DPKG_KOKKOS=yes \
 -DPKG_GPU=yes -DGPU_API=cuda -D GPU_ARCH=sm_75 \
-DPKG_USER-SMD=yes -DDOWNLOAD_EIGEN3=yes -DDOWNLOAD_VORO=yes \
-DPKG_KIM=no -DDOWNLOAD_KIM=no -DPKG_LATTE=no -DPKG_MSCG=no -DPKG_USER-ATC=no -DPKG_USER-MESONT=no  \
-DPKG_USER-ADIOS=no -DPKG_USER-NETCDF=no -DPKG_USER-QUIP=no -DPKG_USER-SCAFACOS=no \
-DPKG_USER-VTK=no -DPKG_USER-H5MD=no \
-DPKG_USER-PLUMED=yes -DDOWNLOAD_PLUMED=yes \
-DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpic++ -DCMAKE_Fortran_COMPILER=mpifort \
-DCMAKE_INSTALL_PREFIX=/opt/app/lammps/master-gpu
sudo make install
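
With -DLAMMPS_MACHINE=mpi the installed binary is named lmp_mpi; a hypothetical GPU run (in.lj is a placeholder input script):

module load ompi/4.1.0-gcc7.5-cuda10.2
export PATH=/opt/app/lammps/master-gpu/bin:$PATH
# -sf gpu applies the gpu suffix to supported styles, -pk gpu 1 uses one GPU per node
mpirun -np 8 lmp_mpi -sf gpu -pk gpu 1 -in in.lj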


Thursday, August 20, 2020

installing Libtorch

https://pytorch.org/cppdocs/installing.html

git clone --branch  master https://github.com/pytorch/pytorch  pytorch-1.6
cd pytorch-1.6
# git checkout v1.6.0         # v1.5.1     
mkdir build
cd build
#--
# module load tool_dev/python37                    # pip3 install pyyaml glog caffe
module load  conda/conda3                             # conda install pyyaml glog caffe
module load mpi/ompi4.0.5-gcc10.2
module load tool_dev/cmake-3.18.0
export PATH=$PATH:/home1/p001cao/local/app/openmpi/4.0.5-gcc10.2/bin
##--
cmake ../  -DUSE_MPI=ON \
-DPYTHON_EXECUTABLE:FILEPATH=/home1/p001cao/local/app/miniconda3/bin/python \
-DCMAKE_PREFIX_PATH=/home1/p001cao/local/app/pytorch-1.6 


make -j 8
make -k check
make install

## configure options
-DCMAKE_CXX_FLAGS="-static-libstdc++" \
-DPYTHON_EXECUTABLE:FILEPATH=/home1/p001cao/local/app/tool_dev/python37/bin/python3


Ref:

Thursday, July 2, 2020

openMPI understanding

https://www.open-mpi.org/doc/v4.0/man1/mpirun.1.php

B1. General run-time tuning

https://tinyurl.com/yabwy5en
MCA: The Modular Component Architecture (MCA) is the backbone for much of Open MPI's functionality. It is a series of frameworks, components, and modules that are assembled at run-time to create an MPI implementation.

MCA parameters are the basic unit of run-time tuning for Open MPI. They are simple "key = value" pairs that are used extensively throughout the code base.

btl option:
export OMPI_MCA_btl_openib_allow_ib=1
export OMPI_MCA_btl=self,vader,openib
export OMPI_MCA_btl=self,vader,uct
export OMPI_MCA_btl=^tcp


B2. Set value of MCA parameters?
https://docs.oracle.com/cd/E19708-01/821-1319-10/mca-params.html
There are 3 ways to set MCA parameters:
1. Command line: the highest-precedence method. The format used on the command line is "--mca <param_name> <value>":
$ mpirun --mca mpi_show_handle_leaks 1 -np 4 a.out
$ mpirun --mca param "value with multiple words" ...

2. Environment variable: OMPI_MCA_<param_name>
$ export OMPI_MCA_mpi_show_handle_leaks=1
$ mpirun -np 4 a.out
3. Aggregate MCA parameter files:
Q11: https://www.open-mpi.org/faq/?category=tuning
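
These aggregate files are plain "key = value" text; for example, a per-user file (the values here are only illustrations):

# $HOME/.openmpi/mca-params.conf
btl = self,vader,tcp
mpi_show_handle_leaks = 1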

B3. Setting MCA Parameters
Q13: https://www.open-mpi.org/faq/?category=tuning
Each MCA framework has a top-level MCA parameter that can be used to include or exclude components from a given run.

# Tell Open MPI to exclude the tcp and openib BTL components and implicitly include all the rest
$ mpirun --mca btl ^tcp,openib ...

# Tell Open MPI to include *only* the components listed here and implicitly ignore all the rest
# (i.e., the loopback, shared memory, and OpenFabrics (a.k.a., "OpenIB") MPI point-to-point components):
$ mpirun --mca btl self,sm,openib ...

Note that ^ can only be the prefix of the entire value because the inclusive and exclusive behaviors are mutually exclusive. Specifically, since the exclusive behavior means "use all components except these", it does not make sense to mix it with the inclusive behavior (i.e., "use only these components"). 
https://www.open-mpi.org/doc/v4.0/man1/mpirun.1.php#sect20
MPI shared memory communications
The vader BTL is a low-latency, high-bandwidth mechanism for transferring data between two processes via shared memory. This BTL can only be used between processes executing on the same node.
Beginning with the v1.8 series, the vader BTL replaces the sm BTL
Shows all the MCA parameters for all BTL components that ompi_info finds.
# to show all MCA parameters: ompi_info --param btl all
$ ompi_info --param btl all --level 9
MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.0.3)

NOTE: the UCX PML is now the preferred method of InfiniBand support in Open MPI 4.0.x (the BTLs are the built-in methods)

Exported Environment Variables

All environment variables that are named in the form OMPI_* will automatically be exported to new processes on the local and remote nodes. Environmental parameters can also be set/forwarded to the new processes using the MCA parameter mca_base_env_list. The -x option to mpirun has been deprecated, but the syntax of the MCA param follows that prior example. While the syntax of the -x option and MCA param allows the definition of new variables, note that the parser for these options is currently not very sophisticated - it does not even understand quoted values. Users are advised to set variables in the environment and use the option to export them, not to define them.

Setting MCA Parameters

The -mca switch allows the passing of parameters to various MCA (Modular Component Architecture) modules. MCA modules have direct impact on MPI programs because they allow tunable parameters to be set at run time (such as which BTL communication device driver to use, what parameters to pass to that BTL, etc.).
The -mca switch takes two arguments: <key> and <value>. The <key> argument generally specifies which MCA module will receive the value. For example, the <key> "btl" is used to select which BTL to be used for transporting MPI messages. The <value> argument is the value that is passed. For example:
mpirun -mca btl tcp,self -np 1 foo
Tells Open MPI to use the "tcp" and "self" BTLs, and to run a single copy of "foo" an allocated node.
mpirun -mca btl self -np 1 foo
Tells Open MPI to use the "self" BTL, and to run a single copy of "foo" an allocated node.
The -mca switch can be used multiple times to specify different <key> and/or <value> arguments. If the same <key> is specified more than once, the <value>s are concatenated with a comma (",") separating them.
Note that the -mca switch is simply a shortcut for setting environment variables. The same effect may be accomplished by setting corresponding environment variables before running mpirun. The form of the environment variables that Open MPI sets is:
OMPI_MCA_<key>=<value>
Thus, the -mca switch overrides any previously set environment variables. The -mca settings similarly override MCA parameters set in the $OPAL_PREFIX/etc/openmpi-mca-params.conf or $HOME/.openmpi/mca-params.conf file.
Unknown <key> arguments are still set as environment variables -- they are not checked (by mpirun) for correctness. Illegal or incorrect <value> arguments may or may not be reported -- it depends on the specific MCA module.
To find the available component types under the MCA architecture, or to find the available parameters for a specific component, use the ompi_info command. See the ompi_info(1) man page for detailed information on the command.


B4.  What is processor affinity? 
Open MPI supports processor affinity on a variety of systems through process binding, in which each MPI process, along with its threads, is "bound" to a specific subset of processing resources (cores, sockets, etc.). 
Affinity can improve performance by inhibiting excessive process movement — for example, away from "hot" caches or NUMA memory. Judicious bindings can improve performance by reducing resource contention (by spreading processes apart from one another) or improving interprocess communications (by placing processes close to one another).
Note that processor affinity probably should not be used when a node is over-subscribed (i.e., more processes are launched than there are processors). 
memory affinity? Simply: some memory will be faster to access (for a given process) than others.
To check whether your system supports processor/memory affinity:
$ ompi_info | grep hwloc
         MCA hwloc: hwloc191 (MCA v2.0, API v2.0, Component v1.8.4)

B5. How to tell Open MPI to use processor and/or memory affinity
Q19 https://www.open-mpi.org/faq/?category=tuning
  • --byslot: Alias for --bycore.
  • --bycore: When laying out processes, put sequential MPI processes on adjacent processor cores. *(Default)*
  • --bysocket: When laying out processes, put sequential MPI processes on adjacent processor sockets.
  • --bynode: When laying out processes, put sequential MPI processes on adjacent nodes.
The use of processor and memory affinity has evolved rapidly across Open MPI versions.
B6. Mapping, Ranking, and Binding: Oh My!
Open MPI employs a three-phase procedure for assigning process locations and ranks:
  • mapping: assigns a default location to each process
  • ranking: assigns an MPI_COMM_WORLD rank value to each process
  • binding: constrains each process to run on specific processors
The mapping step is used to assign a default location to each process based on the mapper being employed. Mapping by slot, node, and sequentially results in the assignment of the processes to the node level. In contrast, mapping by object, allows the mapper to assign the process to an actual object on each node.
Note: the location assigned to the process is independent of where it will be bound - the assignment is used solely as input to the binding algorithm.
The mapping of processes to nodes can be defined not just with general policies but also, if necessary, using arbitrary mappings that cannot be described by a simple policy. One can use the "sequential mapper," which reads the hostfile line by line, assigning processes to nodes in whatever order the hostfile specifies. Use the -mca rmaps seq option. For example, using the same hostfile as before:
mpirun -hostfile myhostfile -mca rmaps seq ./a.out
will launch three processes, one on each of nodes aa, bb, and cc, respectively. The slot counts don’t matter; one process is launched per line on whatever node is listed on the line.
Another way to specify arbitrary mappings is with a rankfile, which gives you detailed control over process binding as well. Rankfiles are discussed below.
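
A rankfile looks like the following (hostnames aa/bb/cc follow the example above; the format is from the mpirun man page):

$ cat myrankfile
rank 0=aa slot=1:0-2
rank 1=bb slot=0:0,1
rank 2=cc slot=1-2
$ mpirun -H aa,bb,cc -np 3 --rankfile myrankfile ./a.out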
The second phase focuses on the ranking of the process within the job's MPI_COMM_WORLD. Open MPI separates this from the mapping procedure to allow more flexibility in the relative placement of MPI processes. This is best illustrated by considering the following two cases where we used the --map-by ppr:2:socket option:

                      node aa        node bb
rank-by core          0 1 ! 2 3      4 5 ! 6 7
rank-by socket        0 2 ! 1 3      4 6 ! 5 7
rank-by socket:span   0 4 ! 1 5      2 6 ! 3 7
Ranking by core and by slot provide the identical result - a simple progression of MPI_COMM_WORLD ranks across each node. Ranking by socket does a round-robin ranking within each node until all processes have been assigned an MCW rank, and then progresses to the next node. Adding the span modifier to the ranking directive causes the ranking algorithm to treat the entire allocation as a single entity - thus, the MCW ranks are assigned across all sockets before circling back around to the beginning.
The binding phase actually binds each process to a given set of processors. This can improve performance if the operating system is placing processes suboptimally. For example, it might oversubscribe some multi-core processor sockets, leaving other sockets idle; this can lead processes to contend unnecessarily for common resources. Or, it might spread processes out too widely; this can be suboptimal if application performance is sensitive to interprocess communication costs. Binding can also keep the operating system from migrating processes excessively, regardless of how optimally those processes were placed to begin with.
The processors to be used for binding can be identified in terms of topological groupings - e.g., binding to an l3cache will bind each process to all processors within the scope of a single L3 cache within their assigned location. Thus, if a process is assigned by the mapper to a certain socket, then a --bind-to l3cache directive will cause the process to be bound to the processors that share a single L3 cache within that socket.
Alternatively, processes can be assigned to processors based on their local rank on a node using the --bind-to cpu-list:ordered option with an associated --cpu-list "0,2,5". In this example, the first process on a node will be bound to cpu 0, the second process on the node will be bound to cpu 2, and the third process on the node will be bound to cpu 5. --bind-to will also accept cpulist:ordered as a synonym to cpu-list:ordered. Note that an error will result if more processes are assigned to a node than cpus are provided.
To help balance loads, the binding directive uses a round-robin method when binding to levels lower than used in the mapper. For example, consider the case where a job is mapped to the socket level, and then bound to core. Each socket will have multiple cores, so if multiple processes are mapped to a given socket, the binding algorithm will assign each process located to a socket to a unique core in a round-robin manner.
Alternatively, processes mapped by l2cache and then bound to socket will simply be bound to all the processors in the socket where they are located. In this manner, users can exert detailed control over relative MCW rank location and binding.
Finally, --report-bindings can be used to report bindings.
As an example, consider a node with two processor sockets, each comprising four cores. We run mpirun with -np 4 --report-bindings and the following additional options:
% mpirun ... --map-by core --bind-to core
[...] ... binding child [...,0] to cpus 0001
[...] ... binding child [...,1] to cpus 0002
[...] ... binding child [...,2] to cpus 0004
[...] ... binding child [...,3] to cpus 0008

% mpirun ... --map-by socket --bind-to socket
[...] ... binding child [...,0] to socket 0 cpus 000f
[...] ... binding child [...,1] to socket 1 cpus 00f0
[...] ... binding child [...,2] to socket 0 cpus 000f
[...] ... binding child [...,3] to socket 1 cpus 00f0

% mpirun ... --map-by core:PE=2 --bind-to core
[...] ... binding child [...,0] to cpus 0003
[...] ... binding child [...,1] to cpus 000c
[...] ... binding child [...,2] to cpus 0030
[...] ... binding child [...,3] to cpus 00c0

% mpirun ... --bind-to none
Here, --report-bindings shows the binding of each process as a mask. In the first case, the processes bind to successive cores as indicated by the masks 0001, 0002, 0004, and 0008. In the second case, processes bind to all cores on successive sockets as indicated by the masks 000f and 00f0. The processes cycle through the processor sockets in a round-robin fashion as many times as are needed. In the third case, the masks show us that 2 cores have been bound per process. In the fourth case, binding is turned off and no bindings are reported.
Open MPI’s support for process binding depends on the underlying operating system. Therefore, certain process binding options may not be available on every system.
Process binding can also be set with MCA parameters. Their usage is less convenient than that of mpirun options. On the other hand, MCA parameters can be set not only on the mpirun command line, but alternatively in a system or user mca-params.conf file or as environment variables, as described in the MCA section below. Some examples include:
mpirun option        MCA parameter key             value
--map-by core        rmaps_base_mapping_policy     core
--map-by socket      rmaps_base_mapping_policy     socket
--rank-by core       rmaps_base_ranking_policy     core
--bind-to core       hwloc_base_binding_policy     core
--bind-to socket     hwloc_base_binding_policy     socket
--bind-to none       hwloc_base_binding_policy     none
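
For example, the following two invocations should be equivalent (a.out and the process count are placeholders):

mpirun --bind-to core -np 4 ./a.out
# same effect via an environment variable
export OMPI_MCA_hwloc_base_binding_policy=core
mpirun -np 4 ./a.out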

Portable Hardware Locality(HWLOC) (included in OpenMPI)


The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs.

MPI InfiniBand, RoCE, and iWARP communications

https://www.open-mpi.org/faq/?category=openfabrics#ib-components

support for high-speed interconnect networks

https://www.open-mpi.org/faq/?category=openfabrics#run-ucx
https://www.open-mpi.org/faq/?category=building#build-p2p

Compare transport mechanism: https://blogs.cisco.com/performance/the-vader-shared-memory-transport-in-open-mpi-now-featuring-3-flavors-of-zero-copy


II. OMPI_MCA_btl

ompi_info --param btl all
MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.1.0)
MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.1.0)
MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.1.0)
MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.1.0)

What is the vader BTL?

The vader BTL is a low-latency, high-bandwidth mechanism for transferring data between two processes via shared memory. This BTL can only be used between processes executing on the same node.

Beginning with the v1.8 series, the vader BTL replaces the sm BTL


III. Using UCX with OpenMPI

http://openucx.github.io/ucx/faq.html
#1.a. See all available transports of OMPI:
module load openmpi
ompi_info |grep btl
MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.0.3) 
MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.0.3) 
MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.0.3)
#1.b. See all available transports/Devices of UCX:
module load openmpi
ucx_info -d
# Transport: tcp
#   Device: eth0
#      capabilities:
#            bandwidth: 113.16 MB/sec
#   Device: eth1
#      capabilities:
#            bandwidth: 113.16 MB/sec
#   Device: ib0
#      capabilities:
#            bandwidth: 4457.00 MB/sec
# Transport: self
#   Device: self
#      capabilities:
#            bandwidth: 6911.00 MB/sec
# Transport: mm
#   Device: sysv
#      capabilities:
#            bandwidth: 12179.00 MB/sec
#   Device: posix
#      capabilities:
#            bandwidth: 12179.00 MB/sec
#   Transport: ud   (or ud_verbs)
#   Device: mlx4_0:1
#      capabilities:
#            bandwidth: 3774.15 MB/sec

#   Transport: rc   (or rc_verbs)
#   Device: mlx4_0:1
#      capabilities:
#            bandwidth: 3774.15 MB/sec
#   Transport: cm
#   Device: mlx4_0:1
#      capabilities:
#            bandwidth: 2985.42 MB/sec

#   Transport: knem
#   Device: knem
#      capabilities:
#            bandwidth: 13862.00 MB/sec

2.a. Force to use UCX

export OMPI_MCA_btl=^vader,tcp,openib,uct
export OMPI_MCA_pml=ucx

2.b. Choose a specific transport/device

export UCX_TLS=self,mm,knem,sm,ud,rc,tcp
export UCX_NET_DEVICES=mlx4_0:1,ib0
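
Putting it together, a hypothetical run that forces the UCX PML and restricts the transports/devices (executable and process count are placeholders):

export OMPI_MCA_pml=ucx
export UCX_TLS=rc,sm,self
export UCX_NET_DEVICES=mlx4_0:1
mpirun -np 16 ./a.out
# or forward the variables explicitly on the command line
mpirun --mca pml ucx -x UCX_TLS -x UCX_NET_DEVICES -np 16 ./a.out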