Thursday, July 2, 2020

Open MPI understanding

https://www.open-mpi.org/doc/v4.0/man1/mpirun.1.php

B1. General run-time tuning

https://tinyurl.com/yabwy5en
MCA: The Modular Component Architecture (MCA) is the backbone for much of Open MPI's functionality. It is a series of frameworks, components, and modules that are assembled at run-time to create an MPI implementation.

MCA parameters are the basic unit of run-time tuning for Open MPI. They are simple "key = value" pairs that are used extensively throughout the code base.

BTL options (alternative example settings):
export OMPI_MCA_btl_openib_allow_ib=1
export OMPI_MCA_btl=self,vader,openib
export OMPI_MCA_btl=self,vader,uct
export OMPI_MCA_btl=^tcp


B2. How to set the value of MCA parameters?
https://docs.oracle.com/cd/E19708-01/821-1319-10/mca-params.html
There are 3 ways to set MCA parameters:
1. Command line: The highest-precedence method is setting MCA parameters on the command line; the format is "--mca <param_name> <value>"
$ mpirun --mca mpi_show_handle_leaks 1 -np 4 a.out
$ mpirun --mca param "value with multiple words" ...

2. Environment variable: OMPI_MCA_<param_name>
$ export OMPI_MCA_mpi_show_handle_leaks=1
$ mpirun -np 4 a.out
3. Aggregate MCA parameter files: lowest-precedence settings read from files such as $HOME/.openmpi/mca-params.conf (see the example below):
Q11: https://www.open-mpi.org/faq/?category=tuning
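A minimal sketch of such a file, using the user-level location $HOME/.openmpi/mca-params.conf that the mpirun man page mentions; entries are the same "key = value" pairs described above:
# contents of $HOME/.openmpi/mca-params.conf
btl = self,vader,tcp
mpi_show_handle_leaks = 1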

B3. Setting MCA Parameters
Q13: https://www.open-mpi.org/faq/?category=tuning
Each MCA framework has a top-level MCA parameter that can be used to include or exclude 
components from a given run.
# Tell Open MPI to exclude the tcp and openib BTL components and implicitly include all the rest
$ mpirun --mca btl ^tcp,openib ...
# Tell Open MPI to include *only* the components listed here and implicitly ignore all the rest
# (i.e., the loopback, shared memory, and OpenFabrics (a.k.a., "OpenIB") MPI point-to-point components):
$ mpirun --mca btl self,sm,openib ...
Note that ^ can only be the prefix of the entire value because the inclusive and exclusive 
behavior are mutually exclusive. Specifically, since the exclusive behavior means "use all 
components except these", it does not make sense to mix it with the inclusive behavior of 
not specifying it (i.e., "use all of these components"). 
https://www.open-mpi.org/doc/v4.0/man1/mpirun.1.php#sect20
MPI shared memory communications
The vader BTL is a low-latency, high-bandwidth mechanism for transferring data between two processes via shared memory. This BTL can only be used between processes executing on the same node.
Beginning with the v1.8 series, the vader BTL replaces the sm BTL
To show all the MCA parameters for all BTL components that ompi_info finds:
$ ompi_info --param btl all
$ ompi_info --param btl all --level 9
MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.0.3)

NOTE: the UCX PML is now the preferred method of InfiniBand support in Open MPI 4.0.x (the BTLs are the older, built-in transport methods)

Exported Environment Variables

All environment variables that are named in the form OMPI_* will automatically be exported to new processes on the local and remote nodes. Environmental parameters can also be set/forwarded to the new processes using the MCA parameter mca_base_env_list. The -x option to mpirun has been deprecated, but the syntax of the MCA param follows that prior example. While the syntax of the -x option and MCA param allows the definition of new variables, note that the parser for these options are currently not very sophisticated - it does not even understand quoted values. Users are advised to set variables in the environment and use the option to export them; not to define them.
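A minimal sketch of forwarding a variable at launch (MY_APP_THREADS is a hypothetical variable used only for illustration):
$ export MY_APP_THREADS=4
$ mpirun -x MY_APP_THREADS -np 4 a.out
$ mpirun --mca mca_base_env_list "MY_APP_THREADS" -np 4 a.out
The first mpirun uses the deprecated -x option; the second uses the mca_base_env_list MCA parameter described above. In both cases the variable is set in the environment first and only exported, as the man page advises.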

Setting MCA Parameters

The -mca switch allows the passing of parameters to various MCA (Modular Component Architecture) modules. MCA modules have direct impact on MPI programs because they allow tunable parameters to be set at run time (such as which BTL communication device driver to use, what parameters to pass to that BTL, etc.).
The -mca switch takes two arguments: <key> and <value>. The <key> argument generally specifies which MCA module will receive the value. For example, the <key> "btl" is used to select which BTL to be used for transporting MPI messages. The <value> argument is the value that is passed. For example:
mpirun -mca btl tcp,self -np 1 foo
Tells Open MPI to use the "tcp" and "self" BTLs, and to run a single copy of "foo" an allocated node.
mpirun -mca btl self -np 1 foo
Tells Open MPI to use the "self" BTL, and to run a single copy of "foo" an allocated node.
The -mca switch can be used multiple times to specify different <key> and/or <value> arguments. If the same <key> is specified more than once, the <value>s are concatenated with a comma (",") separating them.
Note that the -mca switch is simply a shortcut for setting environment variables. The same effect may be accomplished by setting corresponding environment variables before running mpirun. The form of the environment variables that Open MPI sets is:
OMPI_MCA_<key>=<value>
Thus, the -mca switch overrides any previously set environment variables. The -mca settings similarly override MCA parameters set in the $OPAL_PREFIX/etc/openmpi-mca-params.conf or $HOME/.openmpi/mca-params.conf file.
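For example, the following two invocations select the same BTLs (a sketch reusing the btl key from above):
$ mpirun --mca btl tcp,self -np 4 a.out
$ export OMPI_MCA_btl=tcp,self
$ mpirun -np 4 a.out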
Unknown <key> arguments are still set as environment variables -- they are not checked (by mpirun) for correctness. Illegal or incorrect <value> arguments may or may not be reported -- it depends on the specific MCA module.
To find the available component types under the MCA architecture, or to find the available parameters for a specific component, use the ompi_info command. See the ompi_info(1) man page for detailed information on the command.


B4.  What is processor affinity? 
Open MPI supports processor affinity on a variety of systems through process binding, in which each MPI process, along with its threads, is "bound" to a specific subset of processing resources (cores, sockets, etc.). 
Affinity can improve performance by inhibiting excessive process movement — for example, away from "hot" caches or NUMA memory. Judicious bindings can improve performance by reducing resource contention (by spreading processes apart from one another) or improving interprocess communications (by placing processes close to one another).
Note that processor affinity probably should not be used when a node is over-subscribed (i.e., more processes are launched than there are processors). 
What is memory affinity? Simply: some memory will be faster to access (for a given process) than other memory.
To check whether your system supports processor/memory affinity:
$ ompi_info | grep hwloc
         MCA hwloc: hwloc191 (MCA v2.0, API v2.0, Component v1.8.4)

B5. How to tell Open MPI to use processor and/or memory affinity
Q19 https://www.open-mpi.org/faq/?category=tuning
  • --byslot: Alias for --bycore.
  • --bycore: When laying out processes, put sequential MPI processes on adjacent processor cores. *(Default)*
  • --bysocket: When laying out processes, put sequential MPI processes on adjacent processor sockets.
  • --bynode: When laying out processes, put sequential MPI processes on adjacent nodes.
The use of processor and memory affinity options evolved rapidly across Open MPI release series; see the FAQ entry above (Q19) for which options each version supports. On current releases the same placement is expressed with --map-by and --bind-to, as in the sketch below.
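A minimal sketch for Open MPI 4.x, roughly equivalent to the older --bysocket layout (using the --map-by/--bind-to options detailed in B6 below):
$ mpirun --map-by socket --bind-to socket -np 8 a.out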
B6. Mapping, Ranking, and Binding: Oh My!
Open MPI employs a three-phase procedure for assigning process locations and ranks:
mapping: assigns a default location to each process
ranking: assigns an MPI_COMM_WORLD rank value to each process
binding: constrains each process to run on specific processors
The mapping step is used to assign a default location to each process based on the mapper being employed. Mapping by slot, node, and sequentially results in the assignment of the processes to the node level. In contrast, mapping by object allows the mapper to assign the process to an actual object on each node.
Note: the location assigned to the process is independent of where it will be bound - the assignment is used solely as input to the binding algorithm.
The mapping of processes to nodes can be defined not just with general policies but also, if necessary, using arbitrary mappings that cannot be described by a simple policy. One can use the "sequential mapper," which reads the hostfile line by line, assigning processes to nodes in whatever order the hostfile specifies. Use the -mca rmaps seq option. For example, using the same hostfile as before:
mpirun -hostfile myhostfile -mca rmaps seq ./a.out
will launch three processes, one on each of nodes aa, bb, and cc, respectively. The slot counts don’t matter; one process is launched per line on whatever node is listed on the line.
Another way to specify arbitrary mappings is with a rankfile, which gives you detailed control over process binding as well. Rankfiles are discussed below.
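A sketch of what such a rankfile looks like (the hostnames aa/bb/cc and the slot syntax follow the mpirun man page's rankfile example; the socket:core numbers are illustrative):
$ cat myrankfile
rank 0=aa slot=1:0-2
rank 1=bb slot=0:0,1
rank 2=cc slot=1-2
$ mpirun -H aa,bb,cc -np 3 -rf myrankfile ./a.out
Here rank 0 runs on node aa bound to logical socket 1, cores 0-2; rank 1 on node bb bound to logical socket 0, cores 0 and 1; rank 2 on node cc bound to logical cores 1 and 2.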
The second phase focuses on the ranking of the process within the job’s MPI_COMM_WORLD. Open MPI separates this from the mapping procedure to allow more flexibility in the relative placement of MPI processes. This is best illustrated by considering the following two cases where we used the --map-by ppr:2:socket option:
                        node aa        node bb
rank-by core            0 1 ! 2 3      4 5 ! 6 7
rank-by socket          0 2 ! 1 3      4 6 ! 5 7
rank-by socket:span     0 4 ! 1 5      2 6 ! 3 7
Ranking by core and by slot provide the identical result - a simple progression of MPI_COMM_WORLD ranks across each node. Ranking by socket does a round-robin ranking within each node until all processes have been assigned an MCW rank, and then progresses to the next node. Adding the span modifier to the ranking directive causes the ranking algorithm to treat the entire allocation as a single entity - thus, the MCW ranks are assigned across all sockets before circling back around to the beginning.
The binding phase actually binds each process to a given set of processors. This can improve performance if the operating system is placing processes suboptimally. For example, it might oversubscribe some multi-core processor sockets, leaving other sockets idle; this can lead processes to contend unnecessarily for common resources. Or, it might spread processes out too widely; this can be suboptimal if application performance is sensitive to interprocess communication costs. Binding can also keep the operating system from migrating processes excessively, regardless of how optimally those processes were placed to begin with.
The processors to be used for binding can be identified in terms of topological groupings - e.g., binding to an l3cache will bind each process to all processors within the scope of a single L3 cache within their assigned location. Thus, if a process is assigned by the mapper to a certain socket, then a --bind-to l3cache directive will cause the process to be bound to the processors that share a single L3 cache within that socket.
Alternatively, processes can be assigned to processors based on their local rank on a node using the --bind-to cpu-list:ordered option with an associated --cpu-list "0,2,5". In this example, the first process on a node will be bound to cpu 0, the second process on the node will be bound to cpu 2, and the third process on the node will be bound to cpu 5. --bind-to will also accept cpulist:ordered as a synonym to cpu-list:ordered. Note that an error will result if more processes are assigned to a node than cpus are provided.
To help balance loads, the binding directive uses a round-robin method when binding to levels lower than used in the mapper. For example, consider the case where a job is mapped to the socket level, and then bound to core. Each socket will have multiple cores, so if multiple processes are mapped to a given socket, the binding algorithm will assign each process located to a socket to a unique core in a round-robin manner.
Alternatively, processes mapped by l2cache and then bound to socket will simply be bound to all the processors in the socket where they are located. In this manner, users can exert detailed control over relative MCW rank location and binding.
Finally, --report-bindings can be used to report bindings.
As an example, consider a node with two processor sockets, each comprising four cores. We run mpirun with -np 4 --report-bindings and the following additional options:
% mpirun ... --map-by core --bind-to core
[...] ... binding child [...,0] to cpus 0001
[...] ... binding child [...,1] to cpus 0002
[...] ... binding child [...,2] to cpus 0004
[...] ... binding child [...,3] to cpus 0008

% mpirun ... --map-by socket --bind-to socket
[...] ... binding child [...,0] to socket 0 cpus 000f
[...] ... binding child [...,1] to socket 1 cpus 00f0
[...] ... binding child [...,2] to socket 0 cpus 000f
[...] ... binding child [...,3] to socket 1 cpus 00f0

% mpirun ... --map-by core:PE=2 --bind-to core
[...] ... binding child [...,0] to cpus 0003
[...] ... binding child [...,1] to cpus 000c
[...] ... binding child [...,2] to cpus 0030
[...] ... binding child [...,3] to cpus 00c0
% mpirun ... --bind-to none
Here, --report-bindings shows the binding of each process as a mask. In the first case, the processes bind to successive cores as indicated by the masks 0001, 0002, 0004, and 0008. In the second case, processes bind to all cores on successive sockets as indicated by the masks 000f and 00f0. The processes cycle through the processor sockets in a round-robin fashion as many times as are needed. In the third case, the masks show us that 2 cores have been bound per process. In the fourth case, binding is turned off and no bindings are reported.
Open MPI’s support for process binding depends on the underlying operating system. Therefore, certain process binding options may not be available on every system.
Process binding can also be set with MCA parameters. Their usage is less convenient than that of mpirun options. On the other hand, MCA parameters can be set not only on the mpirun command line, but alternatively in a system or user mca-params.conf file or as environment variables, as described in the MCA section below. Some examples include:
mpirun option        MCA parameter key              value
--map-by core        rmaps_base_mapping_policy      core
--map-by socket      rmaps_base_mapping_policy      socket
--rank-by core       rmaps_base_ranking_policy      core
--bind-to core       hwloc_base_binding_policy      core
--bind-to socket     hwloc_base_binding_policy      socket
--bind-to none       hwloc_base_binding_policy      none
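For example, the policies in the table can be set with the OMPI_MCA_ environment-variable convention described earlier (a sketch):
$ export OMPI_MCA_rmaps_base_mapping_policy=socket
$ export OMPI_MCA_hwloc_base_binding_policy=core
$ mpirun -np 8 a.out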

Portable Hardware Locality (hwloc) (included in Open MPI)


The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs.
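If the standalone hwloc command-line tools are installed on the system (an assumption; they are packaged separately from Open MPI's embedded hwloc), the node topology can be inspected directly:
$ hwloc-ls
hwloc-ls prints the NUMA nodes, packages/sockets, caches, cores, and hardware threads (PUs) that the mapping and binding options above operate on.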

MPI InfiniBand, RoCE, and iWARP communications

https://www.open-mpi.org/faq/?category=openfabrics#ib-components

support for high-speed interconnect networks

https://www.open-mpi.org/faq/?category=openfabrics#run-ucx
https://www.open-mpi.org/faq/?category=building#build-p2p

Compare transport mechanisms: https://blogs.cisco.com/performance/the-vader-shared-memory-transport-in-open-mpi-now-featuring-3-flavors-of-zero-copy


II. OMPI_MCA_btl

$ ompi_info --param btl all
MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.1.0)
MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.1.0)
MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.1.0)
MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.1.0)

What is the vader BTL?

The vader BTL is a low-latency, high-bandwidth mechanism for transferring data between two processes via shared memory. This BTL can only be used between processes executing on the same node.

Beginning with the v1.8 series, the vader BTL replaces the sm BTL
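For a purely on-node run, shared memory plus the self loopback is sufficient; a minimal sketch:
$ mpirun -np 4 --mca btl self,vader ./a.out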


III. Using UCX with Open MPI

http://openucx.github.io/ucx/faq.html
#1.a. See all available BTL transports of Open MPI:
module load openmpi
ompi_info | grep btl
MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.0.3) 
MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.0.3) 
MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.0.3)
#1.b. See all available transports/Devices of UCX:
module load openmpi
ucx_info -d
# Transport: tcp
#   Device: eth0
#      capabilities:
#            bandwidth: 113.16 MB/sec
#   Device: eth1
#      capabilities:
#            bandwidth: 113.16 MB/sec
#   Device: ib0
#      capabilities:
#            bandwidth: 4457.00 MB/sec
# Transport: self
#   Device: self
#      capabilities:
#            bandwidth: 6911.00 MB/sec
# Transport: mm
#   Device: sysv
#      capabilities:
#            bandwidth: 12179.00 MB/sec
#   Device: posix
#      capabilities:
#            bandwidth: 12179.00 MB/sec
#   Transport: ud   (or ud_verbs)
#   Device: mlx4_0:1
#      capabilities:
#            bandwidth: 3774.15 MB/sec

#   Transport: rc   (or rc_verbs)
#   Device: mlx4_0:1
#      capabilities:
#            bandwidth: 3774.15 MB/sec
#   Transport: cm
#   Device: mlx4_0:1
#      capabilities:
#            bandwidth: 2985.42 MB/sec

#   Transport: knem
#   Device: knem
#      capabilities:
#            bandwidth: 13862.00 MB/sec

2.a. Force the use of UCX

export OMPI_MCA_btl=^vader,tcp,openib,uct
export OMPI_MCA_pml=ucx
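With the PML forced to UCX, a run can also be expressed entirely on the command line (a sketch; it assumes this Open MPI build was configured with UCX support):
$ mpirun -np 4 --mca pml ucx --mca btl ^vader,tcp,openib,uct ./a.out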

2.b. Choose a specific transport/device

export UCX_TLS=self,mm,knem,sm,ud,rc,tcp
export UCX_NET_DEVICES=mlx4_0:1,ib0
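These UCX variables can also be forwarded per run instead of being exported globally, using the -x forwarding shown earlier, for example:
$ mpirun -np 2 --mca pml ucx -x UCX_TLS=rc,self,sm -x UCX_NET_DEVICES=mlx4_0:1 ./a.out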
