Wednesday, March 25, 2020

Hybrid OpenMP+MPI

Hybrid OpenMP+MPI Programs

Of course, you can have hybrid code mixing MPI and OpenMP primitives: MPI distributes the work across nodes, while OpenMP threads share memory within each node.
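As a minimal sketch of such a program (the file name hello_hybrid.c is just an assumption for the examples below), each MPI rank spawns an OpenMP team and reports its rank and thread IDs:

    /* hello_hybrid.c -- minimal hybrid MPI+OpenMP sketch */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int provided, rank, size;
        /* MPI_THREAD_FUNNELED: only the master thread will make MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        #pragma omp parallel
        {
            printf("Hello from thread %d/%d of MPI rank %d/%d\n",
                   omp_get_thread_num(), omp_get_num_threads(), rank, size);
        }

        MPI_Finalize();
        return 0;
    }

Building and launching such a program takes a few extra steps: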
  • You need to compile the code with the -qopenmp flag (Intel compilers, as used with Intel MPI) or -fopenmp (GCC, as used by the other MPI suites); see the compile sketch after this list
  • You need to set the OMP_NUM_THREADS environment variable to the number of OpenMP threads you want per MPI process
  • (Slurm only): you need to set -c <N> (or --cpus-per-task <N>) to the number of OpenMP threads you wish to use per MPI process (see the Slurm launcher sketch after this list)
  • (OAR only): you have to take the following elements into account (see the OAR launch sketch after this list):
    • You need to accurately compute the number of MPI processes per node <PPN> (in addition to the total number of MPI processes <N>) and pass it to mpirun
      • OpenMPI: mpirun -npernode <PPN> -np <N>
      • Intel MPI: mpirun -perhost <PPN> -np <N>
      • MVAPICH2: mpirun -ppn <PPN> -np <N>
    • You need to ensure the OMP_NUM_THREADS environment variable is exported (propagated) to all the nodes
    • (Intel MPI only) you probably want to set I_MPI_PIN_DOMAIN=omp, so that each MPI process is pinned to a domain of OMP_NUM_THREADS cores
    • (MVAPICH2 only) you probably want to set MV2_ENABLE_AFFINITY=0, since MVAPICH2's default affinity binds each MPI process (and thus all its OpenMP threads) to a single core
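To compile, the OpenMP flag depends on the compiler behind the MPI wrapper; a sketch, assuming the standard mpiicc/mpicc wrappers:

    # Intel toolchain (Intel MPI with the Intel compiler):
    mpiicc -qopenmp hello_hybrid.c -o hello_hybrid

    # GCC-based toolchains (OpenMPI, MVAPICH2, or Intel MPI wrapping gcc):
    mpicc -fopenmp hello_hybrid.c -o hello_hybrid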
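A hedged Slurm launcher sketch; the geometry (2 nodes, 2 MPI processes per node, 14 OpenMP threads per process, i.e. 28-core nodes) is an assumption for illustration:

    #!/bin/bash -l
    #SBATCH --nodes=2               # 2 nodes (assumption)
    #SBATCH --ntasks-per-node=2     # 2 MPI processes per node
    #SBATCH -c 14                   # 14 OpenMP threads per MPI process
    #SBATCH --time=00:10:00

    # Derive OMP_NUM_THREADS from the -c value so the two always agree
    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
    srun ./hello_hybrid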
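Under OAR, a sketch assuming the same 2-nodes x 2-processes x 14-threads geometry (so <PPN>=2 and <N>=4); $OAR_NODEFILE is the machine file OAR provides:

    export OMP_NUM_THREADS=14

    # OpenMPI: -x re-exports OMP_NUM_THREADS to the remote nodes
    mpirun -x OMP_NUM_THREADS -hostfile $OAR_NODEFILE \
           -npernode 2 -np 4 ./hello_hybrid

    # Intel MPI: -genv propagates the variable; pin each rank to an OpenMP-sized domain
    mpirun -f $OAR_NODEFILE -genv OMP_NUM_THREADS 14 -genv I_MPI_PIN_DOMAIN omp \
           -perhost 2 -np 4 ./hello_hybrid

    # MVAPICH2: disable the default one-core affinity so the OpenMP threads can spread
    mpirun -f $OAR_NODEFILE -genv OMP_NUM_THREADS 14 -genv MV2_ENABLE_AFFINITY 0 \
           -ppn 2 -np 4 ./hello_hybrid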
https://ulhpc-tutorials.readthedocs.io/en/latest/parallel/basics/
