Monday, December 23, 2019

Working on an SGE cluster

Using SGE (an example job script is given below):    qconf -help
* check all queue names (-q):                        qconf -sql
* check all parallel environments (-pe):             qconf -spl
* check the allocation rule of a PE:                 qconf -sp mpi

* set up modulefiles: edit the file "/home1/p001cao/.bashrc" and add the line:
module use /home1/p001cao/local/1myModfiles
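
For reference, a minimal SGE job-script sketch that ties the pieces above together (the queue name, PE name, slot count, application, and module name are assumptions for this cluster):

#!/bin/bash
#$ -S /bin/bash
#$ -N test_mpi                # job name
#$ -q all.q                   # queue name (assumed; pick one from `qconf -sql`)
#$ -pe mpi 16                 # parallel environment and slots (assumed; pick one from `qconf -spl`)
#$ -cwd -j y

module load mpi/openmpi4.0.3-intelxe19u5    # assumed module name
mpirun -np $NSLOTS ./my_app                 # $NSLOTS is filled in by SGE from the -pe request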

Ia. OpenMPI-4.0.3 with Intel-xe2019u5 (USC2)

module load intel/compiler-xe19u5
module load compiler/gcc/7.4.0

check: icpc -v

tar xvzf openmpi-4.0.3.tar.gz
cd openmpi-4.0.3
mkdir build_IB
cd build_IB

# install without ucx
../configure CC=icc CXX=icpc FC=ifort F77=ifort \
--with-sge --with-verbs --without-cma --without-ucx \
--prefix=/home1/p001cao/local/app/openmpi/4.0.3-intelxe19u5 
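
The build and install steps are not written out in these notes; the usual sequence applies here and to the UCX and OpenMPI-3.1.5 builds below:
make -j 8
make install
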
# install with ucx:
- install UCX (a build sketch follows the OFED steps below)
- install Mellanox OFED (needed in order to use UCX)
https://docs.mellanox.com/display/MLNXOFEDv451010/Installing+Mellanox+OFED
https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed
tar xvzf MLNX_OFED_LINUX-4.7-3.2.9.0-rhel6.9-x86_64.tgz
./mlnxofedinstall
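
A minimal UCX build sketch, assuming UCX 1.7.0 sources (the version is inferred from the --with-ucx path used below) and the matching install prefix:
tar xvzf ucx-1.7.0.tar.gz
cd ucx-1.7.0
./contrib/configure-release --prefix=/home1/p001cao/local/app/ucx-1.7
make -j 8
make install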

../configure CC=icc CXX=icpc FC=ifort F77=ifort \
--with-sge --with-verbs --without-cma --enable-mca-no-build=btl-uct \
--with-ucx=/home1/p001cao/local/app/ucx-1.7 \
--with-mxm-libdir=/opt/mellanox/mxm/lib \
--prefix=/home1/p001cao/local/app/openmpi/4.0.3-intelxe19u5_ucx
Note:
* --with-verbs: enables InfiniBand via verbs (older versions used --with-openib)
* for OpenMPI 4.0 or later, UCX is used by default; if you do not want to use UCX, you must set:
export OMPI_MCA_btl_openib_allow_ib=1
export OMPI_MCA_btl_openib_if_include="mlx4_0:1"
* UCX does not work here, so far
https://github.com/openucx/ucx/wiki/OpenMPI-and-OpenSHMEM-installation-with-UCX
https://developer.arm.com/tools-and-software/server-and-hpc/help/porting-and-tuning/building-open-mpi-with-openucx/single-page
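
To check that the resulting OpenMPI actually has SGE and UCX support built in, ompi_info can be queried:
ompi_info | grep gridengine        # SGE support shows up as the gridengine component
ompi_info | grep ucx               # a UCX build lists the ucx pml/osc components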

Ib. OpenMPI-3.1.5

module load intel/compiler-xe19u5
module load compiler/gcc/7.4.0

tar xvzf openmpi-3.1.5.tar.gz
cd openmpi-3.1.5
mkdir build
cd build
../configure CC=icc CXX=icpc FC=ifort F77=ifort \
--with-sge --with-verbs  --without-cma \
--prefix=/home1/p001cao/local/app/openmpi/3.1.5-intelxe19u5 

Ic. IMPI

# add these variables to the job script (qbash)
For IB:
export I_MPI_FABRICS=shm:dapl
export I_MPI_DAPL_PROVIDER=ofa-v2-mlx4_0-1

For Ethernet:
export I_MPI_FABRICS=shm:tcp

IMPI-2016:
export I_MPI_FABRICS=shm:dapl
export I_MPI_DAPL_PROVIDER=ofa-v2-ib0
export I_MPI_DYNAMIC_CONNECTION=0

IMPI-2019:
export I_MPI_FABRICS=shm:ofi
export FI_PROVIDER=verbs
export I_MPI_DYNAMIC_CONNECTION=0
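
To confirm which fabric/provider Intel MPI actually selects at run time, enable the standard I_MPI_DEBUG output (the application name here is a placeholder):
export I_MPI_DEBUG=5
mpirun -np 4 ./my_app              # the startup banner reports the selected libfabric provider
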
###################
The Intel® MPI Library switched from the Open Fabrics Alliance* (OFA) framework to the Open Fabrics Interfaces* (OFI) framework and currently supports libfabric*. 
Since IMPI 2019, Intel MPI has discontinued support for the following fabrics, which used to be selectable via I_MPI_FABRICS:
- TCP
- OFA
- DAPL
- TMI
Currently, IMPI 2019 supports only the OFI (intra-/internode) and SHM (intranode) fabrics. OFI is a framework that provides replacements for all previous fabrics. Those replacements are called OFI providers:
- TCP fabric - sockets OFI provider
- OFA and DAPL fabrics - verbs OFI provider
- TMI - psm2 OFI provider
The provider can be specified by FI_PROVIDER='OFI provider name' (e.g. FI_PROVIDER=psm2 to use the Intel OPA fabric; FI_PROVIDER=sockets to use Ethernet (or OPA)). OFI discovers all available hardware and maps it to an appropriate OFI provider (e.g. psm2 - Intel OPA, verbs - IB/OPA/iWARP/RoCE, sockets - Ethernet or OPA/IB over IPoOPA/IPoIB). The user can specify which IP interface (IPoIB or IPoOPA - e.g. ib0; Ethernet - e.g. eth0) the sockets provider should use by setting FI_SOCKETS_IFACE='IP interface name'.
https://software.intel.com/en-us/articles/intel-mpi-library-2019-over-libfabric

all MPI variables:
https://software.intel.com/en-us/mpi-developer-reference-linux-communication-fabrics-control


II. Miniconda (USC2)

download:    
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh

1. install:
bash Miniconda3-latest-Linux-x86_64.sh

choose the install folder: /home1/p001cao/local/miniconda3...
when asked to run conda init, answer NO
2. modules:
set     topdir          /home1/p001cao/local/miniconda3
prepend-path    PATH                    $topdir/bin
prepend-path    LD_LIBRARY_PATH         $topdir/lib
prepend-path    INCLUDE                 $topdir/include
python envs:

module load conda/conda3
conda create -n     py37ompi     python=3.7

set     topdir    /home1/p001cao/local/miniconda3/envs/py37ompi
prepend-path    PATH                    $topdir/bin
prepend-path    LD_LIBRARY_PATH         $topdir/lib
prepend-path    INCLUDE                 $topdir/include
3. install pkgs:
module load conda/conda3
source activate py37ompi

conda install numpy scipy scikit-learn pandas

Voro++, Ovito:
pip install tess ovito

 mpi4py with OpenMPI:
conda search -c intel       mpi4py
conda install -c conda-forge mpi4py=3.0.3=py37hd0bea5a_0

google jax:
module load conda/conda3
source activate  py37                    # use for plumed
pip install jax jaxlib


Monday, December 9, 2019

Origin Templates

https://www.originlab.com/doc/Origin-Help/Graph-Template-Gallery

I. Working with Templates:

I.1. Save a graph as Template:
on Graph
File --> Save Template as

I.2. Use a Template:
on workbook
Plot--> Template Library

II. Working with Themes:

II.1. Save a graph as Theme:
on Graph
Right click --> Save Format as Theme

II.2. Use a Theme:
on Graph
Preferences --> Theme Organizer (F7)

Friday, November 29, 2019

Installing mpi4py & Voro++

I. Install mpi4py

1. on Windows

Note: 
- install Visual Studio Build Tools at this link: http://go.microsoft.com/fwlink/?LinkId=691126&fixForIE=.exe
- download Microsoft MPI v10.1, install both msmpisdk.msi and msmpisetup.exe
https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi
#check: mpiexec -help

1.1a. install mpi4py with conda: 
# for intel MPI 
pip install mpi4py impi

1.1b.  install mpi4py on Python without Conda: (being used)
install a separate Python 3.7, independent of Conda
https://www.python.org/downloads/windows/
pip install mpi4py numpy scipy 
pip search mpi 
pip install impi

# register mpiexec
mpiexec -register
windows user & password
test:       mpiexec -np 2 python -m mpi4py.bench helloworld                       

1.2 Run:

- with anaconda: open Anaconda_Prompt
 $ mpiexec -np 8 python script.py

- without anaconda: open cmd as administrator

 $ mpiexec -np 8 python script.py


2. on Linux

make sure mpi4py links against the right MPI library (OpenMPI, MPICH, or IMPI), then use the matching command to run:
mpirun -np 5 python
# or
mpiexec -np 5 python
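
One quick way to see which MPI library mpi4py is actually linked against (MPI.Get_library_version() is part of the mpi4py API):
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"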

2.1. Install together with conda-mpi:
to choose the right mpi4py build, attach the option [-c CHANNEL] to conda install; available channels include:
    conda-forge
    intel
    bioconda
    anaconda
Note: conda-forge provides mpi4py builds for both OpenMPI and MPICH
conda search -c conda-forge mpi4py     # find package
conda search -c intel       mpi4py   
conda search -c anaconda    mpi4py   

conda install [-c channel] <package_name>=<version>=<build_string>

conda create --name new_name --clone old_name
conda remove --name old_name --all
# for mpi4py with OpenMPI (being used)
module load conda/conda3
conda create  -n  py37ompi python=3.7 scipy numpy scikit-learn

source activate   py37ompi
pip install tess ovito                  # voro++ 

conda install -c conda-forge mpi4py=3.0.3=py37hd0bea5a_0


# for MPICH
conda create --name py37mpich --clone py37ompi
source activate    py37mpich
conda uninstall mpi4py mpi

conda install -c conda-forge mpi4py=3.0.3=py37hcf07815_0  

# for Intel MPI (only supports Python 3.6)
conda create  -n  py36impi python=3.6 scipy numpy scikit-learn
source activate    py36impi 
conda install -c intel mpi4py=3.0.0=py36_intel_0 
pip install tess ovito 

TEST:
mpirun -np 5 python -m mpi4py.bench helloworld
## ------
Hello, World! I am process 0 of 5 on leopard.
Hello, World! I am process 1 of 5 on leopard.
Hello, World! I am process 2 of 5 on leopard.
Hello, World! I am process 3 of 5 on leopard.
Hello, World! I am process 4 of 5 on leopard.

NOTE: there are 3 python envs that include conda-mpi: py36mpi, py37mpi, py27mpi
 - but loading conda-mpi may cause unexpected conflicts with other MPIs, so consider installing mpi4py alone, without conda-mpi --> use pip install

2.2. Install without installing conda-mpi:  (being used)
Note: using conda to install mpi4py also installs conda-mpi, which we cannot control and which may conflict with other MPI installations. To use the OpenMPI we want, use pip install instead (this normally fails to link the MPI compiler with Python 3.6, but works with Python 3.7):
module load conda/conda3

conda create  -n  py37 python=3.7 scipy numpy 


source activate   py37
pip install mpi4py tess

TEST: (this works on CentOS 7)
module load mpi/openmpi4.0.2-Intel2019xeU4
module load conda/py37

mpirun -np 5 python -m mpi4py.bench helloworld
## ------

TEST: (on CentOS 6 --> fails with a glibc error)
module load mpi/openmpi4.0.2-Intel2019xe     
module load conda/py37

mpirun -np 5 python -m mpi4py.bench helloworld

## ------


II. Install Voro++

module load conda3
source activate py37 
pip install     tess                                           #  voro++ library


Ref : https://github.com/abria/TeraStitcher/wiki/Multi-CPU-parallelization-using-MPI-and-Python-scripts
https://oncomputingwell.princeton.edu/2018/11/installing-and-running-mpi4py-on-the-cluster/

Tuesday, November 12, 2019

Pure python & envs using Pip

I. Python+pip on Linux:

1. Need OpenSSL to avoid SSL errors (and point Python's configure at it, e.g. ./configure --with-openssl=<openssl prefix>)

wget https://www.openssl.org/source/openssl-1.1.1g.tar.gz
tar -xf openssl-1.1.1g.tar.gz
cd openssl-1.1.1g

./config --prefix=/home1/p001cao/local/app/tool_dev/openssl \
--openssldir=/home1/p001cao/local/app/tool_dev/openssl
make -j 8 
make install 

2. Need "libffi" to avoid "_ctypes" error
tar -xf libffi-3.3.tar.gz
cd libffi-3.3
./configure --prefix=/home1/p001cao/local/app/tool_dev/libffi-3.3
make -j 8
make install

3. Download Python

export PYTHON_VERSION=3.7.5

curl -O https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz
tar -xvzf Python-${PYTHON_VERSION}.tgz
cd Python-${PYTHON_VERSION}

4. Build and install Python

Configure, make, and install Python:

./configure \
--prefix=/uhome/p001cao/local/app/python/${PYTHON_VERSION} \
--enable-shared \
--enable-ipv6 \
LDFLAGS=-Wl,-rpath=/uhome/p001cao/local/app/python/${PYTHON_VERSION}/lib,--disable-new-dtags

make -j 8 
make install 
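
To make this build pick up the OpenSSL and libffi built in steps 1 and 2, the configure call can be extended along these lines (a sketch: --with-openssl is a Python 3.7+ configure option; the libffi pkg-config path is an assumption about where libffi placed its .pc file):

export PKG_CONFIG_PATH=/home1/p001cao/local/app/tool_dev/libffi-3.3/lib/pkgconfig:$PKG_CONFIG_PATH
./configure \
--prefix=/uhome/p001cao/local/app/python/${PYTHON_VERSION} \
--enable-shared \
--with-openssl=/home1/p001cao/local/app/tool_dev/openssl \
LDFLAGS=-Wl,-rpath=/uhome/p001cao/local/app/python/${PYTHON_VERSION}/lib,--disable-new-dtags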



REF: https://docs.rstudio.com/resources/install-python-source/


create module python3
set topdir /home1/p001cao/local/app/tool_dev/python37
module load openSSL
prepend-path    PATH                    $topdir/bin
prepend-path    INCLUDE         $topdir/include
prepend-path    LD_LIBRARY_PATH         $topdir/lib

Test: 
module load tool_dev/python37
python3 -V


python3 -m pip install --upgrade pip
python3 -m pip install --upgrade pip --trusted-host pypi.org --trusted-host files.pythonhosted.org

II. Setting Up a Virtual Environment

Setting up a programming environment provides us with greater control over our Python projects and over how different versions of packages are handled. 
venv (for Python 3) and virtualenv (for Python 2) allow you to manage separate package installations for different projects. They essentially allow you to create a “virtual” isolated Python installation and install packages into that virtual installation. When you switch projects, you can simply create a new virtual environment and not have to worry about breaking the packages installed in the other environments.
Choose which directory you would like to put your Python programming environments in, or create a new directory inside the Python install directory:

cd  /uhome/p001cao/local/python/Python37
mkdir enviroments
cd  enviroments
module load python/python37
python3 -m venv    py37impi
source py37impi/bin/activate
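
A quick check that the environment is active, and how to leave it (standard venv behaviour):
which python3        # should point into .../enviroments/py37impi/bin
deactivate           # leaves the virtual environment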

create module python/py37impi
set               topdir            /uhome/p001cao/local/python/Python37/enviroments/py37impi
module load openSSL
prepend-path    PATH                    $topdir/bin
prepend-path    LD_LIBRARY_PATH         $topdir/lib
prepend-path    INCLUDE                 $topdir/include

if { [ module-info mode load ] } {
    puts stdout "source $topdir/bin/activate;"
} elseif { [ module-info mode remove ] } {
    puts stdout "deactivate"
}

2. Install packages:

load corresponding envs:
module load python/py37impi

python3 -m pip install --upgrade pip
pip3 install tess ovito numpy scipy matplotlib mpi4py impi




III. Some problems:
SSL:
https://joshspicer.com/python37-ssl-issue

wget https://www.openssl.org/source/openssl-1.0.2q.tar.gz
tar xvf openssl-1.0.2q.tar.gz
cd   openssl-1.0.2q
./config  --prefix=/uhome/p001cao/local/openssl-1.0.2
make
make install
create module openSSL
set               topdir           /uhome/p001cao/local/openssl-1.0.2
prepend-path    PATH                    $topdir/bin
prepend-path    LD_LIBRARY_PATH         $topdir/lib
prepend-path    INCLUDE                 $topdir/include


IV. Install package from source

cd sourceFolder
pip install -e .


Sunday, November 10, 2019

Windows Subsystem for Linux (WSL)

https://www.tegakari.net/en/2020/08/windows-subsystem-for-linux-2wsl2_vol1/

I. Install WSL on Windows

https://docs.microsoft.com/en-us/windows/wsl/install-win10
There are two versions, WSL 1 and WSL 2; use WSL 2 for better performance.

a. Prepare
# Enable both the Windows Subsystem for Linux and the Virtual Machine Platform optional components.
Open PowerShell as an admin and run the following script: (After the script is complete, you need to reboot your machine, since this enables new Windows features.)
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

# Download & Install the Linux kernel update package
# Set WSL 2 as your default version: in Powershell
wsl --set-default-version 2

b. Installing WSL: 
# using Microsoft Store
go to: https://aka.ms/wslstore
and install a Linux distribution
# without the Microsoft Store
https://docs.microsoft.com/en-us/windows/wsl/install-manual
download: Ubuntu_2004.2020.424.0_x64.appx
in powerShell: Add-AppxPackage .\Ubuntu_2004.2020.424.0_x64.appx
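
To confirm the distribution is registered and running under WSL 2 (the list/verbose and set-version options are standard wsl.exe flags):
wsl -l -v                             # lists installed distributions and their WSL version
wsl --set-version Ubuntu-20.04 2      # convert an existing distribution to WSL 2 if needed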


c. Create a Unix Account
Open up a Command Prompt and run the following command:  > bash 

d. Location
On Windows 10, the Linux home directory ("~/") is located at:
## for WSL1
%localappdata%\lxss\home\{username}
C:\Users\{user}\AppData\Local\lxss\{username}
## for WSL2
\\wsl$\Ubuntu-20.04\home\{username}

## Hard drives: cd  /mnt/d/work/



II. Work on WSL

# Login root as default
open cmd:   ubuntu config --default-user root

# Update Ubuntu: sudo apt update && sudo apt upgrade

# delete a folder/file
sudo rm -rf <folder>

# install GCC
sudo apt-get install gcc-10 g++-10 gfortran-10

# install python
# check:   python --version
## set python3 as the default python (similar for python2)
sudo apt install python-is-python3 python-dev python3-dev

1. Install Modules
sudo apt-get install tcl environment-modules

2. install cuda
2a. Install NVIDIA driver for WSL2
https://docs.nvidia.com/cuda/wsl-user-guide/index.html
460.20_gameready_win10-dch_64bit_international.exe

Note: Do not install any Linux display driver in WSL. The Windows Display Driver will install both the regular driver components for native Windows and for WSL support.

2b. Install cuda-toolkit for WSL2
https://docs.nvidia.com/cuda/wsl-user-guide/index.html
# Launch Ubuntu terminal

# First, set up the CUDA network repository. 

sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
sudo sh -c 'echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 /" > /etc/apt/sources.list.d/cuda.list'
sudo apt-get update
# Now install CUDA. Note that for WSL 2, you should use the cuda-toolkit-<version>
#(Do not choose the cuda, cuda-11-0, or cuda-drivers meta-packages under WSL 2)
sudo apt-get install -y cuda-toolkit-11-0
#test  (path: \\wsl$\Ubuntu-20.04\usr\local\cuda-11.1)
nvcc --version # cuda version

NOTE: GPU support works with Windows Insider Preview build 20236 -->
        


3. OpenMPI + gcc +gpu
cd wSourceCode
tar xvf openmpi-4.1.0rc3.tar.gz
cd openmpi-4.1.0rc3
mkdir buildGCC-cuda && cd buildGCC-cuda
##
export myCUDA=/usr/local/cuda-11.0
../configure CC=gcc CXX=g++ FC=gfortran F77=gfortran \
--with-sge --without-verbs --without-ucx --with-cuda=${myCUDA} \
--prefix=/opt/app/openmpi/4.1.0-gcc9.3-cuda11
make
sudo make install

## use,
export PATH=/opt/app/openmpi/4.1.0-gcc9.3-cuda11/bin:$PATH
export LD_LIBRARY_PATH=/opt/app/openmpi/4.1.0-gcc9.3-cuda11/lib:$LD_LIBRARY_PATH
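
To verify the build is CUDA-aware, OpenMPI's documented ompi_info check can be used:
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
# expected: mca:mpi:base:param:mpi_built_with_cuda_support:value:true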

## make Module files (\\wsl$\Ubuntu-20.04\home\tha\1moduleFiles)
# module use /home/tha/1moduleFiles
set topdir /opt/app/openmpi/4.1.0-gcc9.3-cuda11
prepend-path PATH $topdir/bin
prepend-path LD_LIBRARY_PATH $topdir/lib
prepend-path INCLUDE $topdir/include
prepend-path PKG_CONFIG_PATH $topdir/lib/pkgconfig    # this is required

4. Compile lammps
### we have GTX 1060 with Pascal architecture
sudo apt install zlib1g-dev ocl-icd-opencl-dev  pkg-config
sudo apt-get install libblas-dev liblapack-dev libgsl-dev   #for plumed
##
git clone --branch master https://github.com/lammps/lammps.git lammps_master
cd lammps_master
git pull origin master
mkdir build-gpu && cd build-gpu

##
module load cmake-3.18.3
module load ompi/4.1.0-gcc9.3-cuda11
module load cuda-11.2
#--
export PATH=$PATH:/opt/app/openmpi/4.1.0-gcc9.3-cuda11/bin
export CC=mpicc
export CXX=mpic++
export FORTRAN=mpifort
cmake ../cmake -C ../cmake/presets/all_on.cmake \
-DLAMMPS_EXCEPTIONS=yes -DBUILD_MPI=yes -DBUILD_OMP=yes -DLAMMPS_MACHINE=mpi \
-DPKG_USER-OMP=yes -DPKG_USER-INTEL=no -DPKG_KOKKOS=yes \
 -DPKG_GPU=yes -DGPU_API=cuda -D GPU_ARCH=sm_61 \
-DPKG_USER-SMD=yes -DDOWNLOAD_EIGEN3=yes -DDOWNLOAD_VORO=yes \
-DPKG_KIM=no -DDOWNLOAD_KIM=no -DPKG_LATTE=no -DPKG_MSCG=no -DPKG_USER-ATC=no -DPKG_USER-MESONT=no  \
-DPKG_USER-ADIOS=no -DPKG_USER-NETCDF=no -DPKG_USER-QUIP=no -DPKG_USER-SCAFACOS=no \
-DPKG_USER-VTK=no -DPKG_USER-H5MD=no \
-DPKG_USER-PLUMED=yes -DDOWNLOAD_PLUMED=yes \
-DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpic++ -DCMAKE_Fortran_COMPILER=mpifort \
-DCMAKE_INSTALL_PREFIX=/opt/app/lammps/master-gpu
make -j 8
sudo make install
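
A quick smoke test of the installed binary (the name lmp_mpi follows from -DLAMMPS_MACHINE=mpi; the install path is the prefix given above):
export PATH=/opt/app/lammps/master-gpu/bin:$PATH
lmp_mpi -h | head -n 20        # prints the version and the packages compiled in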
####################
# key GPU flags: -DGPU_API=cuda -DGPU_ARCH=sm_61 (sm_61 = Pascal, matching the GTX 1060 noted above)



III. Install Anaconda on WSL:

    wget https://repo.continuum.io/archive/Anaconda3-2019.07-Linux-x86_64.sh
  • Run installation :  bash Anaconda3-2019.07-Linux-x86_64.sh
  • Update conda (optional):   conda update conda

Manually add the Anaconda bin folder to your PATH:

1. open the file "~/.bashrc"  (note: use Vim in Linux; do not open it using Windows Explorer)
(C:\Users\thang\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\home\tha)

2. add:      export PATH=/home/tha/anaconda3/bin:$PATH

Test the installed python:


  • open Sub_Linux_Prompt, and type: which python

/home/tha/anaconda3/bin/python

IV. Install the Xeus-cling Kernel for Jupyter Notebook on WSL

3. Install Xeus-cling

- Assume Conda is installed
open Sub_Linux_Prompt, and type:         conda install xeus-cling notebook -c QuantStack -c conda-forge
Update xeus-cling (optional):    conda update xeus-cling

4. set the BROWSER for WSL:

add the following command to the bottom of my ~/.bashrc file.

vim .bashrc 

export BROWSER='/mnt/c/Program Files (x86)/Google/Chrome/Application/chrome.exe'


NOTE: opening the .bashrc file from Windows may cause a "permission error" on the next launch. To solve it, type:
sudo chmod -R 777 /home/thang/.bashrc

5. Using:

  • open "Sub_Linux_Prompt" from an "arbitrary folder": Shift + Right Click and selecting the Open Linux shell here 
  • type: jupyter notebook
  • copy the last URL in Sub_Linux_Prompt to IE

Ref.:
https://github.com/QuantStack/xeus-cling/blob/master/README.md
https://libinruan.github.io/2018/11/06/Install-Jupyter-s-C-kernel-on-Windows-Subsystem-for-Linux/
https://gist.github.com/kauffmanes/5e74916617f9993bc3479f401dfec7da
https://www.howtoforge.com/vim-basics