4. Speos HPC Using GPU Resources

Speos HPC simulations are not limited to CPU resources: a simulation can also run on GPU resources thanks to CUDA cores, and better performance results have been achieved using GPUs.

Currently, a GPU simulation requires 32 optishpc capabilities per GPU. For example, running on a node with 4 GPUs requires 4 × 32 = 128 optishpc capabilities.

4.1. GPU Requirements

NVIDIA Quadro P6000 graphics adapter with a 525.89.02 NVIDIA driver.

Only NVIDIA GPU graphics adapters are supported.
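
To verify that a node's graphics adapter and driver meet these requirements, you can, for example, query them with the nvidia-smi utility (a minimal sketch, assuming nvidia-smi is available on the node):

# Print the adapter model and the installed NVIDIA driver version
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader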


Caution:  Be aware that the RAM needed for your simulation depends on your project. For instance, a project with a large number of high-resolution sensors will require substantial RAM.


4.2. GPU Limitations

For more information on GPU limitations, refer to GPU Simulation Limitations.

4.3. Configuring the Script Files

In the context of GPU simulations, the script files are the same as those used for CPU simulations. You only need to add the -gpu switch to the command line in the script, as illustrated below.
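
For illustration, here is the Linux simulation command line used in the scripts below, first without and then with the -gpu switch (the -threads option is dropped in the GPU variant because it is ignored in GPU mode):

# CPU simulation command line
mpiexec.hydra -launcher ssh -np $(NODES) "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" $(PARAM) $(LANG) -threads 50 -mpi IntelMPI

# Same command line adapted for a GPU simulation: add the -gpu switch
mpiexec.hydra -launcher ssh -np $(NODES) "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" $(PARAM) $(LANG) -mpi IntelMPI -gpu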

4.3.1. Configuring the Script Files for Speos HPC on Linux

On the Linux platform, two scripts can incorporate the -gpu switch:

  • RunMySimulation Script (for PBS and SLURM)

  • CheckMySimulation Script (for PBS and SLURM)

Warning:  Speos HPC cannot target a specific number of GPU adapters; it uses all GPU adapters available on the node.

4.3.1.1. PBS – RunMySimulation Script

If you want to run a simulation using the GPU of the cluster, you can add the -gpu switch to the command line as shown in the following script:


Important:  You need to modify the script by replacing LICENSE_SERVER with your license server and vXXX with the version you are using.



Note:  If you need to adjust the number of GPU devices used by Speos HPC when running a simulation, you can add the following command line (it appears as an optional, commented-out line in the script below):

export CUDA_VISIBLE_DEVICES="0,1,2"

0,1,2 signifies that GPU devices 0, 1 and 2 (that is, three GPU devices) will be used to run the simulation.


#!/bin/bash
#PBS -N "$(JOBNAME)"
#PBS -l walltime=00:$(WALLCLOCK):00
#PBS -q main
#PBS -o "$(FILE_OUTPUT)"
#PBS -e "$(FILE_ERROR)"
#PBS -l select=$(NODES):mpiprocs=1
#PBS -l place=scatter


# Load the user environment
export ANSYSLMD_LICENSE_FILE=1055@LICENSE_SERVER
export TMPDIR=/tmp
export PBS_O_PATH=$PATH
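
# Optional: limit the GPU devices visible to Speos HPC (see the note above).
# Uncomment and adjust the device indices as needed.
#export CUDA_VISIBLE_DEVICES="0,1,2"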

source /ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/env/vars.sh

###############################
# SPEOSHPC command line options
###############################

# Distribute the SPEOSHPC simulation across the $(NODES) requested nodes.
mpiexec.hydra -launcher ssh -np $(NODES) "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" $(PARAM) $(LANG) -threads 50 -mpi IntelMPI -gpu
# NOTE: the -gpu switch in the command line above makes the simulation run on the GPUs of the cluster.
# NOTE: the -threads option is ignored when the -gpu switch is used.


# Print scheduler, MPI, glibc and OS version information for diagnostics
qstat --version
mpiexec -V | head -n 1
ldd --version | head -n 1
cat /etc/*release* | head -n 4

4.3.1.2. SLURM – RunMySimulation Script

If you want to run a simulation using the GPU of the cluster, you can add the -gpu switch to the command line as shown in the following script:


Important:  You need to modify the script by replacing LICENSE_SERVER with your license server and vXXX with the version you are using.



Note:  If you need to adjust the number of GPU devices used by Speos HPC when running a simulation, you can add the following command line (it appears as an optional, commented-out line in the script below):

export CUDA_VISIBLE_DEVICES="0,1,2"

0,1,2 signifies that GPU devices 0, 1 and 2 (that is, three GPU devices) will be used to run the simulation.


#!/bin/bash
#SBATCH -o "$(FILE_OUTPUT)"
#SBATCH -e "$(FILE_ERROR)"
#SBATCH -J "$(JOBNAME)"
#SBATCH -n $(NODES)
# spread the tasks evenly among the nodes
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive
#SBATCH -t 00:$(WALLCLOCK):00

# Load the Intel redistributables

export ANSYSLMD_LICENSE_FILE=LICENSE_SERVER
export TMPDIR="/tmp"
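
# Optional: limit the GPU devices visible to Speos HPC (see the note above).
# Uncomment and adjust the device indices as needed.
#export CUDA_VISIBLE_DEVICES="0,1,2"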

###############################
# SPEOSHPC command line options
###############################
	
. /ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/env/vars.sh

# Build a comma-separated list of the job's node addresses from scontrol output
hostlist=$(scontrol show node ${SLURM_JOB_NODELIST} | awk '/NodeAddr=/ {print $1}' | cut -f2 -d= | paste -sd ',' -)

mpiexec.hydra -launcher ssh -hosts ${hostlist} -ppn 1 "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" $(PARAM) $(LANG) -threads 64 -mpi IntelMPI -gpu
# NOTE: the -gpu switch in the command line above makes the simulation run on the GPUs of the cluster.
# NOTE: the -threads option is ignored when the -gpu switch is used.


# Print scheduler, MPI, glibc, OS and locale information for diagnostics
sinfo --version
mpiexec -V | head -n 1
ldd --version | head -n 1
cat /etc/*release* | grep "PRETTY"
locale | head -n 1

4.3.1.3. PBS – CheckMySimulation Script

If you want to run a simulation using the GPU of the cluster, you can add the -gpu switch to the command line as shown in the following script:


Important:  You need to modify the script by replacing LICENSE_SERVER with your license server and vXXX with the version you are using.


#!/bin/bash
#PBS -N "$(JOBNAME)"
#PBS -l walltime=00:$(WALLCLOCK):00
#PBS -q main
#PBS -o "$(FILE_OUTPUT)"
#PBS -e "$(FILE_ERROR)"
#PBS -l select=1:mpiprocs=1
#PBS -l place=scatter


# Load the user environment
export ANSYSLMD_LICENSE_FILE=1055@LICENSE_SERVER
export TMPDIR=/tmp
export PBS_O_PATH=$PATH

source /ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/env/vars.sh

###############################
# SPEOSHPC command line options
###############################

# Run the SPEOSHPC check on a single node.
mpiexec.hydra -launcher ssh -np 1 "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" $(PARAM) $(LANG) -threads 50 -mpi IntelMPI -gpu
# NOTE: the -gpu switch in the command line above makes the check run on the GPUs of the cluster.
# NOTE: the -threads option is ignored when the -gpu switch is used.



# Print scheduler, MPI, glibc, OS and locale information for diagnostics
qstat --version
mpiexec -V | head -n 1
ldd --version | head -n 1
cat /etc/*release* | head -n 4
locale | head -n 1

4.3.1.4. SLURM – CheckMySimulation Script

If you want to run a simulation using the GPU of the cluster, you can add the -gpu switch to the command line as shown in the following script:


Important:  You need to modify the script by replacing LICENSE_SERVER with your license server and vXXX with the version you are using.


#!/bin/bash
#SBATCH -o "$(FILE_OUTPUT)"
#SBATCH -e "$(FILE_ERROR)"
#SBATCH -J "$(JOBNAME)"
#SBATCH -n 1
# spread the tasks evenly among the nodes
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive
#SBATCH -t 00:$(WALLCLOCK):00

# Load the Intel redistributables

export ANSYSLMD_LICENSE_FILE=LICENSE_SERVER
export TMPDIR="/tmp"

###############################
# SPEOSHPC command line options
###############################

. /ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/env/vars.sh

# Build a comma-separated list of the job's node addresses from scontrol output
hostlist=$(scontrol show node ${SLURM_JOB_NODELIST} | awk '/NodeAddr=/ {print $1}' | cut -f2 -d= | paste -sd ',' -)

"/ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/bin/mpiexec.hydra" -launcher ssh -hosts ${hostlist} -ppn 1 "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" $(PARAM) $(LANG) -threads 64 -mpi IntelMPI -gpu
# NOTE: the -gpu switch in the command line above makes the check run on the GPUs of the cluster.
# NOTE: the -threads option is ignored when the -gpu switch is used.


# Print scheduler, MPI, glibc, OS and locale information for diagnostics
sinfo --version
mpiexec -V | head -n 1
ldd --version | head -n 1
cat /etc/*release* | grep "PRETTY"
locale | head -n 1

4.3.2. Configuring the Script Files for Speos HPC on Windows

On the Windows platform, two scripts can incorporate the -gpu switch:

  • RunMySimulation Script

  • CheckMySimulation Script

Warning:  Speos HPC cannot target a specific number of GPU adapters; it uses all GPU adapters available on the node.

4.3.2.1. RunMySimulation Script

If you want to run a simulation using the GPU of the cluster, you can add the -gpu switch to the command line as shown in the following script:


Important:  You need to modify the script by replacing LICENSE_SERVER with your license server.


echo off
chcp 65001
job submit /scheduler:"SchedulerName" /jobtemplate:default /jobname:"$(JOBNAME)" /stdout:"$(FILE_OUTPUT)" /stderr:"$(FILE_ERROR)" /numnodes:$(NODES) /env:ANSYSLMD_LICENSE_FILE=1055@LICENSE_SERVER mpiexec -genvlist CCP_CLUSTER_NAME,CCP_JOBID,ANSYSLMD_LICENSE_FILE -hosts %%CCP_NODES%% "$(SPEOSHPC_EXE)" -sv5 "$(SV5PATH)" $(PARAM) $(LANG) -gpu

if %ERRORLEVEL% NEQ 0 (
	echo Submission error - error code %ERRORLEVEL%
	echo > "$(FILE_RUNKO)"
	)

4.3.2.2. CheckMySimulation Script

If you want to run a simulation using the GPU of the cluster, you can add the -gpu switch to the command line as shown in the following script:


Important:  You need to modify the script by replacing LICENSE_SERVER with your license server.


echo off
chcp 65001
job submit /scheduler:"SchedulerName" /jobtemplate:default /jobname:CheckSimulation /stdout:"$(FILE_OUTPUT)" /stderr:"$(FILE_ERROR)" /numnodes:1 mpiexec -genvlist CCP_CLUSTER_NAME,CCP_JOBID -n 1 "$(SPEOSHPC_EXE)" -sv5 "$(SV5PATH)" -check $(LANG) -gpu
	
if %ERRORLEVEL% NEQ 0 (
	echo Submission error - error code %ERRORLEVEL%
	echo > "$(FILE_CHECKKO)"
	)