2. Speos HPC on Linux

After ensuring that the cluster meets the system prerequisites for distributed computing, and that Speos HPC installation is complete, proceed to the configuration of the Speos HPC cluster environment.


Note:  The cluster must be configured by an Information Technician.


2.1. Testing the MPI Program

Once Speos HPC installation is complete, perform a communication test to verify that the MPI software is set up correctly.

  1. Log on to the Head Node of the cluster.

  2. Load the MPI Library in your environment.

  3. Launch ./SPEOSHPC.x.

    • If SPEOSHPC returns a version, the configuration is correct.

    • If you get an error when loading the shared library, check that the redists_SPEOSHPC_Intel folder is located in the same folder as SPEOSHPC.x and that the MPI Library is correctly loaded in your environment.
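
For example, a minimal test sequence, assuming the Intel MPI runtime shipped with the Ansys installation (vXXX stands for your release, as in the script templates later in this section):

# Load the Intel MPI environment shipped with the Ansys installation
source /ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/env/vars.sh

# Launch the Speos HPC executable from its installation folder
# (illustrative path); it should print its version if MPI is set up correctly.
cd /path/to/SpeosHPC
./SPEOSHPC.x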

2.2. Speos HPC on Linux using SPEOS Core

To run your simulation in a Linux environment using the SPEOS Core interface:

  1. Download and install a terminal emulator and an SFTP client to be able to communicate with the cluster machine.

  2. Create script files to be stored in a shared folder on the cluster. These script files will be used as templates by SPEOS Core users.

  3. Launch SPEOS Core and configure the cluster parameters. To configure the cluster parameters, see Accessing a Linux Cluster from Windows in the Speos HPC User’s Guide.

2.2.1. Accessing a Linux Cluster from Windows

If you are accessing the Speos HPC cluster for the first time, download and install PuTTY and WinSCP.

  • PuTTY allows you to connect to a Linux machine from a Windows machine, using a secured connection, and emulates a Linux console.

  • WinSCP is an SFTP client for Windows that allows you to securely transfer files from a local machine to a remote machine.

Configuring PuTTY
  1. Ask your Information Technician for the login access of the cluster Head Node (Host Name or IP address, and username).

  2. In the PuTTY Configuration window, in the Session category, specify the Head Node Host Name (or IP address) of the cluster.

    If the connection type that the Information Technician configured is SSH, you may require Public and Private Keys to access the Head Node.

  3. In the Connection category, in the Data sub-category, specify the Auto-login username.

  4. In the Session category, in the Saved Sessions field, name the session and click Save.

Configuring WinSCP

Once you have configured PuTTY, configure WinSCP to transfer the files between the Head Node and your computer.

  1. In the Login - WinSCP window, click Tools and select Import Sites.

  2. Select Import from PuTTY.

  3. Select the site corresponding to the cluster.

    The site corresponds to the session you configured on PuTTY.

  4. Click OK.

2.2.2. Configuring the Script Files

The following simulation script template files must be configured to use the Speos HPC interface:

  • 1 script to check the simulation: CheckMySimulation

  • 1 script to run the simulation: RunMySimulation

    The CheckMySimulation and RunMySimulation scripts contain variables from the provided list (see Speos HPC Variables) that are automatically replaced by the parameters defined in the Speos HPC interface.

  • 1 script to submit the CheckMySimulation script to the scheduler: SubmitCheckMySimulation

  • 1 script to submit the RunMySimulation script to the scheduler: SubmitRunMySimulation

  • 1 script to control the running simulation or the simulation being checked: RunCommandMySimulation

  • 1 script to cancel the simulation job: CancelMySimulation

    These scripts also contain variables from the provided list that are automatically replaced by the parameters defined in the Speos HPC interface.

  • 1 script to retrieve the job ID: ParseJobID (useful for the SLURM scheduler)

These scripts are unique to each cluster. They must be placed in a shared folder with read, write, and execute access rights for all computers and users on the domain.
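
For example, a possible way to publish the templates from the Head Node (all paths are illustrative):

# Create the shared scripts folder and grant read, write and execute
# rights to everyone. Folder and template locations are illustrative.
mkdir -p /shared/speos_hpc/scripts
cp /home/user/templates/*.sh /shared/speos_hpc/scripts/
chmod -R a+rwx /shared/speos_hpc/scripts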

To help you create the scripts, the following sections provide script template examples:


Note:  The examples provided are written for PBS and SLURM schedulers. The scripts will depend on your scheduler.


2.2.2.1. Speos HPC Variables

The following table presents the variables you can use to create the simulation script templates.

Table 3: List of Variables

Variable             | Description                                                       | Can be used in
$(SPEOSHPC_EXE)      | Full path to the Speos HPC executable.                            | CheckMySimulation, RunMySimulation, RunCommandMySimulation
$(SV5PATH)           | Full path to the exported simulation system.                      | CheckMySimulation, RunMySimulation, RunCommandMySimulation
$(FILE_OUTPUT)       | Full path to the scheduler output file.                           | CheckMySimulation, RunMySimulation
$(FILE_ERROR)        | Full path to the scheduler error file.                            | CheckMySimulation, RunMySimulation
$(FILE_CHECKKO)      | Full path to the file named CHECKKO.                              | CheckMySimulation
$(FILE_RUNKO)        | Full path to the file named RUNKO.                                | RunMySimulation
$(LANG)              | String corresponding to the code page converter.                  | CheckMySimulation, RunMySimulation
$(JOBNAME)           | Job name.                                                         | RunMySimulation
$(JOB_ID)            | Job identifier.                                                   | CancelMySimulation
$(JOB_ID_PATH)       | Path to the file that will contain the job ID at submission time. | SubmitCheckMySimulation, SubmitRunMySimulation
$(NODES)             | Number of nodes used by the simulation.                           | RunMySimulation
$(WALLCLOCK)         | Maximum job time.                                                 | RunMySimulation
$(PARAM)             | String containing the simulation parameters.                      | RunMySimulation
$(SPEOSHPC_CMD)      | Control command options (-merge, -stop 1, ...).                   | RunCommandMySimulation
$(PATH_CHECK_SCRIPT) | Full path to the check script.                                    | SubmitCheckMySimulation
$(PATH_RUN_SCRIPT)   | Full path to the run script.                                      | SubmitRunMySimulation
$(MAIL_ADDRESS)      | User email address.                                               | SubmitCheckMySimulation, SubmitRunMySimulation

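For example, if the Speos HPC interface defines the job name as MyStudy (an illustrative value), substitution turns a template line into a concrete scheduler directive:

#SBATCH -J "$(JOBNAME)"   # line as stored in the template
#SBATCH -J "MyStudy"      # same line after substitution by the Speos HPC interface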

2.2.2.2. Command Line Switches

Table 4: MPI Command Line Switches

MPI Command Line Switch | Description
-launcher               | Defines the connection type used to launch mpiexec.hydra.
-hosts                  | Defines the list of hosts that will run the jobs.
-mpi                    | Defines which Message Passing Interface to use (example: IntelMPI, OpenMPI).
-n (or -np)             | Defines the total number of processes. Note: Speos HPC only uses one process.
-ppn                    | Defines the number of processes per node. Note: Speos HPC only uses one process per node.

For more information, refer to the Intel MPI documentation, or enter the following command in the Linux terminal: mpiexec -help.

Table 5: Speos Command Line Switches

Speos Command Line Switch | Description
-speos                    | Defines the input data to be computed by Speos HPC.
-threads                  | Defines the number of threads per node on which to run the job.
-gpu                      | Allows you to run the simulation using the GPU resources of the cluster.
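
For example, a complete launch line combining the MPI and Speos switches could look as follows (host names and thread count are illustrative):

mpiexec.hydra -launcher ssh -hosts node01,node02 -ppn 1 "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" -threads 32 -mpi IntelMPI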

2.2.2.3. RunMySimulation Script

The RunMySimulation.sh script is the template of the job submission script for the scheduler. It contains the scheduler directives, a call to load the MPI environment, and the Speos HPC launch command.

In the following template example:

  • We want SPEOSHPC to use one task per node, which is why we use (for PBS and SLURM respectively):

    #PBS -l select=$(NODES):mpiprocs=1

    #PBS -l place=scatter

    and

    #SBATCH --ntasks-per-node=1

    #SBATCH --exclusive

  • If you want to run a simulation using the GPU resources of the cluster, add the -gpu switch as shown in the script below.

  • The -threads option limits the number of threads per node. If you want to use the maximum number of threads available on your machines, you can use -threads 999.

2.2.2.3.1. PBS – RunMySimulation Script
#!/bin/bash
#PBS -N "$(JOBNAME)"
#PBS -l walltime=00:$(WALLCLOCK):00
#PBS -q main
#PBS -o "$(FILE_OUTPUT)"
#PBS -e "$(FILE_ERROR)"
#PBS -l select=$(NODES):mpiprocs=1
#PBS -l place=scatter


# Load the user environment
export ANSYSLMD_LICENSE_FILE=1055@LICENSE_SERVER
export TMPDIR=/tmp
export PBS_O_PATH=$PATH

source /ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/env/vars.sh

###############################
# SPEOSHPC command line options
###############################

# Distribute the SPEOSHPC simulation on $(NODES) nodes, one process per node.
mpiexec.hydra -launcher ssh -np $(NODES) "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" $(PARAM) $(LANG) -threads 50 -mpi IntelMPI -gpu
# NOTE: the -gpu switch runs the simulation using the GPU resources of the cluster.
# NOTE: the -threads option is ignored when the -gpu switch is used.


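# Print scheduler, MPI, glibc, OS and locale versions to the job output for troubleshooting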
qstat --version
mpiexec -V | head -n 1
ldd --version | head -n 1
cat /etc/*release* | head -n 4
locale | head -n 1
2.2.2.3.2. SLURM – RunMySimulation Script
#!/bin/bash
#SBATCH -o "$(FILE_OUTPUT)"
#SBATCH -e "$(FILE_ERROR)"
#SBATCH -J "$(JOBNAME)"
#SBATCH -n $(NODES)
# spread the tasks evenly among the nodes
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive
#SBATCH -t 00:$(WALLCLOCK):00

# Load the Intel redistributables

export ANSYSLMD_LICENSE_FILE=LICENSE_SERVER
export TMPDIR="/tmp"

###############################
# SPEOSHPC command line options
###############################
	
. /ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/env/vars.sh

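# Build a comma-separated list of the node addresses allocated to this job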
hostlist=$(scontrol show node ${SLURM_JOB_NODELIST} | awk '/NodeAddr=/ {print $1}' | cut -f2 -d= | paste -sd ',' -)

mpiexec.hydra -launcher ssh -hosts ${hostlist} -ppn 1 "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" $(PARAM) $(LANG) -threads 64 -mpi IntelMPI -gpu
# NOTE: the -gpu switch runs the simulation using the GPU resources of the cluster.
# NOTE: the -threads option is ignored when the -gpu switch is used.


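# Print scheduler, MPI, glibc, OS and locale versions to the job output for troubleshooting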
sinfo --version
mpiexec -V | head -n 1
ldd --version | head -n 1
cat /etc/*release* | grep "PRETTY"
locale | head -n 1
2.2.2.4. SubmitRunMySimulation Script

The SubmitRunMySimulation.sh script contains the command that submits the RunMySimulation script to the cluster scheduler.


2.2.2.4.1. PBS – SubmitRunMySimulation Script
# Submit job according to scheduler
qsub "$(PATH_RUN_SCRIPT)" > $(JOB_ID_PATH)
2.2.2.4.2. SLURM – SubmitRunMySimulation Script
# Submit job according to scheduler
sbatch "$(PATH_RUN_SCRIPT)" > $(JOB_ID_PATH)
2.2.2.5. CheckMySimulation Script

The CheckMySimulation.sh script is the checking version of the RunMySimulation.sh script. The only differences are that CheckMySimulation.sh must run on a single node and must activate the -check Speos HPC command, which initializes and then stops the simulation before it runs.
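
As a minimal sketch, the launch line of a check script could look as follows (this assumes -check is passed like the other Speos switches; in the templates below, the check option typically arrives through the $(PARAM) variable set by the Speos HPC interface):

# Hypothetical single-node check: -check initializes the simulation
# and stops it before the actual run.
mpiexec.hydra -launcher ssh -np 1 "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" -check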

2.2.2.5.1. PBS – CheckMySimulation Script
#!/bin/bash
#PBS -N "$(JOBNAME)"
#PBS -l walltime=00:$(WALLCLOCK):00
#PBS -q main
#PBS -o "$(FILE_OUTPUT)"
#PBS -e "$(FILE_ERROR)"
#PBS -l select=1:mpiprocs=1
#PBS -l place=scatter


# Load the user environment
export ANSYSLMD_LICENSE_FILE=1055@LICENSE_SERVER
export TMPDIR=/tmp
export PBS_O_PATH=$PATH

source /ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/env/vars.sh

###############################
# SPEOSHPC command line options
###############################

# Run the SPEOSHPC check on a single node.
mpiexec.hydra -launcher ssh -np 1 "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" $(PARAM) $(LANG) -threads 50 -mpi IntelMPI -gpu
# NOTE: the -gpu switch runs the simulation using the GPU resources of the cluster.
# NOTE: the -threads option is ignored when the -gpu switch is used.



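# Print scheduler, MPI, glibc, OS and locale versions to the job output for troubleshooting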
qstat --version
mpiexec -V | head -n 1
ldd --version | head -n 1
cat /etc/*release* | head -n 4
locale | head -n 1
2.2.2.5.2. SLURM – CheckMySimulation Script
#!/bin/bash
#SBATCH -o "$(FILE_OUTPUT)"
#SBATCH -e "$(FILE_ERROR)"
#SBATCH -J "$(JOBNAME)"
#SBATCH -n 1
# spread the tasks evenly among the nodes
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive
#SBATCH -t 00:$(WALLCLOCK):00

# Load the Intel redistributables

export ANSYSLMD_LICENSE_FILE=LICENSE_SERVER
export TMPDIR="/tmp"

###############################
# SPEOSHPC command line options
###############################

. /ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/env/vars.sh

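# Build a comma-separated list of the node addresses allocated to this job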
hostlist=$(scontrol show node ${SLURM_JOB_NODELIST} | awk '/NodeAddr=/ {print $1}' | cut -f2 -d= | paste -sd ',' -)

"/ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/bin/mpiexec.hydra" -launcher ssh -machinefile hosts -ppn 1 "$(SPEOSHPC_EXE)" -speos "$(SV5PATH)" $(PARAM) $(LANG) -threads 64 -mpi IntelMPI -gpu
# NOTE: the -gpu switch runs the simulation using the GPU resources of the cluster.
# NOTE: the -threads option is ignored when the -gpu switch is used.


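# Print scheduler, MPI, glibc, OS and locale versions to the job output for troubleshooting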
sinfo --version
mpiexec -V | head -n 1
ldd --version | head -n 1
cat /etc/*release* | grep "PRETTY"
locale | head -n 1
2.2.2.6. SubmitCheckMySimulation Script

The SubmitCheckMySimulation.sh script contains the command that submits the CheckMySimulation script to the cluster scheduler.

2.2.2.6.1. PBS – SubmitCheckMySimulation Script
# Submit job according to scheduler
qsub "$(PATH_CHECK_SCRIPT)" > $(JOB_ID_PATH)
2.2.2.6.2. SLURM – SubmitCheckMySimulation Script
# Submit job according to scheduler
sbatch "$(PATH_CHECK_SCRIPT)" > $(JOB_ID_PATH)
2.2.2.7. RunCommandMySimulation Script

The RunCommandMySimulation.sh script is used to run Speos HPC control commands.

You do not need to submit it to the scheduler: you can execute it directly on the login node, as it only sends a command to the Speos HPC process that is running a simulation.

2.2.2.7.1. PBS – RunCommandMySimulation Script
# Load the user environment

# Run control command of SPEOSHPC
#./SPEOSHPC.x -sv5 "../SV5/DirectSimulation/LG_50M_Colorimetric.sv5/LG_50M_Colorimetric.sv5" -stop 0
"$(SPEOSHPC_EXE)" -sv5 "$(SV5PATH)" $(SPEOSHPC_CMD)
2.2.2.7.2. SLURM – RunCommandMySimulation Script
# Load the Intel redistributables
. /ansys_inc/vXXX/commonfiles/MPI/Intel/2021.8.0/linx64/env/vars.sh

export ANSYSLMD_LICENSE_FILE=LICENSE_SERVER
export TMPDIR="/tmp"

#./SPEOSHPC.x -sv5 "../SV5/DirectSimulation/LG_50M_Colorimetric.sv5/LG_50M_Colorimetric.sv5" -stop 0
"$(SPEOSHPC_EXE)" -sv5 "$(SV5PATH)" $(SPEOSHPC_CMD)  
2.2.2.8. CancelMySimulation

The CancelMySimulation.sh script is used to cancel a pending or starting job.

2.2.2.8.1. PBS – CancelMySimulation Script
qdel $(JOB_ID)
2.2.2.8.2. SLURM – CancelMySimulation Script
scancel $(JOB_ID)
2.2.2.9. ParseJobID

ParseJobID retrieves the job ID from the scheduler output and writes it, alone, to a *.txt file that is then used by the other scripts.

ParseJobID is useful for the SLURM scheduler, whose sbatch command prefixes the job ID with "Submitted batch job" in its output.

# parse output of scheduler submitting command to get job id
sed -i "s~Submitted batch job ~~" $(JOB_ID_PATH)
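
For example (the job ID file name is illustrative; the actual path comes from $(JOB_ID_PATH)):

# Before: the file contains the raw sbatch output, e.g. "Submitted batch job 12345"
sed -i "s~Submitted batch job ~~" job_id.txt
# After: the file contains only the job ID, e.g. "12345"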

2.3. Speos HPC on Linux using Scripts

To run a simulation in a Linux environment using scripts and command lines, download and install a terminal emulator and an SFTP client on the workstations that need to communicate with the cluster machine.

  • PuTTY allows you to connect to a Linux machine from a Windows machine, using a secured connection, and emulates a Linux console.

  • WinSCP is an SFTP client for Windows that allows you to securely transfer files from a local machine to a remote machine.

2.3.1. Configuring PuTTY
  1. Ask your Information Technician for the login access of the cluster Head Node (Host Name or IP address, and username).

  2. In the PuTTY Configuration window, in the Session category, specify the Head Node Host Name (or IP address) of the cluster.

    If the connection type that the Information Technician configured is SSH, you may require Public and Private Keys to access the Head Node.

  3. In the Connection category, in the Data sub-category, specify the Auto-login username.

  4. In the Session category, in the Saved Sessions field, name the session and click Save.

2.3.2. Configuring WinSCP

Once you have configured PuTTY, configure WinSCP to transfer the files between the Head Node and your computer.

  1. In the Login - WinSCP window, click Tools and select Import Sites.

  2. Select Import from PuTTY.

  3. Select the site corresponding to the cluster.

    The site corresponds to the session you configured on PuTTY.

  4. Click OK.