31.4.2. Linux Cluster

It is assumed that Ansys Polyflow has been installed on the cluster and that the user is familiar with Slurm. In order to submit a job, the user must create a Slurm file as described below.

#!/bin/bash
## Slurm variables setup.
##SBATCH --account=<account name>
#SBATCH --mail-user=<email address>
#SBATCH --mail-type=ALL
#SBATCH --job-name=<job name>
#SBATCH --error=%x.out
#SBATCH --partition=<cluster partition>
#SBATCH --time=<time limit for the job>
#SBATCH --ntasks=<number of processes>
#SBATCH --ntasks-per-node=<number of processes per node>
#SBATCH --cpus-per-task=<number of CPUs per task>
## print job start time and slurm resources
date
echo "SLURM_JOB_ID		: "$SLURM_JOB_ID
echo "SLURM_JOB_NODELIST       : "$SLURM_JOB_NODELIST 
echo "SLURM_JOB_NUM_NODES      : "$SLURM_JOB_NUM_NODES 
echo "SLURM_NODELIST           : "$SLURM_NODELIST 
echo "SLURM_NTASKS_PER_NODE    : "$SLURM_NTASKS_PER_NODE
echo "SLURM_CPUS_PER_TASK      : "$SLURM_CPUS_PER_TASK
echo "working directory        : "$SLURM_SUBMIT_DIR
## Environment variables setup.
## Polyflow Command
<Ansys Inc installation>/v242/polyflow/bin/polyflow_mpi -slurm -ID <data file> \
    -keyword FORCE_SOLVER=MUMPS -OUT <listing file>

There are three sections in this file:

Slurm variables setup

The most important values to specify in the Slurm file are:

  • --ntasks=p: the total number of MPI processes,

  • --ntasks-per-node=q: the number of MPI processes per node,

  • --cpus-per-task=c: the number of cores per MPI process, that is, the number of cores used for the solver and for the matrix build.

From this, one deduces (see the worked example after this list):

  • the number of nodes involved in the computation: n = p/q,

  • the total number of cores involved in the computation: NCPUs = p × c.
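As a concrete illustration, assume a hypothetical job requesting 16 processes at 8 per node with 4 cores each (the numbers are arbitrary, chosen only for this example):

## Example (arbitrary values): p = 16, q = 8, c = 4
#SBATCH --ntasks=16
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=4
## Deduced: n = p/q = 16/8 = 2 nodes, NCPUs = p x c = 16 x 4 = 64 cores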

Environment variables setup

In this section, set the environment variables:

export <ENVIRONMENT VARIABLE>=<VALUE>
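For example, a typical setting on a cluster is the standard Ansys licensing variable (the port and host name below are placeholders for your site's license server):

export ANSYSLMD_LICENSE_FILE=1055@<license server>   ## Ansys license server (placeholder port@host)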

Command

Set the command that launches Ansys Polyflow as usual, with the Polyflow arguments (-ID, -OUT, …):

<Ansys Inc installation>/v242/polyflow/bin/polyflow_mpi -slurm \
    -keyword FORCE_SOLVER=MUMPS -ID <data file> -OUT <listing file>

However, note that some arguments are mandatory, while others are useless (ignored).

The mandatory arguments are:

-slurm: indicates that the MPI options are taken from the Slurm variables; the user therefore does not have to specify the MPI arguments.

The MUMPS solver must be used. If the MUMPS solver has not been selected during the Polydata session, you must either add the keyword FORCE_SOLVER MUMPS in the p3rc file or add the argument -keyword FORCE_SOLVER=MUMPS to the command.
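For reference, the corresponding entry in the p3rc file would then be the single line:

FORCE_SOLVER MUMPS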

The useless arguments are:

-th: automatically set to the number of cores per task.

In order to submit the job, type the command:

sbatch <slurm file>
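For instance, with a Slurm file named polyflow_job.slurm (an arbitrary name chosen here for illustration), the job can be submitted and then monitored with the standard Slurm commands:

sbatch polyflow_job.slurm
squeue -u $USER    ## check the status of the submitted job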