Using the RSM Utilities application you can manually create, delete and list configurations and queues.
A configuration contains information about the HPC resource to which jobs will be submitted, and how RSM will work with the resource.
RSM queues are the queues that users will see in client applications when submitting jobs to RSM. Each RSM queue maps to an HPC queue and configuration.
The RSM Utilities application provides a way of manually creating an RSM configuration. For information on creating a configuration using the RSM Configuration application, and the settings specified in a configuration, see Specifying RSM Configuration Settings.
Configurations are saved to .rsmcc files in the RSM configuration directory. To determine the location of this directory, refer to Specifying a Directory for RSM Configuration Files.
To manually create a configuration, run the appropriate command below, appending options from the accompanying table to specify configuration settings:
Windows: Run the following command in the [RSMInstall]\bin directory:
rsm.exe config create cluster -type [hpc type]
Linux: Run the following command in the [RSMInstall]/Config/tools/linux directory:
rsmutils config create cluster -type [hpc type]
For [hpc type], specify the HPC type for which to create the configuration: default (see Example 7.1: Default Ansys RSM Cluster (ARC) Configuration), ARC, LSF, PBS, SLURM, SGE, UGE, or MSHPC.
Table 7.1: Options for Creating an RSM Configuration
Option (Windows | Linux) | Usage |
---|---|
-name | -n name | The name of the configuration as it appears in the list of configurations. Defaults to the specified HPC type. |
-rsmQueue | -rq name | The name of the RSM queue with which this configuration and the HPC queue will be associated. Defaults to the name of the HPC queue. |
-clusterQueue | -cq name | The name of the HPC queue to which the RSM queue and configuration will map. Required except for Ansys RSM Cluster (ARC) and Microsoft HPC (MSHPC) configurations. |
-submitHost | -sh machine | The machine name of the cluster submit host. Defaults to 'localhost'. |
-sshAccount | -ssh account | If SSH will be used for communication between a Windows RSM client and a Linux cluster submit host, this specifies the account to use on the remote SSH submit host. Password-less SSH is required. |
-platform | -p win | lin | The platform of the cluster submit host (Windows or Linux). This is always required. |
-transferType | -tt NO | RSM | OS | SCP | Specify how files will get to the HPC staging directory. NO = No file transfer needed. Client files will already be in an HPC staging directory. RSM = RSM uses TCP sockets to stream files from the client machine to the submit host. Use when the HPC staging directory is in a remote location that is not visible to client machines. OS = RSM finds the HPC staging directory via a Windows network share or Linux mount point, and copies files to it using the built-in operating system copy commands. Use when the HPC staging directory is a shared location that client machines can access. SCP = SSH/SCP will be used to transfer files from the client machine to the submit host. |
-stagingDir | -sd path | The HPC staging directory as the RSM client sees it. A Windows client will see the shared file system as a UNC path (for example, \\machine\shareName). A Linux client may mount the HPC staging directory such that the path appears different than it does on the cluster (for example, /mounts/cluster1/staging). Leave empty if using the no-file-transfer method. |
-stagingMapDirs | -sdmap path;path;... | The path to the shared file system as the cluster sees it. Multiple paths (separated by semicolons) are only supported in certain scenarios. |
-localScratch | -ls path | Local scratch path if jobs will run in a scratch directory local to the execution node. Leave empty to run jobs in the HPC staging directory. |
-scratchUnc | -su path | (Windows clusters only): UNC share path of -localScratch path not including the '\\machine\' portion. |
-peSmp | -ps name | (UGE/SGE only): Parallel Environment (PE) names for Shared Memory Parallel. If not specified, default will be 'pe_smp'. |
-peMpi | -pm name | (UGE/SGE only): Parallel Environment (PE) names for Distributed Parallel. If not specified, default will be 'pe_mpi'. |
-noCleanup | -nc | Keep job files in the HPC staging directory after the job has run. |
Use the examples below as a guide when generating configuration files.
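For instance, a minimal sketch of a SLURM configuration might look like the following (the submit host name slurmheadnode, the cluster queue batch, and the RSM queue name SLURM-BATCH are placeholder values, not defaults):
rsmutils config create cluster -type SLURM -name SLURMBATCH -submitHost slurmheadnode -platform lin -clusterQueue batch -rsmQueue SLURM-BATCH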
Example 7.1: Default Ansys RSM Cluster (ARC) Configuration
The default Ansys RSM Cluster (ARC) configuration is used to submit jobs to the local machine (on which the RSM configuration resides). Every RSM installation has a basic ARC cluster already configured.
Running the command rsm.exe | rsmutils config create cluster -type default is equivalent to running rsm.exe | rsmutils config create cluster -type ARC -name localhost -rq Local, where the name of the RSM configuration is localhost, and the RSM queue that is associated with this configuration is named Local.
A file named LOCALHOST.rsmcc is generated that contains the following settings:
<?xml version="1.0" encoding="utf-8"?> <ClusterConfiguration version="2"> <name>localhost</name> <type>ARC</type> <machine>localhost</machine> <submitHostPlatform>AllWindows</submitHostPlatform> <stagingDirectory /> <networkStagingDirectory /> <localScratchDirectory /> <fileCopyOption>None</fileCopyOption> <nativeSubmitOptions /> <useSsh>False</useSsh> <sshAccount /> <useSshForLinuxMpi>True</useSshForLinuxMpi> <deleteStagingDirectory>True</deleteStagingDirectory> <readonly>False</readonly> </ClusterConfiguration>
Example 7.2: LSF on Linux (Jobs Use Local Scratch Directory)
To configure RSM to use an LSF queue, and run jobs in the local scratch directory, you would run the following command:
rsmutils config create cluster -type LSF -name LSFSCRATCH -submitHost lsfheadnode -platform lin -localScratch /rsmtmp -rsmQueue LSF-SCRATCH -clusterQueue normal
The following arguments are used in this example:
-type LSF = cluster type is LSF
-name LSFSCRATCH = configuration name will be LSFSCRATCH
-submitHost lsfheadnode = machine name of the LSF cluster head node is lsfheadnode
-platform lin = platform of the cluster submit host is Linux
-localScratch /rsmtmp = jobs will run in the local scratch directory /rsmtmp
-rsmQueue LSF-SCRATCH = RSM queue name will be LSF-SCRATCH
-clusterQueue normal = LSF cluster queue name is normal
An LSFSCRATCH.rsmcc file is created which contains the following settings:
<?xml version="1.0" encoding="utf-8"?> <ClusterConfiguration version="1"> <name>LSFSCRATCH</name> <type>LSF</type> <machine>lsfheadnode</machine> <submitHostPlatform>AllLinux</submitHostPlatform> <stagingDirectory /> <networkStagingDirectory /> <localScratchDirectory>/rsmtemp</localScratchDirectory> <fileCopyOption>None</fileCopyOption> <nativeSubmitOptions /> <useSsh>False</useSsh> <sshAccount /> <useSshForLinuxMpi>True</useSshForLinuxMpi> <deleteStagingDirectory>True</deleteStagingDirectory> <readonly>False</readonly> </ClusterConfiguration>
In this example, an RSM queue name (LSF-SCRATCH) is specified. This will be the queue name displayed in client applications. If an RSM queue name is not included in the command line (that is, if the -rsmQueue LSF-SCRATCH argument is omitted), the actual cluster queue name will be displayed instead.
If you were to open the queues.rsmq file, you would see the LSF-SCRATCH queue added there:
<?xml version="1.0" encoding="utf-8"?> <Queues> <Queue version="1"> <name>LSF-SCRATCH</name> <clusterConfigurationName>LSFSCRATCH</clusterConfigurationName> <clusterQueueName>normal</clusterQueueName> <enabled>True</enabled> </Queue> </Queues>
The clusterConfigurationName value, LSFSCRATCH in this example, is what links the queue to the actual RSM configuration.
Example 7.3: SGE on Linux (Jobs Use Cluster Staging Directory)
This example uses two cluster queues: all.q and sgeshare1.
In the RSM client, the SGE queue sgeshare1 is referred to as SGE_SHARE. all.q is not aliased and is referred to as all.q.
Local scratch setup: Jobs submitted to the RSM all.q queue will run in a local scratch folder /rsmtmp.
rsmutils config create cluster -type UGE -name SGELOCAL -localScratch /rsmtmp -peMpi myPE -clusterQueue all.q
No local scratch: Jobs submitted to RSM's SGE_SHARE queue will run in the shared cluster staging directory.
rsmutils config create cluster -type SGE -name SGESHARE -peSmp myPE -clusterQueue sgeshare1 -rsmQueue SGE_SHARE
Note that you can specify either UGE or SGE for config create. They are the same.
Example 7.4: Microsoft Windows HPC on Windows Server 2019
Microsoft HPC does not define named queues.
We define two queues in RSM: HPC-SCRATCH and HPC-SHARE.
Local scratch setup: Jobs submitted to RSM's HPC-SCRATCH queue will run in a local scratch folder C:\RSMTemp. Note that the cluster nodes will all share this folder as \\[ExecutionNode]\RSMTemp.
rsm.exe config create cluster -type MSHPC -name HPCSCRATCH -localScratch C:\RSMTemp -scratchUnc RSMTemp -rsmQueue HPC-SCRATCH
No local scratch: Jobs submitted to RSM's HPC-SHARE queue will run in the shared cluster staging directory.
rsm.exe config create cluster -type MSHPC -name HPCSHARE -rsmQueue HPC-SHARE
To delete a configuration from the RSM configuration directory:
Windows: Run the following command in the [RSMInstall]\bin directory:
rsm.exe config delete -clusterconfig | -cc clusterConfigurationName
Linux: Run the following command in the [RSMInstall]/Config/tools/linux directory:
rsmutils config delete -clusterconfig | -cc clusterConfigurationName
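For example, to delete the LSFSCRATCH configuration created in Example 7.2 (assuming it exists in your RSM configuration directory), you would run rsm.exe config delete -cc LSFSCRATCH on Windows or rsmutils config delete -cc LSFSCRATCH on Linux.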
When creating an RSM queue you must associate the queue with an HPC queue. The RSM queue is what users see in client applications such as Workbench. The HPC queue is defined on the HPC side (for example, on the cluster submit host).
To create an RSM queue:
Windows: Run the following command in the [RSMInstall]\bin directory:
rsm.exe config create queue -name queueName -clusterconfig clusterConfigurationName -clusterQueue clusterQueueName
Linux: Run the following command in the [RSMInstall]/Config/tools/linux directory:
rsmutils config create queue -n queueName -cc clusterConfigurationName -cq clusterQueueName
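For example, to add an RSM queue named LSF-NIGHT that maps a hypothetical cluster queue named night to the LSFSCRATCH configuration from Example 7.2, you would run:
rsmutils config create queue -n LSF-NIGHT -cc LSFSCRATCH -cq night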
To delete an RSM queue:
Windows: Run the following command in the [RSMInstall]\bin directory:
rsm.exe config delete -rsmqueue | -rq rsmQueueName
Linux: Run the following command in the [RSMInstall]/Config/tools/linux directory:
rsmutils config delete -rsmqueue | -rq rsmQueueName
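For example, to delete the LSF-SCRATCH queue created in Example 7.2:
rsmutils config delete -rq LSF-SCRATCH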
The RSM configuration directory contains RSM configurations and queue definitions.
To list RSM configurations and queues:
Windows: Run the following commands in the [RSMInstall]\bin directory:
All configurations: rsm.exe config list
The following is a sample listing:
C:\Program Files\ANSYS Inc\v242\RSM\bin>rsm.exe config list
Configuration location: C:\Users\atester\AppData\Roaming\Ansys\v242\RSM
Queues:
  Default [WinHPC Cluster, Default]
  High_mem [ARC, high_mem]
  LM-WinHPC [WinHPC, LM]
  LSF-SCRATCH [LSFSCRATCH, normal]
  Local [localhost, local]
  XLM-WinHPC [WinHPC, XLM]
Configurations:
  ARC
  localhost
  LSFSCRATCH
  WinHPC
Specific configuration: rsm.exe config list -cc ConfigurationName
The following is a sample listing:
C:\Program Files\ANSYS Inc\v242\RSM\bin>rsm.exe config list -cc LSFSCRATCH
Configuration location: C:\Users\atester\AppData\Roaming\Ansys\v242\RSM
Showing single configuration LSFSCRATCH
Queues:
  LSF-SCRATCH [LSFSCRATCH, normal]
<ClusterConfiguration version="2">
  <name>LSFSCRATCH</name>
  <type>LSF</type>
  <machine>lsfheadnode</machine>
  <submitHostPlatform>allLinux</submitHostPlatform>
  <stagingDirectory />
  <networkStagingDirectory />
  <localScratchDirectory>/rsmtmp</localScratchDirectory>
  <fileCopyOption>None</fileCopyOption>
  <nativeSubmitOptions />
  <useSsh>False</useSsh>
  <sshAccount />
  <useSshForLinuxMpi>True</useSshForLinuxMpi>
  <deleteStagingDirectory>True</deleteStagingDirectory>
  <readonly>False</readonly>
</ClusterConfiguration>
Linux: Run the following commands in the [RSMInstall]/Config/tools/linux directory:
All configurations: rsmutils config list
Specific configuration: rsmutils config list -cc clusterConfigurationName
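For example, assuming the LSFSCRATCH configuration from Example 7.2 exists, rsmutils config list -cc LSFSCRATCH produces output in the same form as the Windows listing shown above.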