Glossary

Ansys RSM Cluster (ARC)

The built-in cluster type provided by RSM. An ARC cluster operates in the same way that a commercial cluster does, running Ansys applications in local or distributed mode, but uses its own scheduling capability rather than a third-party job scheduler.

client application

A client application is an Ansys application that runs on the local RSM client machine and is used to submit jobs to RSM. Examples include Ansys Workbench, Ansys Fluent, Ansys CFX, and so on.

client-side integration

A client-side integration is a custom integration scenario in which RSM functionality is replaced by third-party scripts. Only a thin layer of the RSM architecture is involved, providing the APIs for executing the custom scripts, which are located on the client machine.

cluster

A cluster is a group of computers connected through a network to work as a centralized data processing resource. Jobs submitted to a cluster are managed by a queuing system to make optimal use of all available resources.

HPC queue

An HPC queue determines the machine(s) on which jobs will run when jobs are submitted to that queue. HPC queues are defined on the HPC side (for example, on a cluster submit host), and can be imported into the RSM Configuration application so that you can map them to RSM queues when defining configurations.

HPC staging directory

The HPC staging directory is the directory in which job input files are placed by the client application when a job is submitted to RSM. When defining a configuration in RSM, you specify whether the job will execute in the HPC staging directory, or in a local scratch directory on the execution node(s). If you choose the former option, the HPC staging directory will also serve as the job execution directory.

cluster-side integration

Cluster-side integration is a custom integration scenario in which RSM is used to submit solve jobs to a remote cluster (either supported or unsupported). In this scenario you are running in non-SSH mode, meaning that RSM can submit jobs directly to the cluster.

code template

A code template is an XML file containing code files (for example, C#, VB, JScript), references, and support files required by a job.
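As a purely hypothetical illustration (the element and attribute names below are invented for this sketch and do not reflect RSM's actual schema), a code template might group code files, references, and support files like this:

```xml
<!-- Hypothetical code template sketch; element names are illustrative only -->
<codeTemplate name="ExampleJob">
  <codeFiles>
    <codeFile language="C#">ExampleJobCode.cs</codeFile>
  </codeFiles>
  <references>
    <reference>Example.Utilities.dll</reference>
  </references>
  <supportFiles>
    <supportFile>helper.xml</supportFile>
  </supportFiles>
</codeTemplate>
```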

custom cluster integration

A custom cluster integration refers to the mechanism provided by RSM that allows third parties to use custom scripts to perform the tasks needed to integrate Ansys Workbench with the cluster. Both client-side and cluster-side customizations are possible.

daemon services

Daemon services are scripts or programs that run persistently in the background of a machine and are usually launched at startup. It is recommended that you install the RSM launcher service as a daemon service. This allows the launcher service to be started immediately, without rebooting, and to start automatically each time the machine is rebooted.

execution node

An execution node is a machine in a cluster that actually executes jobs that have been submitted. Jobs are distributed from the cluster head node/submit host to be run on available execution nodes.

head node

The head node is the machine in a cluster that is configured as the control center for communications between RSM and the cluster. Typically it serves as the submit host and distributes jobs across the cluster for execution.

job

A job consists of a job template, a job script, and a processing task submitted from a client application such as Ansys Workbench. An example of a job is the update of a group of design points for an Ansys Mechanical simulation.

job execution directory

The job execution directory is the solver working directory. If you specify that jobs will run in the HPC staging directory, the HPC staging directory will serve as the job execution directory. If you specify that jobs will run in a local scratch directory on the execution node(s), job input files will be transferred from the HPC staging directory to the local scratch directory, and files generated by the job will be transferred to the HPC staging directory so that client applications can access them.

job script

A job script is a component of an RSM job. It runs an instance of the client application on the execution node used to run the processing task.

job template

A job template is a component of an RSM job. It is an XML file that specifies input and output files of the client application.

LSF

IBM® Spectrum LSF is a batch queuing system supported by RSM.

non-root privileges

Non-root privileges give the user a limited subset of administrative privileges. With RSM, non-root privileges are conferred by an rsmadmin account (that is, by membership in the rsmadmins user group). It is recommended that non-root privileges be used for starting and running the RSM launcher service.

OS Copy

OS Copy is a method of file transfer provided by RSM which allows for full utilization of the network bandwidth and uses direct access to directories across machines.

parallel processing

In parallel processing, jobs are executed on multiple CPU cores simultaneously.

parallel environment (PE)

A parallel environment allows for parallel execution of jobs. By default, RSM is configured to support Shared Memory Parallel and Distributed Parallel environments for SGE clusters.

PBS Pro

Altair PBS Professional is a batch queuing system supported by RSM.

queue

A queue is a list of execution hosts that are suited to run a particular class of jobs. When you submit a job to RSM, you submit it to an RSM queue, which maps to an HPC queue. The HPC queue determines when and where the job will run based on resource requests and current available resources. Queue definitions are part of the configurations that are defined in RSM.

root privileges

Root privileges give the user administrative access to all commands and files on a Linux system. It is recommended that root privileges are not used for starting and running the RSM launcher service.

RSM Admins group

The RSM Admins group is a Windows user group that confers administrative privileges for RSM. Also refers to the privileges conferred on members of this group (that is, “RSM Admins privileges”).

RSM client

The RSM client is the local machine from which RSM jobs are submitted to an HPC resource. It runs both RSM and a client application such as Ansys Workbench.

RSM configuration

An RSM configuration is a set of properties defined in RSM that specifies information about an HPC resource and how RSM will communicate with that resource: for example, the network name of a cluster submit host, the file transfer method to be used, and RSM queues. Configurations are saved in .rsmcc files. If you store configurations in a shared location, RSM users can retrieve them and use them on their own machines.
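A minimal sketch of what such a configuration file could contain is shown below. The element names, values, and structure are invented for illustration and do not reflect the actual .rsmcc schema:

```xml
<!-- Hypothetical .rsmcc sketch; element names are illustrative only -->
<rsmConfiguration name="MyCluster">
  <submitHost>headnode.example.com</submitHost>
  <hpcType>LSF</hpcType>
  <fileTransfer method="OSCopy">
    <stagingDirectory>\\headnode\staging</stagingDirectory>
  </fileTransfer>
  <queues>
    <queue rsmName="High_Memory" hpcName="highmem"/>
  </queues>
</rsmConfiguration>
```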

RSM queue

An RSM queue is a queue that you define when creating configurations in the RSM Configuration application. When users submit jobs to RSM from client applications, they submit them to RSM queues. Each RSM queue maps to a specific RSM configuration and HPC queue. RSM queue definitions are saved as .rsmq files.
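Conceptually, a queue definition ties an RSM queue name to a configuration and an HPC queue. The fragment below is a hypothetical sketch of that mapping; the element names are illustrative only and do not reflect the actual .rsmq schema:

```xml
<!-- Hypothetical .rsmq sketch; element names are illustrative only -->
<rsmQueue name="High_Memory">
  <configuration>MyCluster.rsmcc</configuration>
  <hpcQueue>highmem</hpcQueue>
  <enabled>true</enabled>
</rsmQueue>
```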

rsmadmin user account

An rsmadmin user account is a Linux account with membership in the rsmadmins user group; as such, the account has RSM administrative privileges.

rsmadmins user group

The rsmadmins user group is a Linux user group that confers administrative privileges for RSM.

scratch directory

A scratch directory is a local directory on the execution node(s) in which solver files are stored during a job. Using a scratch directory is recommended to optimize performance when there is a slow network connection between the execution nodes and the HPC staging directory, or when the solver produces many relatively large files.

serial processing

In serial processing, jobs are executed on only one CPU core at a time.

SSH

Secure Shell is a network protocol providing a secure channel for the exchange of data between networked devices. RSM can use SSH for cross-platform communications, but native mode is the recommended method.

submit host

The submit host is the machine or cluster node that performs job scheduling. In most cases, the cluster submit host is a remote machine, but it can also be your local machine ("localhost").

UGE

Univa Grid Engine is a batch queuing system that is now called Altair Grid Engine.