1. Setting up a DMP Analysis

This section describes the prerequisites, including software requirements, for running a DMP analysis and the steps necessary to set up the environment for DMP processing.

1.1. Prerequisites for Running a DMP Analysis

Whether you are running on a single machine or across multiple machines, the following applies:

  • By default, a DMP analysis uses four cores and does not require any HPC licenses. Additional licenses are needed to run a distributed solution with more than four cores. Several HPC license options are available. For more information, see HPC Licensing.

If you are running on a single machine, there are no additional requirements for running a distributed solution.

If you are running across multiple machines (for example, a cluster), your system must meet these additional requirements to run a distributed solution:

  • Homogeneous network: All machines in the cluster must be of the same machine type and have the same OS level, chip set, and interconnect.

  • You must be able to remotely log in to all machines, and all machines in the cluster must have identical directory structures (including the Ansys 2024 R2 installation, MPI installation, and working directories). Do not change or rename directories after you've launched Mechanical APDL. For more information, see Directory Structure Across Machines.

  • All machines in the cluster must have Ansys 2024 R2 installed, or must have an NFS mount to the Ansys 2024 R2 installation. If not installed on a shared file system, Ansys 2024 R2 must be installed in the same directory path on all systems.

  • All machines must have the same version of MPI software installed and running. The table below shows the MPI software and version level supported for each platform.
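Before moving on to the MPI details, a quick sanity check of the last three requirements can save time: confirm from the head node that every machine reports the same installation path and OS level. The following is a minimal sketch for a Linux cluster, assuming compute nodes named node1 and node2 (placeholders) and the default installation location:

for host in node1 node2
do
  ssh $host "ls -d /ansys_inc/v242 && uname -r"   # the same path and kernel level should be reported on every node
done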

1.1.1. MPI Software

The MPI software supported for DMP processing depends on the platform (see the table below).

The files needed to use Intel MPI, MS MPI, or Open MPI are included on the installation media and are installed automatically when you install Ansys 2024 R2. Therefore, no additional software is needed when running on a single machine (for example, a laptop, a workstation, or a single compute node of a cluster) on Windows or Linux, or when running on a Linux cluster. However, to run across multiple Windows machines you must use a cluster setup and install the MPI software separately (see Installing the Software later in this section).

Table 2: Platforms and MPI Software

Platform                                      MPI Software [a]
--------------------------------------------  ------------------------------------
Linux                                         Intel MPI 2021.11.0 (default) [b]
                                              Intel MPI 2018.3.222 (optional)
                                              Open MPI 4.0.5 (optional) [c]

Windows 10 or Windows 11,                     Intel MPI 2021.11.0 (default)
Windows Server 2022, or Windows Server 2019   MS MPI v10.1.12 (optional)
(single machine)

Windows Server 2019 (cluster)                 Microsoft HPC Pack 2019 (MS MPI) [d]

[a] Ansys chooses the default MPI based on robustness and performance. The optional MPI versions listed here are available if necessary for troubleshooting (see Using an alternative MPI version).

[b] If Mechanical APDL detects a Mellanox OFED driver version older than 5.0 or an AMD cluster with InfiniBand using any OFED, then Intel MPI 2018.3.222 is automatically set as the default. If it detects an AMD cluster with Amazon EFA, then the default is set to Open MPI 4.0.5 instead.

[c] Mellanox OFED driver version 4.4 or higher is required.

[d] If you are running across multiple Windows machines, you must use Microsoft HPC Pack (MS MPI) and the HPC Job Manager to start a DMP run (see Starting a DMP Analysis).


By default, Open MPI is launched with a set of runtime parameters chosen to improve performance, particularly on machines with AMD-based processors.
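If you need to switch to one of the optional MPI versions listed in the table above (for example, while troubleshooting), the MPI implementation can typically be selected on the solver command line. The following is a sketch for a Linux machine, assuming the -dis, -mpi, -np, -i, and -o options described in Starting a DMP Analysis via Command Line; the input and output file names are placeholders:

ansys242 -dis -mpi openmpi -np 8 -i input.dat -o output.out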

1.1.2. Installing the Software

Install Ansys 2024 R2 following the instructions in the Ansys, Inc. Installation Guide for your platform. Be sure to complete the installation, including all required post-installation procedures.

To run on a cluster, you must:

  • Install Ansys 2024 R2 on all machines in the cluster, in the exact same location on each machine.

  • For Windows, you can use shared drives and symbolic links. Install Ansys 2024 R2 on one Windows machine (for example, at C:\Program Files\ANSYS Inc\V242) and then share that installation folder. On the other machines in the cluster, create a symbolic link (at C:\Program Files\ANSYS Inc\V242) that points to the UNC path of the shared folder, as sketched in the example commands after this list. On Windows systems, you must use the Universal Naming Convention (UNC) for all file and path names for the DMP analysis to work correctly.

  • For Linux, you can use exported NFS file systems. Install Ansys 2024 R2 on one Linux machine (for example, at /ansys_inc/v242), and then export this directory. On the other machines in the cluster, create an NFS mount from the first machine to the same local directory (/ansys_inc/v242), as sketched in the example commands after this list.
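As an illustration of the two approaches above, the following commands sketch both setups; the machine name headnode and the share name AnsysV242 are placeholders, so substitute your own names and paths.

Windows (run in an elevated command prompt on each compute node, after sharing the installation folder on the head node as AnsysV242):

mklink /D "C:\Program Files\ANSYS Inc\V242" "\\headnode\AnsysV242"

Linux (export the directory from the head node, then mount it on each compute node):

# on the head node: add the export and publish it
echo "/ansys_inc/v242  *(ro,sync)" >> /etc/exports
exportfs -a

# on each compute node: create the mount point and mount the export
mkdir -p /ansys_inc/v242
mount -t nfs headnode:/ansys_inc/v242 /ansys_inc/v242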

Installation files for MS MPI on Windows

Microsoft MPI is installed and ready for use as part of the Ansys 2024 R2 installation, and no action is needed if you are running on a single machine. If you require MS MPI on another machine, the installer can be found at C:\Program Files\ANSYS Inc\V242\commonfiles\MPI\Microsoft\10.1.12498.18\Windows\MSMpiSetup.exe

Microsoft HPC Pack 2019 (Windows Server 2019)

You must complete certain post-installation steps before running a DMP analysis on a Windows Server 2019 system with Microsoft HPC Pack. These steps assume that Microsoft Windows Server 2019 and Microsoft HPC Pack (which includes MS MPI) are already installed on your system. The post-installation instructions can be found in the following README files:

C:\Program Files\ANSYS Inc\V242\commonfiles\MPI\WindowsHPC\README.mht

or

C:\Program Files\ANSYS Inc\V242\commonfiles\MPI\WindowsHPC\README.docx

Microsoft HPC Pack examples are also located in C:\Program Files\ANSYS Inc\V242\commonfiles\MPI\WindowsHPC. Jobs are submitted to the Microsoft HPC Job Manager either from the command line or the Job Manager GUI.

To submit a job via the GUI, go to Start > All Programs > Microsoft HPC Pack > HPC Job Manager. Then click on Create New Job from Description File.
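For command-line submission, Microsoft HPC Pack provides the job utility. The following is a rough sketch only, assuming a job description XML file named ansys_job.xml (a placeholder, for example one adapted from the examples directory above); check the HPC Pack documentation for the exact parameters supported by your version:

job new /jobfile:"C:\cluster_work\ansys_job.xml"
job submit /id:<job ID returned by the previous command>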

1.2. Setting Up the Cluster Environment for DMP

After you've ensured that your cluster meets the prerequisites and you have Ansys 2024 R2 and the correct version of MPI installed, you need to configure your distributed environment using the following procedure.

  1. Obtain the machine name for each machine on the cluster.

    Windows 10 (or Windows 11) and Windows Server 2019:

    From the Start menu, pick Settings > System > About. The full computer name is listed under PC Name. Note the name of each machine (not including the domain).

    Linux:

    Type hostname on each machine in the cluster. Note the name of each machine.

  2. Linux only: Determine whether the cluster uses the secure shell (ssh) or remote shell (rsh) protocol, then follow the corresponding steps below.

    • For ssh: Use the ssh-keygen command to generate a pair of authentication keys. Do not enter a passphrase. Then append the new public key to the list of authorized keys on each compute node in the cluster that you wish to use.

    • For rsh: Create a .rhosts file in the home directory. Add the name of each compute node you wish to use on a separate line in the .rhosts file. Change the permissions of the .rhosts file by issuing: chmod 600 .rhosts. Copy this .rhosts file to the home directory on each compute node in the cluster you wish to use.

    Verify communication between compute nodes on the cluster via ssh or rsh; you should not be prompted for a password. If you are, correct this before continuing. Note that all compute nodes must be able to communicate with all other compute nodes on the cluster (or at least all other nodes within the same partition/queue as defined by a job scheduler) via rsh or ssh without being prompted for a password (see the example commands after this procedure). For more information on using ssh/rsh without passwords, search online for "Passwordless SSH" or "Passwordless RSH", or see the man pages for ssh or rsh.

  3. Windows only: Verify that all required environment variables are properly set. If you followed the post-installation instructions described above for Microsoft HPC Pack (Windows HPC Server), these variables should be set automatically.

    On the head compute node, where Ansys 2024 R2 is installed, check these variables:

    ANSYS242_DIR=C:\Program Files\ANSYS Inc\v242\ansys

    ANSYSLIC_DIR=C:\Program Files\ANSYS Inc\Shared Files\Licensing

    where C:\Program Files\ANSYS Inc is the location of the product install and C:\Program Files\ANSYS Inc\Shared Files\Licensing is the location of the licensing install. If your installation locations are different than these, specify those paths instead.

    On Windows systems, you must use the Universal Naming Convention (UNC) for all Ansys, Inc. environment variables on the compute nodes for DMP to work correctly.

    On the compute nodes, check these variables:

    ANSYS242_DIR=\\head_node_machine_name\ANSYS Inc\v242\ansys

    ANSYSLIC_DIR=\\head_node_machine_name\ANSYS Inc\Shared Files\Licensing

  4. Windows only: Share out the ANSYS Inc directory on the head node with full permissions so that the compute nodes can access it (see the example command after this procedure).
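For step 2, the following is a minimal passwordless-ssh sketch, assuming an RSA key and a compute node named node2 (a placeholder); ssh-copy-id is one common way to append the public key to the remote authorized_keys list.

ssh-keygen -t rsa        # press Enter at the passphrase prompts to leave the passphrase empty
ssh-copy-id node2        # repeat for each compute node you wish to use
ssh node2 hostname       # should print the node name without prompting for a password

For step 4, one way to create the share from an elevated command prompt on the head node is the net share command. The grant to Everyone is illustrative only; adjust the permissions to your site's policy and keep the share name consistent with the UNC paths used in step 3.

net share "ANSYS Inc=C:\Program Files\ANSYS Inc" /GRANT:Everyone,FULL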

1.2.1. Optional Setup Tasks

The tasks explained in this section are optional. They are not required for a DMP analysis to run correctly, but they can improve usability and efficiency, depending on your system configuration.

On Linux systems, you can also set the following environment variables:

  • ANSYS_NETWORK_START - This is the time, in seconds, to wait before timing out on the start-up of the client (default is 15 seconds).

  • ANSYS_NETWORK_COMM - This is the time to wait, in seconds, before timing out while communicating with the client machine (default is 5 seconds).

  • ANS_SEE_RUN_COMMAND - Set this environment variable to 1 to display the actual command issued by the program.
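For example, in a bash shell, to increase the start-up timeout and echo the underlying commands (the timeout value shown is illustrative):

export ANSYS_NETWORK_START=30
export ANS_SEE_RUN_COMMAND=1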

On Windows systems, you can set the following environment variables to display the actual command issued by the program:

  • ANS_SEE_RUN = TRUE

  • ANS_CMD_NODIAG = TRUE
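For example, from a command prompt before launching the run (or set the equivalent system environment variables):

set ANS_SEE_RUN=TRUE
set ANS_CMD_NODIAG=TRUE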

1.2.2. Using the mpitest Program

The mpitest program performs a simple communication test to verify that the MPI software is set up correctly. The mpitest program should start without errors. If it does not, check your paths and permissions; correct any errors and rerun.

When running the mpitest program, you must use an even number of processes. We recommend starting with the simplest test: two processes running on a single node. This can be done via the procedures outlined below for each platform and MPI type.

The command line arguments -np, -machines, and -mpifile work with the mpitest program in the same manner as they do in a DMP analysis (see Starting a DMP Analysis via Command Line).

On Linux:

For Intel MPI (default), issue the following command:

mpitest242 -np 2

which is equivalent to:

mpitest242 -machines machine1:2

For Open MPI, issue the following command:

mpitest242 -mpi openmpi -np 2

which is equivalent to:

mpitest242 -mpi openmpi -machines machine1:2
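Once the single-node test succeeds, the same -machines option can be used to test communication between two compute nodes. The following is a sketch assuming two Linux nodes named machine1 and machine2 (placeholders), each running one process so that the total process count remains even:

mpitest242 -machines machine1:1:machine2:1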

On Windows:

For Intel MPI (default), issue the following command:

ansys242 -np 2 -mpitest

1.2.3. Interconnect Configuration

Because a significant amount of data must be transferred between processes during a distributed parallel simulation, a slow interconnect reduces performance. For optimal performance, we recommend an interconnect with a high communication bandwidth (2000 megabytes/second or higher) and a low communication latency (5 microseconds or lower).

Distributed-memory parallelism is supported on the following interconnects. Not all interconnects are available on all platforms; see the Platform Support section of the Ansys Website for a current list of supported interconnects. Other interconnects may work but have not been tested.

  • InfiniBand (recommended)

  • Omni-Path (recommended)

  • GigE
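On Linux, it can also be useful to confirm the interconnect driver and link state before launching a distributed run, because the default MPI selection depends on the OFED version (see the table footnotes earlier in this section). The following sketch uses two common InfiniBand utilities, assuming they are installed on the node:

ofed_info -s   # reports the installed Mellanox OFED version
ibstat         # reports the adapter state and link rate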

On Windows x64 systems, use the Network Wizard in the Compute Cluster Administrator to configure your interconnects. See the Compute Cluster Pack documentation for specific details on setting up the interconnects. You may need to ensure that Windows Firewall is disabled for distributed-memory parallelism to work correctly.
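To check whether Windows Firewall is currently enabled on a node, one option is to run the following from an elevated command prompt:

netsh advfirewall show allprofiles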