4.2. Starting a DMP Analysis

After you've completed the configuration steps, you can start a DMP analysis using any of several methods, all of which are explained here. We recommend using the Mechanical APDL Product Launcher to ensure the correct settings.

Notes on Running a DMP Analysis:

You can use an NFS mount to the Ansys 2024 R2 installation on Linux, or shared folders on Windows. However, we do not recommend using NFS mounts or shared folders for the working directories; doing so can significantly degrade performance.

Only the master process reads the config.ans file. A DMP analysis ignores the /CONFIG,NOELDB and /CONFIG,FSPLIT commands.

The program limits the number of processes to at most the number of physical cores on the machine. This avoids running the program on virtual cores (for example, cores created by hyperthreading), which typically results in poor per-core performance. For optimal performance, consider closing all other applications before launching Mechanical APDL.

4.2.1. Starting a DMP Analysis via the Launcher

Use the following procedure to start a DMP analysis via the launcher.

  1. Open the Mechanical APDL Product Launcher:

    Windows 10 (or Windows 11) and Windows Server 2019:

    Start > Ansys 2024 R2 > Mechanical APDL Product Launcher 2024 R2

    Linux:
    launcher242
  2. Select the correct environment and license.

  3. Go to the High Performance Computing Setup tab. Select Use Distributed Computing (DMP).


    Note:  On Linux systems, you cannot use the Launcher to start the program in interactive (GUI) mode with distributed computing; this combination is blocked.


    Specify the MPI type to be used for this distributed run. MPI types include:

    • Intel MPI

    • Open MPI (Linux only)

    • MS MPI (Windows only)

    See the MPI table at the beginning of this chapter for the specific MPI version for each platform. On Windows, you cannot specify multiple hosts or an MPI file.

    Choose whether you want to run on the local machine, specify multiple hosts, or specify an existing MPI file (such as an Intel MPI configuration file, or an Open MPI hostfile on Linux):

    • If local machine, specify the number of cores you want to use on that machine (indicated by the yellow arrow with -np in Figure 5.1: Product Launcher - High Performance Computing Setup).

    • If multiple hosts, use the New Host button to add machines to the Selected Hosts list.

    • If specifying an MPI file (Linux only), type the full path to the file or browse to it. If you type the path, it must be an absolute path.

    Additional Options for Linux Systems  —  You can choose to use the remote shell (rsh) protocol instead of the secure shell (ssh) protocol; ssh is the default.

    Also on the High Performance Computing Setup tab, you can select the Use launcher-specified working directory on all nodes option. This option applies the working directory specified on the File Management tab to the head compute node and all other compute nodes. If you select it, the identical directory structure matching that working directory must exist on all machines.

  4. Click Run to launch Mechanical APDL.

4.2.2. Starting a DMP Analysis via Command Line

You can also start a DMP analysis via the command line using the following procedures.

Running on a Local Host

If you are running a DMP analysis locally (that is, running across multiple cores on a single machine), you need to specify the number of cores you want to use via the -np command line option:

ansys242 -dis -np n

(For a general discussion on processes, threads, and cores in parallel processing, see Specifying Processes, Threads, and Cores in SMP, DMP, and Hybrid Parallel in the overview section.)

If you are using the default Intel MPI, you do not need to specify the MPI software via a command line option. To specify alternative MPI software, use the -mpi command line option. For example, if you run a job in batch mode on a local host using eight cores, with an input file named input1 and an output file named output1, the launch commands for Linux and Windows would be as shown below.

On Linux:

ansys242 -dis -np 8 -b < input1 > output1  (for default Intel MPI 2021.11.0)

or

ansys242 -dis -mpi intelmpi2018 -np 8 -b < input1 > output1  (for Intel MPI 2018.3.222)

or

ansys242 -dis -mpi openmpi -np 8 -b < input1 > output1  (for Open MPI)

On Windows:

ansys242 -dis -np 8 -b -i input1 -o output1  (for default Intel MPI 2021.11.0)

or

ansys242 -dis -mpi msmpi -np 8 -b -i input1 -o output1  (for MS MPI)

Running on Multiple Hosts

If you are running a DMP analysis across multiple hosts, you need to specify the number of cores you want to use on each machine:

ansys242 -dis -machines machine1:np:machine2:np:machine3:np

On Linux, you may also need to specify the shell protocol used by the MPI software. DMP analyses use the secure shell protocol by default, but in some cluster environments it may be necessary to force the use of the remote shell protocol. This can be done via the -usersh command line option.

Consider the following examples, which assume a batch-mode run on two machines (four cores on one machine and two cores on the other), with an input file named input1 and an output file named output1. The launch commands for Linux and Windows would be as shown below.

On Linux:

ansys242 -dis -b -machines machine1:4:machine2:2 < input1 > output1  (for default Intel MPI 2021.11.0)

or

ansys242 -dis -mpi intelmpi2018 -b -machines machine1:4:machine2:2 < input1 > output1  (for Intel MPI 2018.3.222)

or

ansys242 -dis -mpi openmpi -b -machines machine1:4:machine2:2 < input1 > output1 (for Open MPI)

or

ansys242 -dis -b -machines machine1:4:machine2:2 -usersh < input1 > output1  (for remote shell protocol between compute nodes)

On Windows, users must launch jobs via the Windows HPC Job Manager.

Specifying a Preferred Parallel Feature License

If you have more than one HPC license feature, you can use the -ppf command line option to specify which HPC license to use for the parallel run. See HPC Licensing for more information.
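
For example, a local-host run on Linux could be launched as follows; the feature name anshpc is a placeholder, so substitute the HPC license feature you want the run to consume:

ansys242 -dis -np 8 -ppf anshpc -b < input1 > output1  (anshpc is a placeholder license feature name)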

4.2.3. Starting a DMP Analysis via the HPC Job Manager

If you are running a DMP analysis across multiple Windows x64 systems, you must start it using the Microsoft HPC Pack (MS MPI) and the HPC Job Manager. For more information, refer to the following README files:

Program Files\ANSYS Inc\V242\commonfiles\MPI\WindowsHPC\README.mht

or

Program Files\ANSYS Inc\V242\commonfiles\MPI\WindowsHPC\README.docx

4.2.4. Starting a DMP Solution in the Mechanical Application (via Ansys Workbench)

If you are running Ansys Workbench, you can start a DMP solution in the Mechanical application. Go to Tools > Solve Process Settings; select the remote solve process you want to use and click the Advanced button. To enable distributed-memory parallel processing, ensure that Distribute Solution (if possible) is selected (this is the default). If necessary, enter any additional command line arguments to be submitted; the options are described in Starting a DMP Analysis via Command Line. If Distribute Solution (if possible) is selected, you do not need to specify the -dis flag on the command line.

If you are running a remote solution on multiple machines, use the -machines option to specify the machines and cores on which to run the job. If you are running a remote solution on one machine with multiple cores, specify the number of cores in the Max Number of Utilized Cores field; no command line arguments are needed.
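
For example, to run the remote solution across two machines, using four cores on one and two cores on the other, the additional command line arguments could be specified as follows (the machine names shown are illustrative):

-machines machine1:4:machine2:2  (machine names are illustrative)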

For more information on running a distributed solution in Ansys Workbench, see Using Solve Process Settings.

4.2.5. Using MPI Files

You can specify an existing MPI file (such as an Intel MPI configuration file or an Open MPI hostfile) on the command line rather than typing out multiple hosts or a complicated command line:

ansys242 -dis -mpifile file_name

The format of the mpifile is specific to the MPI library being used. Refer to the documentation provided by the MPI vendor for the proper syntax.

If the file is not in the current working directory, you will need to include the full path to the file. The file must reside on the local machine.
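
For example, on Linux the command might look like the following (the path and file name are illustrative):

ansys242 -dis -mpifile /home/user/run1/hosts.ompi  (illustrative path and file name)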

You cannot use the -mpifile option in conjunction with the -np (local host) or -machines (multiple hosts) options.

Example MPI files for Intel MPI and Open MPI are shown in the following sections. For use on your system, modify the hostnames (mach1, mach2), input filename (inputfile), and output filename (outputfile) accordingly. Additional command-line arguments, if needed, can be added at the end of each line.

4.2.5.1. Intel MPI Configuration File Examples

Intel MPI uses a configuration file to define the machine(s) that will be used for the simulation. A typical Intel MPI configuration file for two nodes, with two cores per node, is shown below.

On Linux:

-host mach1 -np 2 /ansys_inc/v242/ansys/bin/ansysdis242 -dis -b -i inputfile -o outputfile
-host mach2 -np 2 /ansys_inc/v242/ansys/bin/ansysdis242 -dis -b -i inputfile -o outputfile

4.2.5.2. Open MPI Hostfile Example

Open MPI uses hostfiles to define the machine(s) that will be used for the simulation. A hostfile is a simple text file with one line for each machine (including the head compute node as well as all other compute nodes) that specifies the machine name and the number of cores (slots) to use on that machine. The example below shows a typical hostfile for two machines, one using 3 cores and the other using 2.

On Linux:

mach1 slots=3 
mach2 slots=2 
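
If this hostfile were saved in the current working directory as hosts.ompi (an illustrative file name), the run could be launched with:

ansys242 -dis -mpifile hosts.ompi  (hosts.ompi is an illustrative file name)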

4.2.6. Directory Structure Across Machines

During a DMP run, the program writes files to the head compute node and all other compute nodes as the analysis progresses.

The working directory for each machine can be on a local drive or on a network shared drive. For optimal performance, use local disk storage rather than network storage. Set up the same working directory path structure on the head compute node and all other compute nodes.

When setting up your cluster environment to run a DMP analysis, consider that the program:

  • Cannot launch unless identical working directory structures have been set up on the head compute node and all other compute nodes.

  • Always uses the current working directory on the head compute node and expects identical directory structures to exist on all other compute nodes. (If you are using the launcher, the working directory specified on the File Management tab is the one that the program expects.)
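
For example, if a job is launched from the directory /home/user/dmp_run on the head compute node (an illustrative path), that same directory must also exist on every other compute node:

head compute node:    /home/user/dmp_run   (job launched from here; working directory)
compute node 2:       /home/user/dmp_run   (identical path must exist)
compute node 3:       /home/user/dmp_run   (identical path must exist)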