2.3. Running a Distributed Solution

For large models, you can use the shared memory parallel (SMP) or massively parallel processing (MPP) capabilities of LS-DYNA to shorten the elapsed time of an analysis. To use either feature, you must purchase the appropriate number of Ansys LS-DYNA HPC licenses; the HPC license incorporates both SMP and MPP capabilities. Contact your Ansys sales representative for more information on purchasing the appropriate licenses.

2.3.1. Shared Memory Parallel Processing

The shared memory parallel processing capabilities allow you to distribute model-solving power over multiple processors on the same machine. To use this feature, your machine must have at least as many cores as the number of LS-DYNA processes, and you must have an Ansys LS-DYNA HPC license for each process beyond the first.

When you are using shared memory parallel processing, the calculations may be executed in a different order depending on CPU availability and the workload on each CPU. Because of this, you may see slight differences in the results when you run the same job multiple times. To avoid these differences, you can specify the number of CPUs as a negative number, which forces a consistent order of operations. Maintaining consistency can increase CPU time by up to 15%.
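
For illustration, an SMP run on 8 cores with consistency enabled might be launched as follows (a sketch: the solver executable name lsdyna and the input file model.k are placeholders for your installation):

lsdyna i=model.k ncpu=-8

Here ncpu=-8 requests 8 processors, and the negative sign enables the consistent operation order described above; ncpu=8 would run without it.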

2.3.2. Massively Parallel Processing

The massively parallel processing (MPP) capabilities of LS-DYNA allow you to run the LS-DYNA solver over a cluster of machines or use multiple processors on a single machine. To use the LS-DYNA MPP feature, you must have an Ansys LS-DYNA HPC license for each processor beyond the first one.

Before running an analysis using LS-DYNA MPP, you must have supported MPI software correctly installed, and the machines running LS-DYNA MPP must be properly configured.
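
For illustration, an MPP run on 8 processes using Intel MPI might be launched as follows (a sketch: the MPP solver executable name lsdyna_mpp and the input file model.k are placeholders for your installation):

mpiexec -np 8 lsdyna_mpp i=model.k

On a cluster, a host file or job scheduler integration determines which machines the processes are placed on.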

Table 2.1: LS-DYNA MPP MPI Support on Windows and Linux

MPI version for DYNA MPP    64-bit Windows    64-bit Linux
Intel MPI                   X                 X
MS MPI                      X                 n/a

2.3.3. Configuring LS-DYNA in Parallel

To run LS-DYNA in parallel on a single machine, no additional setup is required.

To run an analysis with LS-DYNA in parallel on a cluster, some configuration is required as described in the following sections:

2.3.3.1. Prerequisites for Running LS-DYNA in Parallel

If you are running on a single machine, there are no additional requirements for running a distributed solution.

If you are running across multiple machines (for example, a cluster), your system must meet these additional requirements to run a distributed solution.

  • Homogeneous network: All machines in the cluster must be of the same type and have the same OS level, chip set, and interconnects.

  • You must be able to remotely log in to all machines (see the verification sketch after this list), and all machines in the cluster must have identical directory structures (including the Ansys 2024 R2 installation, MPI installation, and working directories). Do not change or rename directories after you have launched LS-DYNA.

  • All machines in the cluster must have Ansys 2024 R2 installed, or must have an NFS mount to the Ansys 2024 R2 installation. If not installed on a shared file system, Ansys 2024 R2 must be installed in the same directory path on all systems.

  • All machines must have the same version of MPI software installed and running. Table 2.2: Platforms and MPI Software shows the MPI software and version level supported for each platform.
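
For example, on Linux you can verify passwordless remote login from the head node with a quick check such as the following (a sketch: node2 is a hypothetical cluster host name):

ssh node2 hostname

If the command prints the remote host name without prompting for a password, remote login is configured as MPI requires.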

2.3.3.2. MPI Software

The MPI software supported by LS-DYNA in parallel depends on the platform (see Table 2.2: Platforms and MPI Software).

The files needed to run LS-DYNA in parallel using Intel MPI are included on the installation media and are installed automatically when you install Ansys 2024 R2. Therefore, no additional software is needed when running on a single machine (for example, a laptop, a workstation, or a single compute node of a cluster) on Windows or Linux, or when running on a Linux cluster. However, to run across multiple Windows machines you must use a cluster setup and install the MPI software separately, as described later in this section.

Table 2.2: Platforms and MPI Software

Platform                         MPI Software
Linux                            Intel MPI 2018.3.222
Windows 10 (single machine)      Intel MPI 2018.3.210 [a]
Windows Server 2016 (cluster)    Microsoft HPC Pack (MS MPI v10.1.12)

[a] MS MPI is an alternative to Intel MPI for a single machine on Windows.


2.3.3.3. Installing the Software

Install Ansys 2024 R2 following the instructions in the Ansys, Inc. Installation Guides for your platform. Be sure to complete the installation, including all required post-installation procedures.

To run LS-DYNA in parallel on a cluster, you must:

  • Install Ansys 2024 R2 on all machines in the cluster, in the exact same location on each machine.

  • For Windows, you can use shared drives and symbolic links. Install Ansys 2024 R2 on one Windows machine (for example, in C:\Program Files\ANSYS Inc\V242) and then share that installation folder. On the other machines in the cluster, create a symbolic link (at C:\Program Files\ANSYS Inc\V242) that points to the UNC path of the shared folder; see the example commands after this list. Note that on Windows systems you must use the Universal Naming Convention (UNC) for all file and path names for LS-DYNA in parallel to work correctly.

  • For Linux, you can use exported NFS file systems. Install Ansys 2024 R2 on one Linux machine (for example, at /ansys_inc/v242), and then export this directory. On the other machines in the cluster, create an NFS mount from the first machine to the same local directory (/ansys_inc/v242); see the example commands after this list.
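
For illustration, the following commands sketch both setups (the host name headnode and the share name ANSYS are hypothetical; adjust them for your cluster):

mklink /D "C:\Program Files\ANSYS Inc\V242" "\\headnode\ANSYS\V242"

mount -t nfs headnode:/ansys_inc/v242 /ansys_inc/v242

On Windows, run the mklink command from an elevated command prompt on each compute node. On Linux, run the mount command on each compute node, or add a corresponding entry to /etc/fstab to make the mount persistent.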

Installing MPI software on Windows

You can install Intel MPI from the installation launcher by choosing Install MPI for Ansys Parallel Processing. For installation instructions see Intel-MPI 2021.8.0 Installation Instructions in the Ansys, Inc. Installation Guides.

Microsoft HPC Pack (Windows HPC Server 2016)

You must complete certain post-installation steps before running LS-DYNA in parallel on a Microsoft HPC Server 2016 system. The post-installation instructions provided below assume that Microsoft HPC Server 2016 and Microsoft HPC Pack (which includes MS MPI) are already installed on your system. The post-installation instructions can be found in the following README files:

Program Files\ANSYS Inc\V242\commonfiles\MPI\WindowsHPC\README.mht

or

Program Files\ANSYS Inc\V242\commonfiles\MPI\WindowsHPC\README.docx

The user must be a registered user on the HPC cluster.

"Client utilities" from Microsoft HPC Pack must be installed on the computer which submits the job. Use the same version of HPC Pack as is used on the HPC cluster.

Store the credentials for submitting jobs to the cluster by running the following command in a command prompt, substituting your own values for MYDOMAIN, myusername, and myhpcserver:

hpccred setcreds /user:MYDOMAIN\myusername /scheduler:myhpcserver

The input and solver files must be accessible on the compute nodes of the cluster. Typically, this means placing the input and solver files on a disk share and specifying them with their UNC paths, that is, paths starting with \\FILESERVER\.
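
As an illustration, a job submitted to the HPC Pack scheduler might look like the following (a sketch: myhpcserver and \\FILESERVER\ reuse the placeholders above, while the core count, the MPP solver executable name lsdyna_mpp, and the share layout are hypothetical):

job submit /scheduler:myhpcserver /numcores:8 mpiexec lsdyna_mpp i=\\FILESERVER\share\model.k

HPC Pack starts mpiexec on the allocated nodes, so the input file must be given as a UNC path that is visible from every compute node.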