To run LS-DYNA in parallel on a single machine, no additional setup is required.
To run LS-DYNA in parallel across multiple machines (for example, a cluster), your system must meet the following additional requirements to run a distributed solution.
Homogeneous network: All machines in the cluster must be of the same type and have the same OS level, chip set, and interconnects.
You must be able to remotely log in to all machines, and all machines in the cluster must have identical directory structures (including the Ansys 2025 R1 installation, MPI installation, and working directories). Do not change or rename directories after you've launched LS-DYNA.
All machines in the cluster must have Ansys 2025 R1 installed, or must have an NFS mount to the Ansys 2025 R1 installation. If not installed on a shared file system, Ansys 2025 R1 must be installed in the same directory path on all systems.
All machines must have the same version of MPI software installed and running. The MPI software supported by LS-DYNA in parallel depends on the platform; Table 3.2: Platforms and MPI Software lists the supported MPI software and version level for each platform.
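As a quick check of the requirements above, you can verify passwordless remote login and a matching installation path from one node. The following is a minimal sketch; the hostnames node1 and node2 and the install path are placeholders for your own cluster:

for host in node1 node2; do
    ssh "$host" "ls -d /ansys_inc/v251" || echo "requirement check failed on $host"
done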
The files needed to run LS-DYNA in parallel using Intel MPI are included on the installation media and are installed automatically when you install Ansys 2025 R1. Therefore, no additional software is needed when running on a single machine (for example, a laptop, a workstation, or a single compute node of a cluster) on Windows or Linux, or when running on a Linux cluster. However, when running across multiple Windows machines, you must install the MPI software separately, as described later in this section.
Install Ansys 2025 R1 following the instructions in the Ansys, Inc. Installation Guides for your platform. Be sure to complete the installation, including all required post-installation procedures.
To run LS-DYNA in parallel on a cluster, you must:
Install Ansys 2025 R1 on all machines in the cluster, in the exact same location on each machine.
For Windows, you can use shared drives and symbolic links; example commands follow these steps. Install Ansys 2025 R1 on one Windows machine (for example, C:\Program Files\ANSYS Inc\V251) and then share that installation folder. On the other machines in the cluster, create a symbolic link (at C:\Program Files\ANSYS Inc\V251) that points to the UNC path for the shared folder. On Windows systems, you must use the Universal Naming Convention (UNC) for all file and path names for LS-DYNA in parallel to work correctly.
For Linux, you can use exported NFS file systems. Install Ansys 2025 R1 on one Linux machine (for example, at /ansys_inc/v251), and then export this directory. On the other machines in the cluster, create an NFS mount from the first machine to the same local directory (/ansys_inc/v251).
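The following commands are a minimal sketch of both approaches; the host name headnode, the share name ANSYS_V251, and the export options are assumptions for illustration, not required values.

On Windows, on each machine other than the one holding the installation, run from an elevated command prompt:

mklink /D "C:\Program Files\ANSYS Inc\V251" \\headnode\ANSYS_V251

On Linux, after adding an /etc/exports entry such as /ansys_inc/v251 *(ro) on the first machine and running exportfs -a there, mount it on each other machine:

mkdir -p /ansys_inc/v251
mount -t nfs headnode:/ansys_inc/v251 /ansys_inc/v251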
Installing MPI software on Windows
You can install Intel MPI from the installation launcher by choosing Intel-MPI 2021.8.0. For installation instructions, see the Ansys, Inc. Installation Guides.
Microsoft HPC Pack (Windows HPC Server 2016)
You must complete certain post-installation steps before running LS-DYNA in parallel on a Microsoft HPC Server 2016 system. The post-installation instructions provided below assume that Microsoft HPC Server 2016 and Microsoft HPC Pack (which includes MS MPI) are already installed on your system. The post-installation instructions can be found in the following README files:
Program Files\ANSYS Inc\V251\tp\MPI\WindowsHPC\README.mht
or
Program Files\ANSYS Inc\V251\tp\MPI\WindowsHPC\README.docx
The user must be a registered user on the HPC cluster.
"Client utilities" from Microsoft HPC Pack must be installed on the computer which submits the job. Use the same version of HPC Pack as is used on the HPC cluster.
Store the credentials for submitting to the cluster by running the following command in a command prompt, substituting MYDOMAIN, myusername, and myhpcserver with your own values:

hpccred setcreds /user:MYDOMAIN\myusername /scheduler:myhpcserver
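To confirm that the credentials were stored, HPC Pack also provides a listing command; myhpcserver is the same placeholder as above:

hpccred listcreds /scheduler:myhpcserver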
The input and solver files must be accessible on the compute nodes of the cluster. This typically means that the input and solver files should be placed on a disk share and specified in LS-Run with their UNC paths; in other words, paths starting with \\FILESERVER\.
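For example, an input file stored on a file share might be specified in LS-Run as follows; the server name FILESERVER, the share dyna_jobs, and the file name are placeholders:

\\FILESERVER\dyna_jobs\crash_model\input.k

A local path such as C:\jobs\input.k resolves only on the machine that owns the disk, which is why UNC paths are required for the compute nodes.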