3. Solvers

3.1. Distributed-Memory Parallel Processing Enhancements

Enhancements include:

  • Modifications to the default processor-pinning strategy and communication protocols (supplied to Open MPI as command-line options) have yielded competitive performance on both single machines and HPC clusters with AMD-based processors. (See the example launch line after this list.)

  • The default message passing interface (MPI) software has been upgraded to Intel MPI 2021.11.0 for both Linux and Windows. In prior releases, the default was Intel MPI 2021.10.0. For details and for other optional supported MPI software versions, see Platforms and MPI Software in the Parallel Processing Guide.
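
A hypothetical launch line for a distributed-memory run is sketched below; the executable name depends on the installed release, and the file names are illustrative. Intel MPI is the default implementation when -mpi is omitted.

    # Hypothetical launch line; executable name varies by release.
    # -dis requests distributed-memory parallel, -np sets the process count,
    # and -mpi selects the MPI implementation (intelmpi is the default).
    ansys251 -dis -np 64 -mpi openmpi -b -i model.dat -o model.out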

3.2. Enhanced Memory Reduction for Models with Constraint Equations and High Core Counts

A modified internal architecture for constraint equations significantly reduces memory usage for models that contain a large number of constraint equations and run on high core counts (32 or more cores).
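
For reference, a constraint equation ties degrees of freedom at different nodes together. A minimal sketch (node numbers and coefficients are illustrative):

    /PREP7
    ! Illustrative constraint equation: UX at node 1 minus UX at node 2 = 0
    CE,NEXT,0,1,UX,1,2,UX,-1

Models typically accumulate large numbers of such equations through commands like CERIG or through MPC-based contact, which is where this memory reduction applies.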

3.3. Reduced Peak Memory Usage for Uncompressed Node and Element Numbering

The internal data structures used to store number mappings have been redesigned, with a particular focus on node and element number mappings. This redesign significantly reduces Mechanical APDL peak memory usage when the numbering is highly uncompressed, that is, when the maximum defined node or element number is many times greater than the number of defined nodes or elements.
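
The following sketch illustrates highly uncompressed numbering (node numbers are hypothetical):

    /PREP7
    N,1,0,0,0          ! only two nodes are defined, but the maximum
    N,5000000,1,0,0    ! node number is 5,000,000: highly uncompressed
    ! NUMCMP,NODE would renumber the defined nodes as 1 and 2 (compressed)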

3.4. Newly Supported Devices for the GPU Accelerator

For Linux and Windows, the following cards have been tested and added to the list of recommended GPU devices for the GPU accelerator:

  • NVIDIA L40

  • NVIDIA RTX 6000 Ada

  • NVIDIA RTX 5000 Ada

  • NVIDIA RTX A5500

  • NVIDIA RTX A4500

For the complete list of recommended GPU accelerators, see Requirements for the GPU Accelerator in Mechanical APDL (Linux) and Requirements for the GPU Accelerator in Mechanical APDL (Windows) in the Ansys Installation Guides.
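
A hypothetical launch line requesting the GPU accelerator (the executable name depends on the installed release, and the file names are illustrative):

    # -acc nvidia enables the GPU accelerator capability;
    # -na sets the number of GPU accelerator devices to use.
    ansys251 -dis -np 16 -acc nvidia -na 1 -b -i model.dat -o model.out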

3.5. Improved Performance for the PCG Solver

The performance of the PCG solver when using one or more GPUs has been improved. Starting in this release, the PCG solver supports unsymmetric matrices when using one or more GPUs. Also, when using the PCG Lanczos eigensolver, additional calculations are now offloaded to the GPU for acceleration, further improving performance.
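
A minimal sketch of selecting the PCG Lanczos eigensolver, which benefits from the additional GPU offloading (the mode count is illustrative, and GPU acceleration is assumed to be requested at launch, for example with -acc nvidia):

    /SOLU
    ANTYPE,MODAL
    MODOPT,LANPCG,10    ! PCG Lanczos eigensolver; 10 modes is illustrative
    MXPAND,10           ! expand the mode shapes
    SOLVE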

3.6. Support for Mixed u-P Formulation with the PCG Solver

For static analyses and for transient analyses that use the full method, the PCG solver now supports the mixed u-P element formulation option for the following element types: PLANE182, PLANE183, MPC184, SOLID185, SOLID186, and SOLID187. This behavior is the new default for the LM_KEY option on the PCGOPT command (LM_KEY = OFF).
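
A minimal sketch combining a mixed u-P element with the PCG solver (material values are illustrative; geometry, mesh, and loads are omitted):

    /PREP7
    ET,1,SOLID187        ! 10-node tetrahedron
    KEYOPT,1,6,1         ! KEYOPT(6) = 1: mixed u-P formulation
    MP,EX,1,2E9          ! illustrative material: nearly incompressible
    MP,NUXY,1,0.49
    ! ... define geometry, mesh, and loads here ...
    FINISH
    /SOLU
    ANTYPE,STATIC
    EQSLV,PCG            ! PCG now supports mixed u-P (LM_KEY = OFF by default)
    SOLVE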

3.7. Beta Feature - Frequency Domain Decomposition for Modal Analyses

Modal analyses now support frequency domain decomposition (DDOPTION,FREQ) as a beta feature. For details, see Frequency Domain Decomposition for Modal Analyses in the Mechanical APDL Beta Features documentation.
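
A minimal sketch of a modal analysis using frequency domain decomposition (the eigensolver choice and mode count are illustrative; beta features must be enabled per your installation's procedure):

    /SOLU
    ANTYPE,MODAL
    MODOPT,LANB,50      ! Block Lanczos; 50 modes is illustrative
    DDOPTION,FREQ       ! beta: decompose across the frequency domain
    SOLVE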