5.1. Selecting a Solver

Several methods for solving the system of simultaneous equations are available in the program: sparse direct solution, Preconditioned Conjugate Gradient (PCG) solution, Jacobi Conjugate Gradient (JCG) solution, Incomplete Cholesky Conjugate Gradient (ICCG) solution, and Quasi-Minimal Residual (QMR) solution. In addition, distributed versions of the sparse, PCG, and JCG solvers are available (refer to the Parallel Processing Guide).

Direct solvers are based on direct elimination of equations or decomposition of matrices. Their advantage is a high convergence rate; their disadvantages are high computational cost and increased memory requirements.

Iterative solvers begin with an initial guess and refine it iteratively until the solution is within an acceptable tolerance of the exact solution. Their typical advantages are faster performance and lower memory requirements than direct solvers; their disadvantage is that the solution may not converge.

To select a solver, issue the EQSLV command. See the command description for details about each solver.
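
For example, the following minimal sketch selects the PCG solver with a convergence tolerance of 1.0E-8 during solution setup (the tolerance value is illustrative; see the EQSLV command description for defaults and the full argument list):

    /SOLU               ! enter the solution processor
    EQSLV,PCG,1.0E-8    ! select the PCG iterative solver with a 1.0E-8 tolerance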

The following tables provide general guidelines that you may find useful when selecting an appropriate solver for a given problem. The first table lists solvers that can be run using shared memory parallelism on shared memory parallel hardware. The second table lists solvers that can be run using distributed memory parallelism on shared memory parallel hardware (for example, a desktop or workstation) or distributed memory parallel hardware (for example, a cluster). MDOF indicates million degrees of freedom.

Table 5.1: Shared Memory Solver Selection Guidelines

Sparse Direct Solver (direct elimination)
    Typical applications: When robustness and solution speed are required (nonlinear analysis); for linear analysis where iterative solvers are slow to converge (especially for ill-conditioned matrices, such as poorly shaped elements).
    Memory use: Out-of-core: 1 GB/MDOF; in-core: 10 GB/MDOF
    Disk (I/O) use: Out-of-core: 10 GB/MDOF; in-core: 1 GB/MDOF
    Advantages: High convergence rate.
    Disadvantages: Computationally costly; increased memory requirements.

PCG Solver (iterative solver)
    Typical applications: Reduces the disk I/O requirement relative to the sparse solver. Best for large models with solid elements and fine meshes. The most robust iterative solver in Mechanical APDL.
    Memory use: 0.3 GB/MDOF with MSAVE,ON; 1 GB/MDOF without MSAVE (see the example following this table).
    Disk (I/O) use: 0.5 GB/MDOF
    Advantages: The most robust iterative solver in this program; typically has minimal memory requirements.
    Disadvantages: May not converge.

JCG Solver (iterative solver)
    Typical applications: Best for single-field problems (thermal, magnetics, acoustics, and multiphysics). Uses a fast but simple preconditioner with minimal memory requirements. Not as robust as the PCG solver.
    Memory use: 0.5 GB/MDOF
    Disk (I/O) use: 0.5 GB/MDOF
    Advantages: Typically has faster performance and minimal memory requirements.
    Disadvantages: Not as efficient as the PCG solver; may not support distributed parallel processing.

ICCG Solver (iterative solver)
    Typical applications: More sophisticated preconditioner than JCG. Best for more difficult problems where JCG fails, such as unsymmetric thermal analyses.
    Memory use: 1.5 GB/MDOF
    Disk (I/O) use: 0.5 GB/MDOF
    Advantages: Typically has faster performance and minimal memory requirements.
    Disadvantages: Not as efficient as the PCG solver; does not support distributed parallel processing.

QMR Solver (iterative solver)
    Typical applications: Used for full harmonic analyses. Appropriate for symmetric, complex, definite, and indefinite matrices. Supports only 1 core.
    Memory use: 1.5 GB/MDOF
    Disk (I/O) use: 0.5 GB/MDOF
    Advantages: Typically has faster performance and minimal memory requirements.
    Disadvantages: Not as efficient as the PCG solver; does not support distributed parallel processing.
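
The PCG entry above notes that MSAVE,ON reduces memory use; a minimal sketch combining it with the PCG solver might look like the following (whether MSAVE takes effect depends on the element types and analysis options in the model; see the MSAVE command description):

    /SOLU               ! enter the solution processor
    EQSLV,PCG           ! select the PCG iterative solver (default tolerance)
    MSAVE,ON            ! request the memory-saving option where it is supported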

Table 5.2: Distributed Memory Solver Selection Guidelines

Distributed Memory Sparse Direct Solver
    Typical applications: Same as the sparse solver, but can also be run on distributed memory parallel hardware systems.
    Ideal model size: 500,000 DOF to 10 MDOF (works well outside this range).
    Memory use: Out-of-core: 1.5 GB/MDOF on the head compute node, 1.0 GB/MDOF on other compute nodes; in-core: 15 GB/MDOF on the head compute node, 10 GB/MDOF on other compute nodes.
    Disk (I/O) use: Out-of-core: 10 GB/MDOF; in-core: 1 GB/MDOF

Distributed Memory PCG Solver
    Typical applications: Same as the PCG solver, but can also be run on distributed memory parallel hardware systems.
    Ideal model size: 1 MDOF to 100 MDOF
    Memory use: 1.5-2.0 GB/MDOF in total*
    Disk (I/O) use: 0.5 GB/MDOF

Distributed Memory JCG Solver
    Typical applications: Same as the JCG solver, but can also be run on distributed memory parallel hardware systems. Not as robust as the distributed memory PCG or shared memory PCG solver.
    Ideal model size: 1 MDOF to 100 MDOF
    Memory use: 0.5 GB/MDOF in total*
    Disk (I/O) use: 0.5 GB/MDOF

* In total means the memory summed across all processes used in the solution.
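
As a worked example using the estimates above, a 10 MDOF model solved with the distributed memory PCG solver would need roughly 15 to 20 GB of memory summed across all processes (1.5-2.0 GB/MDOF × 10 MDOF) and about 5 GB of disk space, while the distributed memory sparse solver running in-core would need on the order of 150 GB on the head compute node alone (15 GB/MDOF × 10 MDOF).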

To use more than four processors, the shared memory and distributed memory solvers require HPC licenses. For information, see HPC Licensing in the Parallel Processing Guide.
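
For example, a batch launch that requests distributed memory parallelism on eight cores might look like the following sketch (the executable name varies by release, and the input and output file names here are placeholders; -dis selects distributed memory parallelism and -np sets the number of cores):

    ansysXXX -dis -np 8 -b -i model.inp -o model.out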