EQSLV
EQSLV, Lab, TOLER, MULT, --, KeepFile
Specifies the type of equation solver.
Important: Certain equation solvers have been archived and are not available on this page. For more information, see EQSLV in the Feature Archive.
Lab
Equation solver type:
SPARSE —
Sparse direct equation solver. Applicable to real-value or complex-value symmetric and unsymmetric matrices. Available only for STATIC, HARMIC (full method only), TRANS (full method only), SUBSTR, and PSD spectrum analysis types (ANTYPE). Can be used for nonlinear and linear analyses, especially nonlinear analysis where indefinite matrices are frequently encountered. Well suited for contact analysis where contact status alters the mesh topology. Other typical well-suited applications are: (a) models consisting of shell/beam or shell/beam and solid elements (b) models with a multi-branch structure, such as an automobile exhaust or a turbine fan. This is an alternative to iterative solvers since it combines both speed and robustness. Generally, it requires considerably more memory (~10x) than the PCG solver to obtain optimal performance (running totally in-core). When memory is limited, the solver works partly in-core and out-of-core, which can noticeably slow down the performance of the solver. See the BCSOPTION command for more details on the various modes of operation for this solver.
This solver can be run using shared-memory parallel (SMP), distributed-memory parallel (DMP), or hybrid parallel processing. For DMP, this solver preserves all of the merits of the classic or shared memory sparse solver. The total sum of memory (summed for all processes) is usually higher than the shared memory sparse solver. System configuration also affects the performance of the distributed memory parallel solver. If enough physical memory is available, running this solver in the in-core memory mode achieves optimal performance. The ideal configuration when using the out-of-core memory mode is to use one processor per machine on multiple machines (a cluster), spreading the I/O across the hard drives of each machine, assuming that you are using a high-speed network such as Infiniband to efficiently support all communication across the multiple machines.
This solver supports use of the GPU accelerator capability.
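A minimal input fragment may help tie the options above together. This is a sketch, not part of the original reference page; it assumes the BCSOPTION in-core memory option described in that command's own documentation.

```
! Sketch: sparse direct solver for a full transient analysis,
! requesting the in-core memory mode for optimal performance
! (see the BCSOPTION command documentation for details).
/SOLU
ANTYPE,TRANS
TRNOPT,FULL
EQSLV,SPARSE
BCSOPTION,,INCORE   ! run fully in-core when enough memory is available
```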
JCG —
Jacobi Conjugate Gradient iterative equation solver. Available only for STATIC and TRANS (full method only) analysis types (ANTYPE). Can be used for structural, thermal, and multiphysics applications. Applicable for symmetric, unsymmetric, complex, definite, and indefinite matrices. Efficient for heat transfer, electromagnetics, piezoelectrics, and acoustic field problems.
This solver can be run using shared-memory parallel (SMP), distributed-memory parallel (DMP), or hybrid parallel processing. For DMP, in addition to the limitations listed above, this solver only runs in a distributed parallel fashion for STATIC and TRANS (full method) analyses in which the stiffness is symmetric and only when not using the fast thermal option (THOPT). Otherwise, this solver disables distributed-memory parallelism at the onset of the solution, and shared-memory parallelism is used instead.
This solver supports use of the GPU accelerator capability. When using the GPU accelerator capability, in addition to the limitations listed above, this solver is available only for STATIC and TRANS (full method) analyses where the stiffness is symmetric and does not support the fast thermal option (THOPT).
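As a sketch (not from this page), the JCG solver might be selected for a thermal transient as follows; the default tolerance shown in the comment is the symmetric-matrix default given under TOLER below.

```
! Sketch: JCG iterative solver for a full-method thermal transient.
/SOLU
ANTYPE,TRANS
TRNOPT,FULL
EQSLV,JCG           ! symmetric thermal matrix; default TOLER = 1.0E-8
```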
PCG —
Preconditioned Conjugate Gradient iterative equation solver (originally licensed from Computational Applications and Systems Integration, Inc.). Requires less disk file space than SPARSE and is faster for large models. Useful for plates, shells, 3D models, large 2D models, and other problems having symmetric or unsymmetric, sparse matrices. Such matrices are typically positive definite or indefinite for symmetric full harmonic analyses that solve a symmetric complex number matrix system. The PCG solver can be used for:
single-field structural analyses involving symmetric or unsymmetric matrices
single-field acoustic analyses with pressure as the degree of freedom
single-field thermal analyses involving symmetric or unsymmetric matrices
single-field structural analyses involving symmetric matrices and mixed u-P formulations (Lagrange multipliers) from element types PLANE182, PLANE183, MPC184, SOLID185, SOLID186, and SOLID187
Requires twice as much memory as JCG. Available only for analysis types (ANTYPE): STATIC, TRANS (full method only), HARMIC (full method only), or MODAL (with PCG Lanczos option only). Also available for the use pass of substructure analyses (MATRIX50). The PCG solver can robustly handle models that involve the use of constraint and/or coupling equations (CE, CEINTF, CP, CPINTF, and CERIG). With this solver, you can use the MSAVE command to obtain a considerable memory savings.
The PCG solver can handle ill-conditioned problems by using a higher level of difficulty (see PCGOPT). Ill-conditioning arises from elements with high aspect ratios, contact, and plasticity.
This solver can be run in shared-memory parallel or distributed-memory parallel mode. In DMP mode, this solver preserves all of the merits of the classic or shared-memory PCG solver. The total sum of memory (summed for all processes) is about 30% more than the shared-memory PCG solver.
This solver supports use of the GPU accelerator capability.
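A hypothetical static run using the PCG solver, with the MSAVE memory-saving option mentioned above, could look like this (a sketch, not a prescribed setup):

```
! Sketch: PCG solver for a large linear static model.
/SOLU
ANTYPE,STATIC
EQSLV,PCG,1.0E-8    ! explicit tolerance (also the default)
MSAVE,ON            ! memory-saving option noted in the text above
```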
MIXED —
Mixed equation solver. Combines the advantages of the sparse solver and PCG solver. It uses an approximate triangular matrix and substitutions as the preconditioner for the PCG algorithm, improving both speed and memory efficiency without compromising accuracy. This method is available for STATIC and TRANS (full method only) analysis types. It typically requires less memory than the sparse solver and offers faster performance for a range of problems. The mixed solver can be executed using shared-memory parallel (SMP), distributed-memory parallel (DMP), or hybrid parallel processing.
The mixed solver has the following limitations; it does not support:
non-structural degrees of freedom (DOF)
SMART fracture crack growth
nonlinear mesh adaptivity (NLAD)
eXtended Finite Element Method (XFEM)
arc-length method
semi-implicit method for transient analyses
topological optimization method
additive manufacturing method
The mixed solver supports GPU acceleration. Ansys recommends using GPU acceleration with this solver, as it is optimized for faster solutions on GPUs. For more information, see Requirements for the GPU Accelerator in Mechanical APDL.
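Selecting the mixed solver is a one-line change; the fragment below is a sketch under the assumption that GPU acceleration, when desired, is enabled at program launch (for example via the -acc command-line option) rather than from this command.

```
! Sketch: mixed solver for a full-method transient structural run.
/SOLU
ANTYPE,TRANS
TRNOPT,FULL
EQSLV,MIXED         ! combines sparse-solver robustness with PCG memory use
```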
TOLER
Iterative solver tolerance value. Used only with the Jacobi Conjugate Gradient, Preconditioned Conjugate Gradient, and Mixed equation solvers.
For the (symmetric or unsymmetric) PCG solver, the default is 1.0E-8.
When using the PCG Lanczos mode extraction method, the default solver tolerance value is 1.0E-4.
For PCG solutions involving mixed u-P formulation element types (PLANE182, PLANE183, MPC184, SOLID185, SOLID186, and SOLID187), the default tolerance is modified to 1.0E-6 when an ill-conditioned matrix is detected internally. When ill-conditioning is not detected, the default remains as 1.0E-8.
For the JCG solver with symmetric matrices, the default is 1.0E-8.
For the JCG solver with unsymmetric matrices the default is 1.0E-6.
Iterations continue until the SRSS norm of the residual is less than TOLER times the norm of the applied load vector. For the PCG solver in the linear static analysis case, three error norms are used. If one of the error norms is smaller than TOLER, and the SRSS norm of the residual is smaller than 1.0E-2, convergence is assumed to have been reached. See Iterative Solver in the Mechanical APDL Theory Reference for details.
Note: When used with the Preconditioned Conjugate Gradient equation solver, TOLER can be modified between load steps (this is typically useful for nonlinear analysis).
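The note above permits changing TOLER between load steps of a PCG run; a sketch of that pattern (the specific values are illustrative only):

```
! Sketch: relaxing TOLER between load steps of a nonlinear PCG run.
/SOLU
EQSLV,PCG,1.0E-8
SOLVE               ! load step 1, tight tolerance
EQSLV,PCG,1.0E-4    ! looser tolerance for the next load step
SOLVE               ! load step 2
```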
MULT
Multiplier (defaults to 2.5 for nonlinear analyses; 1.0 for linear analyses) used to control the maximum number of iterations performed during convergence calculations. Used only with the Preconditioned Conjugate Gradient equation solver (PCG). The maximum number of iterations is equal to the multiplier (MULT) times the number of degrees of freedom (DOF). If MULT is input as a negative value, then the maximum number of iterations is equal to abs(MULT). Iterations continue until either the maximum number of iterations or solution convergence has been reached. In general, the default value for MULT is adequate for reaching convergence. However, for ill-conditioned matrices (that is, models containing elements with high aspect ratios or material type discontinuities), the multiplier may be used to increase the maximum number of iterations used to achieve convergence. The recommended range for the multiplier is 1.0 ≤ MULT ≤ 3.0. Normally, a value greater than 3.0 adds no further benefit toward convergence and merely increases time requirements. If the solution does not converge with 1.0 ≤ MULT ≤ 3.0, or in fewer than 10,000 iterations, then convergence is highly unlikely and further examination of the model is recommended. Rather than increasing the default value of MULT, consider increasing the level of difficulty (Lev_Diff) on the PCGOPT command.
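The two ways of controlling iteration effort described above can be sketched as follows; the level-of-difficulty value is illustrative, not a recommendation.

```
! Sketch: capping PCG iterations. A negative MULT is taken as an
! absolute iteration limit, per the description above.
EQSLV,PCG,,-5000    ! at most abs(-5000) = 5000 iterations
! For an ill-conditioned model, prefer raising the level of difficulty:
PCGOPT,3            ! Lev_Diff = 3 (see the PCGOPT command)
```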
--
Unused field.
KeepFile
Determines whether files from a SPARSE solver run should be deleted or retained. Applies only to Lab = SPARSE for static and full transient analyses.
Command Default
The sparse direct solver is the default solver for all analyses, with the exception of modal/buckling analyses.
For modal/buckling analyses, there is no default solver. You must specify a solver with the MODOPT or BUCOPT command. The specified solver automatically chooses the required internal equation solver (for example, MODOPT,LANPCG automatically uses EQSLV,PCG internally, and BUCOPT,LANB automatically uses EQSLV,SPARSE internally).
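The modal/buckling case described above can be illustrated with a short fragment (a sketch; the mode count is arbitrary):

```
! Sketch: for a modal analysis there is no EQSLV default; the
! mode-extraction method chosen on MODOPT selects the solver internally.
/SOLU
ANTYPE,MODAL
MODOPT,LANPCG,10    ! PCG Lanczos: uses the PCG solver internally
SOLVE
```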
Notes
The selection of a solver can affect the speed and accuracy of a solution. For a more detailed discussion of the merits of each solver, see Solution in the Basic Analysis Guide.
This command is also valid in PREP7.
Distributed-Memory Parallel (DMP) Restriction — All equation solvers are supported in a DMP analysis. However, the SPARSE, PCG, and MIXED solvers are the only distributed solvers that always run a fully distributed solution. The JCG solver runs in a fully distributed mode in some cases; in other cases, it does not.