This section contains information about feature enhancements that can affect program behavior or analysis results in ways that you may not expect. Also covered are known incompatibilities, notable issues and defects that have been resolved, and information about replacement capabilities for features that have been removed.
The following topics offer supplemental 2026 R1 product-update information presented by the Mechanical APDL development and testing teams:
If you are upgrading across several releases, you may find it helpful to consult the Update Guide sections of the archived release notes for Mechanical APDL.
For information about past, present, and future operating system support, see the Platform Support section of the Ansys Website.
Mechanical APDL Release 2026 R1 can read database files from all prior Mechanical APDL releases. Due to ongoing product improvements and defect resolutions, however, results obtained from old databases running in new releases may differ somewhat from those obtained previously.
The following Mechanical APDL feature updates in Release 2026 R1 are known to produce program behaviors or analysis results that differ from those of the prior release:
- 10.2.1. Change in .ldhi Files
- 10.2.2. Changes in Automatic Hybrid Parallel
- 10.2.3. Change in Internal Multipoint Constraint (MPC) Reporting in .ldhi and .cnd Files
- 10.2.4. Moment Convergence Check at Pilot Node of Target Elements with Rotational DOFs
- 10.2.5. Harmonic Balance Method Uses Sparse Solver by Default
- 10.2.6. Output Displays Elapsed Time
- 10.2.7. Change in Format of Nonlinear Solution Monitor File (.mntr)
- 10.2.8. Restart Mechanism Improves Conjugate Gradient (CG) Solver Robustness
- 10.2.9. Change in Preconditioned Conjugate Gradient (PCG) Lanczos Eigensolver
- 10.2.10. Output behavior of mapdl.run in a *PYTHON block
- 10.2.11. Change in Impedance Sheet Formulation for Acoustic Analysis
- 10.2.12. Change in Default for Ansys Product Improvement Program (APIP)
- 10.2.13. Change in Behavior Due to Backstress Scaling in the Chaboche Model for Non-Isothermal Loading
The Mechanical APDL application no longer writes duplicated tabular load data across load steps to *.ldhi files when the data remains unchanged. This reduces file size and improves efficiency. During restarts, the application reads the necessary data from the *.rdb file, ensuring continuity without requiring repeated table entries. In previous releases, duplicated tables were written to *.ldhi files in every load step.
The automatic hybrid parallel feature, also known as auto-hybrid, now prevents hardware oversubscription on large clusters that previously degraded performance. The following behavior changes are included:
Single-node runs at low core counts
Auto-hybrid now applies threads only to large domains when reducing core counts (for example, -np 12 becomes -np 4 with 8 threads). This change may alter the solution printout compared to previous releases. Previously, threads were distributed evenly across all domains, regardless of size.
Multi-node runs
Auto-hybrid activates only when all compute nodes have the same number of CPU cores (homogeneous configuration). Therefore, the number of domains and threads determined by auto-hybrid parallel heuristics may differ from previous releases. This change helps prevent hardware oversubscription and associated performance decline.
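The homogeneity condition above can be sketched as a simple check (an illustration in Python; the function name and inputs are ours, not part of the product):

```python
def auto_hybrid_allowed(cores_per_node):
    """Hypothetical sketch: auto-hybrid activates only when every compute
    node reports the same number of CPU cores (homogeneous cluster)."""
    return len(set(cores_per_node)) == 1

# A homogeneous three-node cluster qualifies; a mixed cluster does not.
homogeneous = auto_hybrid_allowed([32, 32, 32])   # True
mixed = auto_hybrid_allowed([32, 64, 32])         # False
```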
Oversubscription protection
A final configuration check now runs before execution. If oversubscription is detected, the program issues a fatal error and stops the run. To proceed, rerun the solution with the -nt 1 command-line argument, which disables the auto-hybrid feature. Previously, no such check existed, and oversubscription could lead to poor performance or instability.
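The pre-execution check can be illustrated with a small sketch (hypothetical names and logic; the actual detection criteria are internal to the program):

```python
def check_oversubscription(processes, threads, physical_cores):
    """Hypothetical sketch of the final configuration check: a fatal error
    stops the run when processes * threads exceeds the available cores."""
    if processes * threads > physical_cores:
        raise RuntimeError(
            "Oversubscription detected: rerun with -nt 1 to disable auto-hybrid."
        )

check_oversubscription(4, 4, 16)  # 16 workers on 16 cores: allowed
```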
Restart and downstream analysis behavior
When restarting or continuing an analysis, auto-hybrid restores the previous configuration of processes and threads. If a downstream analysis launches with more processes than the upstream analysis, auto-hybrid reduces the number of active processes to match the upstream configuration.
For example:
- Upstream modal analysis runs with -np 4.
- Downstream harmonic analysis launches with -np 8.
In previous releases, the harmonic analysis would reduce to -np 2 -nt 2. In Release 2026 R1, the harmonic analysis reduces to -np 2 and does not activate additional threads (-nt 1).
Recommendation: Launch downstream analyses with the same or fewer processes than the upstream analysis to avoid unexpected reductions.
MPC-based bonded contact (KEYOPT(2) = 2 with KEYOPT(12) = 5 or 6) that excludes initial geometrical penetration or gap and offset (KEYOPT(9) = 1) no longer reports PENE, GAP, and SLIDE in the element nodal solution or in the *.ldhi and *.cnd files.
Moment convergence is now checked at the pilot node of TARGE170 when target elements have rotational degrees of freedom (DOFs), improving accuracy. Previously, this check at the pilot nodes was ignored if there were no structural elements with rotational DOFs in the model.
The sparse solver is now used by default in harmonic balance method (HBM) analyses. This change is expected to improve overall performance, particularly for systems with many degrees of freedom. While the solver may introduce slight numerical differences in results, these are generally negligible and should not affect the accuracy or reliability of the analysis.
Output messages now report the elapsed time (wall clock time) since the start of the program, instead of CPU time. This change provides a more accurate and relevant measure of total runtime. It accounts for all delays, including those outside CPU execution, and avoids confusion caused by thread-level CPU time aggregation.
Example output:
***** ROUTINE COMPLETED *****  ELAPSED TIME = 3.461
*** NOTE ***     ELAPSED TIME = 12.982  TIME= 08:00:00
*** WARNING ***  ELAPSED TIME = 11.432  TIME= 09:00:00
*** WARNING ***  ELAPSED TIME = 65.347  TIME= 10:00:00
In previous releases, the output used the label CP to indicate CPU time. It measured only the time the processor spent executing instructions, excluding time spent on I/O operations, network delays, GPU acceleration, and other non-CPU activities. Additionally, in shared memory parallel (SMP) mode, CPU time was summed across all threads and cores. This could result in misleadingly high values, making it appear that certain simulation steps took longer than they did.
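The difference between the two measures can be demonstrated in a few lines of generic Python (an illustration only, not MAPDL code): wall-clock time includes waits such as I/O or network delays, while CPU time does not.

```python
import time

wall_start = time.perf_counter()  # wall-clock (elapsed) timer
cpu_start = time.process_time()   # CPU timer (sums time across threads)

time.sleep(0.25)  # stands in for I/O, network delay, or GPU work

wall_time = time.perf_counter() - wall_start
cpu_time = time.process_time() - cpu_start
# wall_time is about 0.25 s; cpu_time stays near zero because
# sleeping consumes essentially no CPU.
```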
The nonlinear solution monitor file (*.mntr) now includes an additional field between variable fields 1 (VARIAB 1) and 2 (VARIAB 2). This new field reports current memory usage (in GB) at each converged substep, enabling better tracking of memory consumption during nonlinear analysis.
In previous releases, the *.mntr file contained variable fields 1 through 4. With this change, fields 2 through 4 have shifted to become variable fields 3 through 5.
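Downstream scripts that read *.mntr records by column position must account for the shift. The sketch below is illustrative only: the column indices and the sample record are assumptions, not the documented file layout.

```python
def parse_mntr_fields(line):
    """Hypothetical parser: whitespace-split a monitor record and pull out
    the new memory field (assumed here to sit right after VARIAB 1, with
    the former fields 2-4 shifted to positions 3-5)."""
    cols = line.split()
    variab1 = float(cols[0])
    memory_gb = float(cols[1])                    # new field at 2026 R1
    variab3_to_5 = [float(c) for c in cols[2:5]]  # former fields 2-4
    return variab1, memory_gb, variab3_to_5

# Sample (made-up) converged-substep record: VARIAB 1, memory, VARIAB 3-5
fields = parse_mntr_fields("12.0 3.5 0.01 250.0 1.0")
```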
For more information, see MONITOR.
This release introduces a restart mechanism for all CG drivers to improve solver reliability. The enhancement may improve solutions for models that previously converged slowly (below 1.0E-6) or failed to converge. Some test cases may show a slight increase in iteration counts due to the additional cycle.
When the restart mechanism is triggered, the solver performs an extra cycle of iterations. This second cycle starts from the most accurate solution found in the first cycle. Both cycles use the same maximum number of iterations.
Additional behavior changes include:
- For linear systems with low condition number estimates and relaxed solver tolerances, the CG algorithm locally reduces the maximum number of iterations per cycle.
- During the first cycle, if the solution diverges or approaches within three orders of magnitude of the requested tolerance, the solver aborts the cycle and starts a new one.
In previous releases, when the solver reached the maximum number of iterations without meeting the tolerance, it would error out without completing the solution.
For more information, see Conjugate Gradient (CG) Solvers in the Basic Analysis Guide.
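The two-cycle behavior can be sketched with a toy conjugate-gradient solver (a minimal pure-Python implementation of our own; the actual CG drivers and their restart heuristics are internal to the program):

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def norm(v):
    return sum(x * x for x in v) ** 0.5

def cg_cycle(A, b, x, max_iter, tol):
    """One CG cycle; tracks the most accurate solution found so far."""
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    p = list(r)
    best_x, best_res = list(x), norm(r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        rr = sum(ri * ri for ri in r)
        alpha = rr / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        res = norm(r)
        if res < best_res:
            best_x, best_res = list(x), res
        if res < tol:
            break
        beta = sum(ri * ri for ri in r) / rr
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return best_x, best_res

def solve_with_restart(A, b, max_iter=50, tol=1e-10):
    """If the first cycle misses the tolerance, restart a second cycle
    from the best solution found, with the same iteration budget."""
    x, res = cg_cycle(A, b, [0.0] * len(b), max_iter, tol)
    if res >= tol:  # restart mechanism: one extra cycle
        x, res = cg_cycle(A, b, x, max_iter, tol)
    return x, res
```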
The PCG Lanczos eigensolver now uses a single stopping criterion for its internal conjugate gradient (CG) solvers. This criterion is based on the exact norm of the error and is expected to improve result accuracy. Some test cases might show a slight increase in the number of iterations.
In previous releases, internal CG solvers used multiple stopping criteria and stopped iterating when any one was met. Some of these criteria relied on approximate error norms.
For more information, see Conjugate Gradient (CG) Solvers in the Theory Reference.
Within a *PYTHON block, issuing mapdl.run(...) no longer displays output by default. Instead, it returns a string that can be printed by issuing print(mapdl.run(...)). To restore the previous behavior of automatically printing all output, set the environment variable MAPDL_PYTHON_WRITE_OUTPUT.
The updated impedance sheet formulation provides physically consistent behavior for low, matched, and high impedance values. When the sheet impedance is much lower than the characteristic impedance of the fluid (Zs ≪ ρ0c0), the surface behaves as a soft or limp sheet, allowing pressure release. When the sheet impedance equals the characteristic impedance of the fluid (Zs = ρ0c0), the sheet behaves as an almost perfectly transparent interface, producing negligible reflection and allowing the acoustic field to propagate as if no internal surface exists. When the sheet impedance is much larger than the fluid characteristic impedance (Zs ≫ ρ0c0), the sheet behaves as a rigid interface, strongly restricting normal velocity and introducing significant reflection. For details, see Impedance Sheet Approximation in the Theory Reference and Impedance Sheet in the Acoustic Analysis Guide.
The Ansys Product Improvement Program (APIP) now automatically enrolls participants. In previous releases, enrollment was off by default and required manual opt-in. You can opt out at any time by following the steps described here: The Ansys Product Improvement Program in the Operations Guide.
For non-isothermal loading, the backstress is now scaled using only the material parameter C of the nonlinear kinematic hardening model, resulting in changes to stress and strain results. In previous releases, non-physical results could occur, such as negative values for plastic work per volume.
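For orientation, the standard Chaboche kinematic hardening evolution and one common form of temperature rescaling are sketched below; the rescaling expression is our illustration of scaling "using only material parameter C," not the documented implementation:

```latex
% Chaboche nonlinear kinematic hardening (standard form, i-th backstress):
%   \dot{\boldsymbol{\alpha}}_i
%     = \tfrac{2}{3} C_i \,\dot{\boldsymbol{\varepsilon}}^{\mathrm{pl}}
%       - \gamma_i \,\boldsymbol{\alpha}_i \,\dot{p}
% Assumed rescaling on a temperature change T_n \to T_{n+1}, using only C_i:
\boldsymbol{\alpha}_i(T_{n+1})
  \;=\; \frac{C_i(T_{n+1})}{C_i(T_n)}\, \boldsymbol{\alpha}_i(T_n)
```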
The following incompatibilities with prior releases are known to exist at Release 2026 R1:
The force-distributed surface constraint cannot be applied to a single surface node. Instead, use a rigid surface constraint.
For the exponential pressure-penetration relationship (KEYOPT(6) = 3 on contact elements), calculation of the default value of real constant PZER has been revised to make it more robust and accurate.
The following issues are known to exist at Release 2026 R1:
Linear Perturbation Analysis Limitation – The total tangent stiffness matrix obtained from a Linear Perturbation basis analysis may include stress stiffness and load stiffness contributions (geometric stiffness). If the geometric stiffness contribution is significant, then the total tangent stiffness matrix, when used in a Linear Perturbation downstream analysis, may produce reaction forces not satisfying the equilibrium with the external forces.
Importing ACIS and CATIA Files – Check models carefully after importing from ACIS and CATIA as the third party libraries supporting these commands have changed.
GPU Acceleration Hangs on AMD Cards with Small Models – When running certain small models with the GPU accelerator enabled on AMD cards, specifically using three or more processes, the program may run very slowly and appear to hang.
Workaround: To avoid this issue, create the following environment variable:
GPU_MAX_HW_QUEUES=2
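On Linux, for example, the variable can be set in the shell that launches the run (the variable name and value are from the workaround above; the launch command itself is omitted):

```shell
# Limit the number of AMD GPU hardware queues to work around the hang
export GPU_MAX_HW_QUEUES=2
```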
On Windows, CTRL+C May Not Terminate All Related Child Processes – Using CTRL+C to terminate a distributed-memory parallel processing (-dis) run on Windows may leave child processes running.
Workaround: To guarantee a complete shutdown, use the /EXIT command. If you must use CTRL+C, you may need to manually remove remaining child processes using the Windows Task Manager.
Downstream Analysis Using Frequency Domain Decomposition May Crash After Hybrid Parallel Use – A downstream analysis that uses frequency domain decomposition may crash if the upstream analysis was run with hybrid parallel enabled. For example, if the upstream analysis runs with -np 4 -nt 2 and the downstream analysis runs with -np 8, the downstream analysis may crash when the hybrid parallel configuration is restored.
Workaround: To avoid the downstream crash, use one of the following methods:
- Match the upstream parallel configuration: Launch the downstream analysis with the same number of processes (-np) and threads (-nt) used in the upstream analysis. Use this method when the upstream analysis explicitly used hybrid parallel.
- Disable hybrid parallel: Launch the downstream analysis with -nt 1 to disable hybrid parallel. Use this method when the upstream analysis omitted -nt and automatic hybrid parallel was applied.
Possible Stress Calculation Failures in Nonlinear Adaptivity Analyses with TNM and Bergstrom-Boyce (BB) Material Models – Certain shear-dominant high deformation scenarios with the TNM or BB material models in a nonlinear adaptivity analysis can cause stress calculation failures after remeshing.
For issues discovered following publication of this document, see Mechanical APDL in the Ansys Known Issues and Limitations.
The following notable issues and defects for Mechanical APDL have been resolved at Release 2026 R1:
| ID | Resolution Description |
|---|---|
| 1167416 | The automatic hybrid parallel feature no longer oversubscribes hardware on large clusters starting in Service Pack 2025 R2 SP02. This resolves the performance decline observed in the previous release. For more information, see Changes in Automatic Hybrid Parallel. |
| | Automatic initialization in curve fitting for hyperelastic models with Prony series viscoelasticity no longer requires prior installation of the latest Ansys Material Lab (AML) Python module. |
| 1188443 | The issue that caused crashes on SUSE Linux Enterprise Server 15 machines has been resolved. |
| 1312569 | The performance decline that occurred when running AMD processors on Linux in Release 2025 R2 has been fixed. |
| 1370069 | The issue that caused crashes in DMP mode with Intel MPI and NVIDIA GPUs on Linux when running more than 50 processes has been fixed. |
If a legacy Mechanical APDL feature has been removed at Release 2026 R1, this section provides information about the Mechanical APDL replacement feature, its functional equivalent in another Ansys product, or another workaround.
Legacy features that have been archived also appear here. While archived features remain available for use, technical enhancement is unlikely to occur, and better alternatives are available and recommended in most cases.
For the contact elements CONTA172, CONTA174, CONTA175, and CONTA177, the following features have been undocumented:
- The real constant ICONT (initial contact closure)
- KEYOPT(5) = 4 (auto ICONT)
Instead, use the real constant CNOF and the CNCHECK,ADJUST command.
The following connection commands used to import early versions of CATIA (V4 and earlier) or ACIS files into Mechanical APDL have been archived and are considered obsolete. Instead, use one of the other products like the Mechanical application or Discovery or use one of the CAD/CAE packages supported in Mechanical APDL (see Introduction to the Connection Functionality in the Connection User's Guide).