Known Issues and Limitations
Configuration and Access
Only one AWS account can be associated with Ansys Gateway powered by AWS.
If an AWS account has a non-default region enabled, Express setup will fail. To see which regions are enabled by default, go to the Regions and endpoints table in the AWS documentation and review the 'Active by default' column. (871255)
When a domain is associated with a tenant that was set up using the Manual setup option (Tenant1), and the same domain is associated as a secondary domain with another tenant that was set up using the Express setup option (Tenant2), any user who is not part of the Active Directory of Tenant1 cannot sign in to either tenant. When such a user attempts to sign in, an "Active Directory Membership Required" error is displayed. Even though the user is eligible to sign in to Tenant2, the tenant selector is not displayed because of the Active Directory issue on Tenant1, preventing the user from accessing either of the tenants. (934167)
If Ansys Gateway powered by AWS was set up using the Manual setup option, and an administrator attempts to add a user locally on the page using an email address that is associated with a User Principal Name or Mail field in Active Directory, a 'User successfully added' message is displayed, but the user is not actually added to the Users list. Adding a user locally using an email address that already exists in Active Directory is not supported. (1211724)
Operating System Images
In project spaces that support 2024 R2 and newer releases, the Convert to OS image action is not available. (1183581)
OS images created in 24R1- project spaces using Convert to OS image cannot be used to create virtual desktops in 24R2+ project spaces. (1196163)
Virtual Desktops
It is not possible to change the hardware used for a virtual desktop after the virtual desktop has been created. (1133246)
When creating a Windows virtual desktop with applications selected for installation, and a Windows update is triggered on the newly created virtual machine, application installation may fail. (975861)
To avoid this issue, Ansys recommends that you create the virtual desktop without any applications, and then add applications after the virtual desktop has been created. See Adding Applications to a Virtual Machine in the User's Guide.
Tesla drivers do not support graphics visualization. If accelerated graphics are desired, use a GRID driver.
On a Linux virtual desktop, use GNOME Desktop with the GRID driver, as KDE does not support accelerated graphics. (1198261)
On Linux virtual desktops with a CentOS operating system, the Stop virtual desktop when no user has been active for timer does not work. The virtual desktop remains in the Running state even if no user has been active for the duration of the timer. (1222506)
When Ansys Gateway powered by AWS has been set up using the Express setup option, the hostname displayed in a virtual machine's details may not be the actual hostname of the machine in the AWS domain. This is important if you are referencing virtual desktops in your cluster configurations or trying to connect to a virtual machine from your local machine.
A hostname in the AWS domain begins with 'ip' and has a format similar to ip-10-0-12-34.<region>.compute.internal.
In some scenarios, the displayed hostname starts with 'ans'. This is not the actual hostname.
Be aware of the following:
For Linux virtual desktops, the actual hostname is not displayed in the machine's settings when the virtual machine is being created (the resource is requested, starting, or waiting for services). Once the machine goes into the Running state and the Connect button is displayed, the actual hostname becomes available in the resource details and remains displayed there from that point on.
When referencing Linux virtual desktops in your cluster configurations, Ansys recommends using the Private IP address instead of the Hostname value to avoid errors. If you do use the Hostname value, make sure that it is the actual hostname.
For Windows virtual desktops, the actual hostname is not displayed in the machine's settings at any time. When referencing Windows virtual desktops in your configurations, you must use the Private IP address instead of the hostname.
For Linux NFS file storage servers, the actual hostname is displayed in the same way that it is displayed for Linux virtual desktops. You can use hostnames when referencing file storage servers.
For Windows SMB file storage servers, the actual hostname is not displayed. You must use IP addresses when referencing Windows file storage servers.
For cluster nodes, the actual hostname is not displayed in the Node Details.
When referencing a cluster node, use the Private IP address instead.
When connecting to a Linux virtual machine from your local machine (using SSH), use the Public IP address (recommended) or FQDN value that is displayed when you click Connect in the resource tile. Using the private IP address or hostname will not work.
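The hostname caveats above can be illustrated with a small sketch (plain POSIX shell; the function name and sample values are hypothetical, not part of the product) that distinguishes an actual AWS-internal hostname from an 'ans'-prefixed display name:

```shell
# Hypothetical helper: decide whether a name shown in the portal is a real
# AWS-internal hostname (ip-...) or only a display name (ans...).
is_aws_hostname() {
  case "$1" in
    ip-*) echo "actual AWS hostname" ;;
    ans*) echo "display name - use the Private IP instead" ;;
    *)    echo "unknown format" ;;
  esac
}

is_aws_hostname "ip-10-0-12-34.eu-central-1.compute.internal"
is_aws_hostname "ans-desktop-01"
```

The same prefix test can be applied to the Hostname value shown in a resource's details before using it in a cluster configuration.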
When connecting to a Linux virtual machine using an SSH private key via PowerShell or the Windows command prompt, the connection may fail with an 'invalid format' error if the version of OpenSSH being used is greater than version 9.0. (1163234)
The OpenSSH version can be determined using ssh -V.
To work around this issue, Ansys recommends using an alternate SSH client such as MobaXterm.
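As a minimal sketch of the version check described above (POSIX shell; the sample strings stand in for real ssh -V output, whose exact format can vary by build, and the function name is hypothetical):

```shell
# Parse an OpenSSH version banner and report whether it is newer than 9.0,
# the range affected by the 'invalid format' private-key issue above.
openssh_affected() {
  ver=$(printf '%s\n' "$1" | sed -n 's/^OpenSSH_\([0-9][0-9]*\.[0-9][0-9]*\).*/\1/p')
  if awk -v v="$ver" 'BEGIN { exit !(v > 9.0) }'; then
    echo "version $ver: affected - use an alternate client such as MobaXterm"
  else
    echo "version $ver: not affected"
  fi
}

openssh_affected "OpenSSH_9.4p1, OpenSSL 3.0.2 15 Mar 2022"   # real check: ssh -V 2>&1
openssh_affected "OpenSSH_8.9p1 Ubuntu-3ubuntu0.6, OpenSSL 3.0.2"
```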
Windows virtual machines should have a minimum of 2 vCPUs and 8 GB RAM. To ensure quick startup and efficient operation, Ansys recommends a minimum of 4 vCPUs and 16 GB RAM. (870406)
When a virtual desktop is being created, applications may get stuck in the 'Installing' state if multiple applications were selected during setup. If this occurs (despite a sufficient disk size), Ansys recommends installing one application at a time. Delete the virtual machine and create a new one with only one application selected for installation. You can then add more applications to the virtual machine after it has been created. See Adding Applications to a Virtual Machine in the User's Guide. (910015)
Connecting to a virtual machine from your local machine via SSH may not work if you are connected to a company VPN, as your company's security policies may block outbound SSH communication. In this case, disconnect from the VPN before trying to SSH to the virtual machine.
When a virtual machine is created, initial configuration may trigger system reboots, and the machine may remain in the 'Installing' state for a while before a user can connect to it. (870406)
Occasionally, Linux virtual desktops may lag with no apparent cause. If disconnecting from and reconnecting to the virtual desktop does not solve the issue, try restarting the virtual desktop as follows:
Disconnect from the virtual desktop session.
In Ansys Gateway powered by AWS, click Stop in the virtual desktop tile.
When the virtual desktop is in the Stopped state, wait a few minutes to allow all background processes to stop.
Click Start to restart the virtual desktop.
Reconnect to the virtual desktop. (807886)
In some cases, virtual desktop creation may fail if there are not enough available cores in the selected AWS availability zone. When selecting a hardware type in the wizard, the Available cores value that is reported is the sum of available cores across all availability zones in the AWS region. For example, the Europe-Central region has various availability zones such as Europe-Central-1a, Europe-Central-1b, and so on. Even though you select a specific availability zone when setting up a virtual desktop, the Available cores value does not report the available cores in that specific zone. The Ansys Gateway powered by AWS development team is working on a fix but there are API limitations on the AWS side. To work around this issue, try selecting a different availability zone.
Sometimes, when you create a Linux virtual machine from an OS image, the virtual machine may get stuck in the 'Installing' state. Most likely, there is an issue with the OS image. To resolve this issue, delete the virtual machine and create a new one without using the faulty OS image. To prevent future occurrences of this issue, Ansys recommends that you delete the OS image. (727238)
When a virtual desktop is in the Stopped state, the Add application action is enabled on the Applications tab even though applications cannot be added to a virtual desktop in this state. The action enables you to select an application to install, but the application is not installed. (1047876)
Autoscaling Clusters
When Ansys Gateway powered by AWS is configured to allow public IP addresses to be issued for virtual machines, autoscaling cluster creation will fail if any defined queues specify instance types with multiple network interfaces. Public IP addresses can only be assigned to virtual machines with a single network interface. (1232283)
To work around this issue, you have two options:
- Disable the generation of public IP addresses. See Allowing or Preventing the Generation of Public IP Addresses for Virtual Machines in the Administration Guide.
- If keeping the generation of public IP addresses enabled, choose only instance types with single network interfaces when defining queues. For a list of instance types to avoid, see the Network cards topic in the AWS documentation.
When adding Ansys HPC Platform Services to an existing autoscaling cluster, you should not select the Create new storage option in the Storage where cluster applications are installed drop-down. If you select this option, a storage will not actually be created, and selecting this storage, or a storage that is not mounted to the cluster, will result in undesirable behavior. (1222448)
You must select a storage that is mounted to the cluster and that contains the simulation application(s) to be used with Ansys HPC Platform Services. Storages that are mounted to the cluster are listed in the Mounted storages section of the cluster details page.
HPC Clusters
When Ansys Gateway powered by AWS is configured to allow public IP addresses to be issued for virtual machines, connecting to an HPC cluster from your local machine using a downloaded SSH private key does not work. (12223846)
To work around this issue, connect to the HPC cluster from a virtual desktop in the same project space.
When you create a cluster for the first time, any shared storage drives that are assigned to the project space at that time are unexpectedly mounted on the cluster nodes. An attempt will also be made to mount the drives on nodes of any subsequent clusters that you create. To avoid this issue, make sure that the first cluster you create does not have a mounted storage drive, as this impacts the image that is created and used for cluster creation. (714148)
When a file storage server has multiple NFS shared folders, only the first folder gets mounted on a cluster. Mounting multiple NFS shared folders on a cluster is not supported. (884588)
When creating an HPC cluster for which an OS image already exists (that is, a cluster with the same application package was created previously), the application installed will use the license server information specified in the image file, regardless of what license server information you specify in the cluster's application settings. If license server information was not specified when the image was originally created, or you want to specify a different license server for the cluster that you are creating, you must first delete the existing image. See Deleting an OS Image in the Administration Guide.
When creating a cluster from an existing image, and there are not enough nodes available to meet the number of nodes requested, Ansys Gateway powered by AWS will attempt to create the cluster with the reduced number of nodes available. Initially, as the cluster is spun up, an "AWS provisioning issue" message is displayed.
Once the available nodes have been provisioned, the cluster goes into the Running state and the "AWS provisioning issue" message remains displayed. You can click Details in the message to see how many nodes were provisioned. You can try resizing the cluster to the size you originally wanted. See Resizing an HPC Cluster on Demand in the User's Guide. (724987)
When you attempt to resize a cluster, the cluster appears to be updating even when no additional nodes are available to meet the request.
When you attempt to resize a cluster, the original number of nodes requested is displayed instead of the number of nodes requested for the resize.
Occasionally, Ansys Gateway powered by AWS may fail to create a valid Slurm cluster. The sinfo command may report that the state of one or more nodes is 'unk*' (unknown), which indicates that the node cannot be reached. This usually occurs if the Slurm controller was previously connected to another cluster (see the next limitation). To fix the issue of nodes being unreachable, restart the cluster.
To connect the cluster to the controller, follow the steps in Workaround for Reusing a Slurm Controller in the Recommended Usage Guide. (695491)
When a Slurm cluster is deleted and subsequently recreated on the same controller, the old cluster nodes may not get cleaned up right away. In this case, the old nodes remain listed in the slurm.conf file, and the sinfo command reports the nodes with a state of 'unk*' (unknown). Before creating a new cluster, always wait two to three minutes after stopping or deleting the original cluster to allow it to be removed from the Slurm controller.
If you are currently experiencing this issue, you can resolve it by following the steps in Workaround for Reusing a Slurm Controller in the Recommended Usage Guide. (695496)
Cluster nodes may not be fully ready even though the Overall state reported for the nodes is 'Ready' in the node details of the HPC cluster tile. In this case, jobs submitted to the cluster will fail. Only the controller used for job scheduling knows if all nodes are ready to perform work. Before submitting a job, always check the state of nodes from the cluster controller. To view the state of nodes in a Slurm cluster, for example, use the sinfo command on the Slurm controller virtual machine. Slurm nodes are ready when their state is shown as 'idle'.
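The readiness check above can be scripted. A sketch (POSIX shell; the function name is hypothetical, and the input string stands in for real output of sinfo -h -o '%T' run on the Slurm controller):

```shell
# Report whether every node state line is 'idle' (the ready state noted above).
# Input stands in for:  sinfo -h -o '%T'   run on the Slurm controller.
all_nodes_idle() {
  if printf '%s\n' "$1" | grep -qv '^idle$'; then
    echo "not ready"
  else
    echo "ready"
  fi
}

all_nodes_idle 'idle'
all_nodes_idle "$(printf 'idle\nunk*')"
```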
Submitting jobs to a Slurm cluster with EFA may result in EFA errors. For example:
libfabric:68531:1697444289:efa:mr:efa_mr_reg_impl():370<warn> Unable to register MR: Unknown error -12
libfabric:68531:1697444289:efa:cq:rxr_ep_grow_rx_pkt_pools():1518<warn> cannot allocate memory for EFA's RX packet pool. error: Cannot allocate memory
libfabric:68531:1697444289:efa:eq:efa_eq_write_error():973<warn> Writing error Unknown error -12 to EQ.
libfabric:68531:1697444289:efa:eq:efa_eq_write_error():989<warn> Unable to write to EQ: Missing or unavailable event queue. err: Unknown error -12 (-12) prov_errno: Unknown error -12 (-12)
If this occurs, add the --propagate=NONE option when using the srun and sbatch commands.
Note that this issue only occurs with existing clusters and controllers that were created with previous versions of Slurm applications. If you create a new cluster setup with the latest versions of Slurm applications, you will not need to set the --propagate option. (874384)
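For illustration, a wrapper that always adds the option when submitting to an affected cluster (POSIX shell; the function and script names are hypothetical, and the command is echoed rather than executed since this is a sketch):

```shell
# Sketch: always pass --propagate=NONE to sbatch on clusters created with
# older Slurm application versions (see the EFA issue above).
submit_job() {
  echo "sbatch --propagate=NONE $*"   # echoed for illustration; drop 'echo' to run
}

submit_job run_fluent.sh
```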
In some cases, connecting to an HPC cluster may fail. Connection to clusters is possible only from a Windows or Linux virtual machine in Ansys Gateway powered by AWS which is in the same region as the cluster. All resources involved in an HPC workflow (including the file storage server if used) must be in the Running state. Some application workflows require that you share a working directory on the job submission virtual machine with the nodes in the cluster. For more information, refer to the application-specific workflows in Recommended Configurations by Application in the Recommended Usage Guide.
When a cluster is starting and you make a request to delete it, only the provisioned nodes are deleted. The remaining requested nodes are still created, but they are not visible in the customer portal. If you need to delete the cluster before it is fully provisioned, make sure to delete all the nodes you requested from the AWS console.
Storage
In 24R2+ project spaces, the Mount folder action on the Storage page in the project space settings should not be available and should not be used. This action allows you to mount shared folders from file storage servers in 24R1- project spaces. This is not supported in 24R2+ project spaces. Shared folders mounted to a 24R2+ project space will not be mounted on virtual desktops in that project space.
If a project space contains two NFS file storage servers, and each has a shared folder with the same name, only the shared folder on the file storage server that was created first will get mounted on resources in that project space. To avoid this issue, ensure that the names of shared folders on one file storage server are different than those on another file storage server in the project space. (873526)
Hardware
When selecting hardware for a resource, instance types that are not supported in the specified Availability Zone can be selected, resulting in resource creation failure. (1195704)
p3, p4, and p5 instances are not supported for Linux virtual desktops or clusters. (949142, 971642)
Metal instance sizes (for example, c6i.metal) are not supported and are not recommended.
Ansys Electronics Desktop
When an Icepak simulation that specifies the use of more than one compute node is submitted to an Ansys Electronics Desktop 2024 R2 autoscaling cluster, Icepak fails to run. (1214623)
In virtual desktop sessions, Ansys Electronics Desktop can only be run on Windows-based virtual desktops.
When creating and connecting to an Ansys Electronics Desktop cluster, the 'Setting up Cluster' and command prompt windows remain displayed even after the Submit Job dialog appears. Do not close these windows as this will also close the Submit Job dialog. Instead, minimize the windows or move them to another area of the screen.
In cluster workflows, Ansys Electronics Desktop is not intended to be run graphically on a Linux machine in Ansys Gateway powered by AWS. To run AEDT graphically in Ansys Gateway powered by AWS, use a Windows virtual desktop.
If Ansys Electronics Desktop is going to be installed along with other applications intended to be used graphically on the same VM (through RDP or VNC), then the packages should be installed in this order with the XRDP and Gnome packages included:
- Ansys Electronics Desktop
- XRDP for GNOME
- Gnome Desktop (latest version)
- Other application(s) that use RDP or VNC
If a Linux interface is required for a workflow that involves both Ansys Electronics Desktop (AEDT) and optiSLang, use the optiSLang interface instead of running AEDT graphically.
Ansys Fluids
When using Open MPI for jobs submitted to a Fluids autoscaling cluster, Fluent cannot use the Elastic Fabric Adapter (EFA). The job will run, but not at EFA speeds.
When connected to a Windows virtual desktop with Ansys Fluids 2024 R2 installed, the Ansys Dynamic Reporting (ADR) template editor fails to start. (1185795)
Using Fluent for graphics visualization on a Linux virtual desktop with a Tesla driver and GNOME Desktop may result in display issues. (1198261)
Tesla drivers should not be used for graphics visualization. If accelerated graphics are desired, use a GRID driver.
On a Linux virtual desktop, use GNOME Desktop with the GRID driver, as KDE does not support accelerated graphics.
Ansys Gateway powered by AWS does not support Slurm accounting. As a result:
- When submitting a Fluent job to a Slurm cluster using the Fluent Launcher, a "WARNING: SLURM account not provided, using default" message is displayed in the console. This warning can be ignored.
- Upon completion of a CFX solution, there is a delay before the Solution Complete dialog appears. This is due to CFX running a Slurm accounting check (that eventually times out).
Fluent 2023 R2 and 2023 R1 meshing jobs must be submitted to a Slurm cluster using the command line or the Fluent interface. If a Slurm script is used, a CAD import error may occur. (732840)
Workaround: This error is caused by a tcsh dependency. To resolve it, install tcsh on all cluster nodes as root (yum install tcsh).
Sometimes, when using Ansys Fluent to interact with a Fluids Slurm cluster, the Fluent interface may get stuck. To resolve this issue, try restarting the Slurm controller and cluster. (851798)
Launching Fluent Aero or Fluent Icing from the Fluent Launcher in Ansys Gateway powered by AWS results in a blank viewer. Although Fluent versions 2023 R2 and later are set to use OpenGL for graphics display, this setting is not emulated in Aero or Icing, resulting in display issues. (917420)
To work around this issue, see Running Fluent Aero and Fluent Icing (2023 R2 and Later) in the Recommended Usage Guide.
Some simulation reports may not be generated properly in Ansys Fluent. To generate reports properly, the virtual machine must have a GPU with a GRID driver. (634828/974826)
Running Ansys Fluent with the X11 graphics driver on a Linux virtual desktop with KDE Desktop Environment is not supported. The graphics window cannot be properly rendered. (725917)
When running EnSight on a Linux virtual desktop, transferring files from an external shared drive on an Isilon storage system may be problematic depending on the data format. To avoid or work around this issue, transfer files directly to the virtual desktop and/or an AWS file storage server with shared NFS storage.
Running EnSight 2023 R1 or earlier in a Slurm HPC cluster is not fully supported. However, it may work if you install ksh on all Slurm cluster nodes.
Ansys Granta
When creating a virtual desktop with Granta MI Pro, the Granta MI Pro application will not install if there are any pending system restarts. You must create a virtual desktop without adding applications, restart the virtual desktop, and then add the Granta MI Pro application.
Ansys HPC Platform Services
Unmounting a storage from a project space unmounts the storage from virtual machines where Ansys HPC Platform Services is installed. As a result, any jobs submitted to Ansys HPC Platform Services will fail to get evaluated unless the storage is remounted. (1245890)
Ansys Mechanical/Workbench
When a Mechanical project is located on a shared OpenZFS storage in a project space, the project cannot be opened when using a virtual desktop with Ansys Mechanical Enterprise 2024 R2 Service Pack 3. Attempting to open the project results in the error 'Mechanical failed to open the database: Unable to open file <pathTofile> Unable to get model'. (1213199)
Workaround: Copy the project files from the shared OpenZFS storage to the local machine and open the project from there.
When a Workbench project requiring a geometry update in Discovery is submitted to a Mechanical autoscaling cluster, the job may fail. (1219948)
Likely cause: The Ansys Product Improvement Program may be preventing Discovery from running.
Workaround: Open Discovery on any machine and close the Ansys Product Improvement Program dialog. This only needs to be done one time.
Multi-node linked analyses submitted to an autoscaling cluster do not complete successfully when the cluster is configured to use local scratch (enabled by default for Mechanical APDL clusters) and compute nodes have a local scratch drive. (1214529)
To avoid or resolve this issue you can use any of the following workarounds:
- Select an instance that does not have a local scratch drive (a network drive will be used instead)
- Set Local Scratch to false for the Ansys Mechanical APDL application in the cluster properties (using Ansys HPC Manager)
- Solve on one node only
- Solve each analysis separately (solve the upstream analysis first, then after it finishes, solve the downstream analyses, one at a time)
Port 443 is not available on Windows virtual desktops, which may result in a port conflict when attempting to run DCS for Design Points. To work around this issue, you can configure DCS to use a different port. For instructions, see Design point update results in an error. (588775)
In clusters with AMD processors, the Mechanical APDL 2023 R2 solver can show instability at higher core counts.
Ansys ModelCenter
ModelCenter Remote Execution is not supported on Linux. Support will be added in a future release. (812043)
Ansys Speos
Submitting a job from a Slurm Controller machine to a Speos HPC cluster with multiple GPU instances will fail if Intel MPI 2021.6 is being used. In this scenario you must use Intel MPI 2018.3.222. (866096)
Tags
Creating tags with any of the following names will cause resources to enter a 'Failed' state upon creation: BackendIdentifier, Name, ProjectSpaceId, Source, TenantId, UserId, VMid. These tag names are reserved for system use, so creating tags with them results in a 'Duplicate key tag specified' error.
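A simple pre-check against the reserved names can avoid failed resource creation (POSIX shell; the function name and example tags are hypothetical):

```shell
# Reject the reserved system tag names listed above before creating a resource.
RESERVED="BackendIdentifier Name ProjectSpaceId Source TenantId UserId VMid"
check_tag_name() {
  for r in $RESERVED; do
    if [ "$1" = "$r" ]; then
      echo "reserved: $1"
      return 0
    fi
  done
  echo "ok: $1"
}

check_tag_name CostCenter
check_tag_name TenantId
```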
General Usability
Internet Explorer is not supported.
Occasionally, the following error may appear while you are working in Ansys Gateway powered by AWS:
Click OK to reload the page. In some cases, you may be prompted to sign in again. (912801)