- 1. Introduction
- 2. Choosing a Virtual Machine
- 3. NVIDIA GPU Driver Support
- 4. Application Support
- 5. Recommended Hardware for VDI Workflows
- 6. Recommended Hardware for HPC Workflows
- 7. Requirements and Best Practices for Slurm Autoscaling Clusters
- 8. Requirements and Best Practices for Slurm HPC Clusters
- 9. Recommended Configurations by Application
- 9.1. Ansys Discovery Virtual Desktop
- 9.2. Ansys Electronics Desktop Configurations
- 9.2.1. AEDT Virtual Desktop
- 9.2.2. AEDT Autoscaling Cluster Workflow
- 9.2.3. AEDT HPC Cluster (2024 R1 or 2023 R2)
- 9.2.3.1. Step 1: Create Shared NFS Storage with the AEDT Installation
- 9.2.3.2. Step 2: Create an AEDT Cluster
- 9.2.3.3. Step 3: Connect to the AEDT Cluster from an AEDT Virtual Desktop
- 9.2.3.4. Step 4: Submit a Job to the AEDT Cluster
- 9.2.3.5. Step 5: Monitor the Job
- 9.2.3.6. Step 6: Retrieve Output Files
- 9.2.3.7. Limitations
- 9.2.4. AEDT HPC Cluster (2023 R1)
- 9.3. Ansys EMA3D Virtual Desktop
- 9.4. Ansys Fluids Configurations
- 9.4.1. Virtual Desktop with Ansys Fluids
- 9.4.2. Ansys Fluids Autoscaling Cluster
- 9.4.2.1. Step 1: Create an Ansys Fluids Autoscaling Cluster
- 9.4.2.2. Step 2: Create an Ansys Fluids Virtual Desktop for Job Submission
- 9.4.2.3. Step 3: Copy Input Files to the Shared Storage on the Virtual Desktop
- 9.4.2.4. Step 4: Submit a Job to the Ansys Fluids Autoscaling Cluster
- 9.4.3. Ansys Fluids HPC Cluster (2024 R1 or Earlier)
- 9.4.3.1. Creating Resources for a Fluids HPC Cluster Workflow
- 9.4.3.2. Important Notes about the Fluids HPC Cluster Workflow
- 9.4.3.3. Transferring Files to/from the Slurm Controller Virtual Desktop
- 9.4.3.4. Submitting Jobs to a Fluids HPC Cluster
- 9.4.3.5. Resizing the HPC Cluster
- 9.4.3.6. Optional: Connecting to the Slurm Controller Machine via SSH
- 9.5. Ansys Granta MI Pro Configuration
- 9.5.1. Deployment Overview
- 9.5.2. Steps for Configuring an Ansys Granta MI Pro Virtual Desktop
- 9.5.2.1. Step 1: Create a New Virtual Desktop
- 9.5.2.2. Step 2: Restart the Virtual Desktop
- 9.5.2.3. Step 3: Install Prerequisite Software for Ansys Material Calibration
- 9.5.2.4. Step 4: Add the Granta MI Pro Application
- 9.5.2.5. Step 5: Verify the Granta MI Pro Installation
- 9.5.2.6. Step 6: Add MI Pro System Users
- 9.5.2.7. Step 7: Install the Granta MI Materials Gateway Client Software
- 9.6. Ansys Granta Selector Virtual Desktop
- 9.7. Ansys LS-DYNA Configurations
- 9.8. Ansys Lumerical Configurations
- 9.9. Ansys Mechanical Configurations
- 9.9.1. Windows Virtual Desktop with Ansys Mechanical Enterprise
- 9.9.2. Ansys Mechanical Autoscaling Cluster Workflow
- 9.9.2.1. Step 1: Create an Ansys Structures Autoscaling Cluster
- 9.9.2.2. (Optional) Step 2: Create a Virtual Desktop for Mechanical Job Submission
- 9.9.2.3. Step 3: Submit a Job to the Ansys Structures Autoscaling Cluster
- 9.9.2.3.1. Specify a Default Queue for Mechanical Jobs
- 9.9.2.3.2. Submit a Mechanical Job to HPC Platform Services from Mechanical
- 9.9.2.3.3. Submit a Mechanical Job to HPC Platform Services from Workbench
- 9.9.2.3.4. Submit a Mechanical Job to HPC Platform Services Using the HPC Manager Web App
- 9.9.2.3.5. Submit a Mechanical Job to HPC Platform Services Using Python
- 9.9.3. Mechanical Ansys RSM Cluster (2024 R1 or Earlier)
- 9.9.4. DCS Cluster for Workbench Mechanical (2024 R1 or Earlier)
- 9.9.4.1. Step 1: Create a Virtual Desktop with the DCS Design Point Service
- 9.9.4.2. (Optional) Step 2: Create a Virtual Machine with Shared NFS Storage
- 9.9.4.3. Step 3: Create a DCS Cluster/Evaluator
- 9.9.4.4. Step 4: Use the DCS for Design Points Service
- 9.9.4.5. Automatic Reconfiguration of DCS Evaluators
- 9.10. Ansys Medini Virtual Desktop
- 9.11. Ansys ModelCenter Virtual Desktop Configurations
- 9.12. Ansys Motion Virtual Desktop
- 9.13. Ansys Motor-CAD Virtual Desktop
- 9.14. Ansys nCode Virtual Desktop
- 9.15. Ansys optiSLang Configurations
- 9.15.1. Connector Support in optiSLang Workflows
- 9.15.2. Ansys optiSLang Virtual Desktop
- 9.15.3. Ansys optiSLang Workflow with an Ansys Mechanical DCS Cluster (2024 R1 or Earlier)
- 9.16. Ansys PathFinder-SC Autoscaling Cluster Workflow
- 9.17. Ansys RaptorH/RaptorX/VeloceRF Virtual Desktop
- 9.18. Ansys RedHawk-SC Configurations
- 9.18.1. Ansys RedHawk-SC Autoscaling Cluster Workflow
- 9.18.2. Ansys RedHawk-SC HPC Cluster Workflow (2024 R1 or Earlier)
- 9.18.2.1. General Guidelines for RedHawk-SC Workflows
- 9.18.2.2. Virtual Machine with Shared NFS Storage
- 9.18.2.3. RedHawk-SC HPC Cluster
- 9.18.2.3.1. Step 1: Create a Virtual Desktop with Slurm Controller and Ansys Semiconductor CentOS 7
- 9.18.2.3.2. Step 2: Create the Slurm Cluster for RedHawk-SC Workers
- 9.18.2.3.3. Step 3: Prepare a Virtual Desktop for Simulation Work
- 9.18.2.3.4. Transferring Files to/from the Slurm Controller Virtual Desktop
- 9.18.2.3.5. Resizing the Cluster
- 9.18.2.3.6. Optional: Connecting to the Slurm Controller Machine via SSH
- 9.18.2.4. Virtual Desktop Slurm Node with Ansys RedHawk-SC
- 9.19. Ansys Rocky Virtual Desktop
- 9.20. Ansys SCADE Virtual Desktop
- 9.21. Ansys Speos Configurations
- 9.22. Ansys Totem-SC Configurations
- 9.22.1. Ansys Totem-SC Autoscaling Cluster Workflow
- 9.22.2. Ansys Totem-SC HPC Cluster Workflow (2024 R1 or Earlier)
- 9.22.2.1. General Guidelines for Ansys Totem-SC Configurations
- 9.22.2.2. Virtual Machine with Shared NFS Storage
- 9.22.2.3. Totem-SC HPC Cluster
- 9.22.2.3.1. Step 1: Create a Virtual Desktop with Slurm Controller and Ansys Semiconductor CentOS 7
- 9.22.2.3.2. Step 2: Create the Slurm Cluster
- 9.22.2.3.3. Step 3: Prepare a Virtual Desktop for Simulation Work
- 9.22.2.3.4. Transferring Files to/from the Slurm Controller Virtual Desktop
- 9.22.2.3.5. Resizing the Cluster
- 9.22.2.4. Virtual Desktop Slurm Node with Ansys Totem-SC
- 9.23. Ansys Workbench LS-DYNA Configurations
- 9.23.1. Ansys Workbench LS-DYNA Autoscaling Cluster Workflow via HPC Platform Services
- 9.24. Ansys Zemax OpticStudio Virtual Desktop
- 10. Ansys HPC Platform Services
- 10.1. Overview of Ansys HPC Platform Services
- 10.2. Integrating Ansys HPC Platform Services with Autoscaling Clusters
- 10.3. Do I Need a Docker Account to Install HPC Platform Services?
- 10.4. Changing the Default Credentials for Accessing Ansys HPC Manager
- 10.5. Specifying a Default Queue for an Autoscaling Cluster (HPS Workflows)
- 10.6. Specifying Default Queues for Cluster Applications (HPS Workflows)
- 10.7. Submitting Jobs to Ansys HPC Platform Services
- 10.8. Launching the Ansys HPC Manager Web App
- 10.9. Selecting a Queue When Submitting a Job to Ansys HPC Platform Services
- 10.10. Downloading Result Files from Ansys HPC Manager
- 10.11. Installing PyHPS for Job Submission to Ansys HPC Platform Services Using Python
- 11. Best Practices for Managing Costs