Introduction
Assumptions
Planning your Deployment
Overview of Ansys HPC Platform Services
Key Concepts
Cloud-Native Services
Agent-Based Architecture
Autoscaling
Authentication and Authorization
Job Submission Impersonation in Traditional HPC Environments
Overview of Services
Ansys HPC Platform Core Services
Workers
Evaluators
Autoscalers
Process Launcher
Role of the Process Launcher
How the Process Launcher Works
Deploying the Core Services
Overview of Core Services Deployment
Deploying Core Services with Docker Compose
Hardware Requirements
Deploying Core Services on a Machine with Internet Access Using Docker Compose
Deploying Core Services on a Machine Without Internet Access Using Docker Compose
Deploying Core Services to Kubernetes Using Helm
Deploying Core Services Using Helm Charts
Updating the Helm Chart Installation
Uninstalling the Helm Chart
Creating a Pull Secret
Getting a List of Pods
Configuring Helm Parameter Values
Deploying Core Services to Windows Subsystem for Linux (WSL)
Step 1: Enable WSL 2 and Virtual Machine Platform (Explicitly)
Step 2: Set WSL2 as the Default Version
Step 3: Install Linux on the Windows Machine
Step 4: Start Linux
Step 5: Ensure that systemd is Running
Step 6: Install Docker CE
Step 7: Install Ansys HPC Platform Core Services
Deploying Certificates to Traefik
Using Auto-Generated Certificates
Using Third-Party Certificates
Configuring Traefik to Look for Certificates with a Different Name
Accessing Ansys HPC Platform Services
Next Steps after Deploying Core Services
Deploying Static Evaluators
Setting Up a Static Evaluator
Automatic Detection of Applications on an Evaluator
Manually Adding Applications to an Evaluator's Configuration
Stopping an Evaluator
Deploying Autoscaling Clusters
Supported Job Schedulers and Orchestrators
Deploying the Autoscaling Service
Install the Autoscaling Service
Configure the Autoscaling Service
Start the Autoscaling Service
User Impersonation in HPC Workflows
Configuring the Autoscaling Service with Impersonation
Configuring the Autoscaling Service Without Impersonation
Editing Autoscaling Cluster Properties
Editing Autoscaler Properties via the Autoscaler Configuration File
Editing Autoscaler Properties via the Ansys HPC Manager Web App
Autoscaling Cluster Properties
Automatic Detection of Applications in an Autoscaling Cluster
Manually Adding Applications to the Autoscaling Cluster Configuration
Adding Applications via the Autoscaling Configuration File
Adding Applications via the Ansys HPC Manager
Setting Up Evaluators for an Autoscaling Cluster
Setting Up Evaluators for a Traditional HPC Environment
Setting Up Evaluators for a Kubernetes Environment
Deploying the Process Launcher
Deploying the Process Launcher Service
Install the Process Launcher Package
Configure the Process Launcher Service
Start the Process Launcher Service
Viewing the Process Launcher Service Log
Authentication and Authorization
Accessing the Keycloak Admin Console
Generating a New Client Secret
Authentication when Accessing the Ansys HPC Manager
Using the Default Credentials for Sign-In
Changing the Default Credentials
Adding Users to the System
Integrating your Company’s IAM System
Deployment in Ansys Access on Microsoft Azure
Deployment in Ansys Gateway powered by AWS
Autoscaling Strategies Explained
Autoscaling with Traditional Job Schedulers
Autoscaling with KEDA
Frequently Asked Questions
What are the different core services in Ansys HPC Platform Services, and what does each one do?
What is an 'Evaluator' in Ansys HPC Platform Services?
What is a 'Scaler' or 'Autoscaler' in Ansys HPC Platform Services?
What is the 'Process Launcher' service and why is it necessary, especially for on-premises HPC?
When I run the Docker Compose deployment directly from a shared file system like NFS or Lustre, some containers fail with 'permission denied' errors, especially when accessing mapped configuration files or scripts. What causes this?
How can I deploy Ansys HPC Platform Services on an air-gapped node?
I have defined my desired working directory path in the FILE_STORAGE variable within the .env file, but the HPC containers using the file_storage volume don't seem to write data there. Where is the data going, and how do I fix it?
How can I configure the Data Transfer Service container to use a specific network share or host directory for its project file staging area instead of using its default location?
When transferring many or large jobs using Ansys HPC Manager, my host's /var partition fills up. Why does this happen and what are the solutions?