2.2.4. Partitioner Tab

Use the Partitioner tab to configure mesh partitioning options for parallel runs. If this tab is not visible, select Show Advanced Controls on the Run Definition tab.


Note:  Once started, the run progresses through partitioning and then into the solution of the CFD problem. Extra information is stored in the CFX-Solver Output file for a parallel run. For details, see Partitioning Information in the CFX-Solver Manager User's Guide.


You can select a partition file (*.par) to load by setting Partition Option to Partition File, then clicking Browse beside Initial Partition File. The partition file is only available if a model has already been partitioned. The number of partitions in the partition file must be the same as the number of processes (Processes) specified on the Run Definition tab.


Note:  An existing partition file cannot be used if the simulation involves either the Monte Carlo or Discrete Transfer radiation model. Partitions may be viewed prior to running CFX-Solver. For details, see CFX Partition File in the CFX-Solver Manager User's Guide.


Run Priority may be set to Idle, Low, Standard or High (Standard is selected by default). For a discussion of these priorities, see The cfx5control Application in the CFX-Solver Manager User's Guide.

2.2.4.1. Executable Settings

You can override the precision set on the Run Definition tab by selecting Override Default Precision and then setting the precision. For details on the precision of executables, see Double-Precision Executables.

You can override the problem size capability ("large problem" or not) set on the Run Definition tab by selecting Override Default Large Problem Setting and then setting the problem size capability (via the Large Problem check box). For details on the problem size capability of the executables, see Large Problem Executables.

2.2.4.2. Partitioning Detail

Under Partitioning Detail, you can specify various partition method options.

2.2.4.2.2. Partition Weighting

Configure how partitions are weighted between machines. Available Partition Weighting selections include:

  • Automatic (default): Calculates partition sizes based on the Relative Speed entry specified for each machine in the hostinfo.ccl file. Machines with a higher relative speed are assigned proportionally larger partitions.


    Note:  Relative speed values are usually entered during the CFX installation process. Setting accurate relative speed values helps optimize parallel performance.


  • Uniform: Assigns equal-sized partitions to each process.


    Note:  Both Uniform and Automatic give the same results for local parallel runs; it is only for distributed runs that they differ.


  • Specified: Assigns custom-sized partitions to each process. This option requires that partition weights be specified on the Run Definition tab, in the Partition Weights column of the table under Parallel Environment.


    Note:  When more than one partition is assigned to a machine, the number of partition weight entries for that machine must equal its number of partitions, entered as a comma-separated list. Consider the following sample distributed run:

    Host     # of Partitions    Partition Weights
    Sys01    1                  2
    Sys02    2                  2, 1.5
    Sys03    1                  1

    Sys01 has a single partition with weight 2. Sys02 has two partitions and they are individually weighted at 2 and 1.5. Sys03 has a single partition with weight 1.

    If partition weight factors are used, the partition sizes are set in proportion to the weights assigned to the partitions, as illustrated in the sketch following this list.

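The following sketch (Python, illustrative only; not the CFX partitioner itself) shows how partition sizes scale in proportion to the weights. For Automatic weighting, the weights would be the Relative Speed entries from hostinfo.ccl; for Specified weighting, they are the values entered in the Partition Weights column. The node total and the helper function name are invented for the example.

    # Distribute total_nodes among partitions in proportion to the weights.
    def partition_sizes(total_nodes, weights):
        total_weight = sum(weights)
        sizes = [int(total_nodes * w / total_weight) for w in weights]
        # Give any nodes lost to integer rounding to the largest partition.
        sizes[sizes.index(max(sizes))] += total_nodes - sum(sizes)
        return sizes

    # Weights from the sample run above: Sys01 -> 2, Sys02 -> 2 and 1.5, Sys03 -> 1
    print(partition_sizes(1_000_000, [2.0, 2.0, 1.5, 1.0]))
    # [307693, 307692, 230769, 153846], approximately in the ratio 2 : 2 : 1.5 : 1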

2.2.4.2.3. Multidomain Option

Configure how domains are partitioned. Available Multidomain Option selections include:

  • Automatic (default): If the case does not involve particle transport, this is the same as the Coupled Partitioning option; otherwise it is the same as the Independent Partitioning option.

  • Independent Partitioning: Each domain is partitioned independently.

  • Coupled Partitioning: All connected domains are partitioned together, provided they are the same type (that is, solid domains are still partitioned separately from fluid/porous domains). For details, see Optimizing Mesh Partitioning in the CFX-Solver Modeling Guide.


    Note:  Coupled partitioning is often more scalable, more robust, and less memory-intensive than independent partitioning because fewer partition boundaries are created. However, coupled partitioning may worsen the performance of particle transport calculations.


2.2.4.2.4. Multipass Partitioning

When Coupled Partitioning is activated, you can further choose to set the Multipass Partitioning option. The Transient Rotor Stator option is relevant only for simulations having transient rotor stator interfaces. It uses a special multipass algorithm to further optimize the partition boundaries. This approach generates circumferentially-banded partitions adjacent to each transient rotor stator interface, which ensures that interface nodes remain in the same partition as the two domains slide relative to each other. Away from the interface, the partitioning is handled using whichever method is specified for the Partition Type.

2.2.4.3. Partition Smoothing

Partition smoothing attempts to minimize the surface area of the partition boundaries by swapping vertices between partitions; this reduces communication overhead and improves solver robustness. Smoothing is enabled by default, but can be disabled by changing Partition Smoothing > Option.

If smoothing is enabled, the algorithm will, by default, perform a maximum of 100 sweeps. The maximum number of smoothing sweeps can be specified by changing the value of Partition Smoothing > Max. Smooth. Sweeps. The smoothing algorithm will stop before this value is reached if it finds no improvement between successive sweeps.
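
The sketch below is a conceptual illustration of this sweep-and-stop behavior, not CFX's actual smoothing algorithm: it swaps boundary vertices of a small partitioned graph whenever a swap reduces the number of cut edges, performs at most max_sweeps passes, and stops early once a sweep brings no improvement. The graph data and function names are invented for the example.

    from itertools import combinations

    def cut_edges(adjacency, part):
        # Count edges whose two vertices lie in different partitions.
        return sum(1 for u, nbrs in adjacency.items()
                   for v in nbrs if u < v and part[u] != part[v])

    def boundary_vertices(adjacency, part):
        # Vertices with at least one neighbour in another partition.
        return [u for u, nbrs in adjacency.items()
                if any(part[v] != part[u] for v in nbrs)]

    def smooth_partitions(adjacency, part, max_sweeps=100):
        best = cut_edges(adjacency, part)
        for _ in range(max_sweeps):
            improved = False
            for u, v in combinations(boundary_vertices(adjacency, part), 2):
                if part[u] == part[v]:
                    continue
                part[u], part[v] = part[v], part[u]      # trial swap across the boundary
                new_cut = cut_edges(adjacency, part)
                if new_cut < best:
                    best, improved = new_cut, True       # keep the swap
                else:
                    part[u], part[v] = part[v], part[u]  # revert the swap
            if not improved:                             # no gain in this sweep:
                break                                    # stop before max_sweeps
        return part

    # A 2 x 3 grid of vertices with a deliberately poor (checkerboard) split:
    adjacency = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
                 3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
    part = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
    smooth_partitions(adjacency, part)
    print(cut_edges(adjacency, part))   # fewer cut edges than the initial 7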

For further details on partition smoothing, see Optimizing Mesh Partitioning in the CFX-Solver Modeling Guide.

2.2.4.4. Partition Node Weighting

The partitioning in CFX is node based: the partitioner distributes the nodes among the partitions. By default, this distribution does not account for the computational cost per node, which varies with mesh element type (for example, tetrahedral or hexahedral). As a result, two partitions with the same number of nodes can have different computational loads if they contain different proportions of tetrahedral (more expensive) and hexahedral (less expensive) elements, and such load imbalances can greatly reduce parallel performance. To avoid or reduce these imbalances, especially for meshes with mixed element types, you can apply a Partition Node Weighting option to influence the node counts among the partitions (a sketch of the weighting arithmetic follows the note below):

  • Uniform (default): No node weighting is applied. Each node is treated equally in the partitioner.

  • Element Connectivity: The partitioner tries to determine a node weighting by counting the number of connections for each node.

  • Element Type: The partitioner uses a fixed weighting for a node depending on the type of element associated with the node. The following default values are used for the four possible element types:

    • Hexahedral Weighting Factor = 1.0

    • Tetrahedral Weighting Factor = 2.5

    • Prismatic Weighting Factor = 1.5

    • Pyramidal Weighting Factor = 1.5


Note:
  • Although partition node weighting is expected to improve the partition load balancing, it is not guaranteed to reduce the overall computational time.

  • The partitioning can have an impact on the linear solver; therefore the number of linear solver sweeps, as well as the convergence history, can change.

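As a small illustration of the load-balancing issue and the Element Type weighting, the Python sketch below (illustrative only; the per-node weight assignment for nodes shared by mixed element types is assumed, and the mesh data is invented) sums the default element-type weights over two partitions that have equal node counts but different element mixes.

    # Default Element Type weighting factors listed above.
    ELEMENT_WEIGHTS = {"hex": 1.0, "tet": 2.5, "prism": 1.5, "pyramid": 1.5}

    def weighted_load(node_element_types):
        # Sum the element-type weights over the nodes of a partition.
        return sum(ELEMENT_WEIGHTS[e] for e in node_element_types)

    # Two partitions with the same node count but different element mixes:
    partition_a = ["hex"] * 8000 + ["tet"] * 2000   # mostly hexahedral nodes
    partition_b = ["hex"] * 2000 + ["tet"] * 8000   # mostly tetrahedral nodes

    print(weighted_load(partition_a))   # 13000.0 -> lighter computational load
    print(weighted_load(partition_b))   # 22000.0 -> heavier load despite equal node count

With Element Type weighting, the partitioner can balance these weighted sums rather than the raw node counts.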

2.2.4.5. Pre-coarsening Control

This option automatically creates a coarse version of the mesh, which is then partitioned into the desired number of partitions. The coarsening process is then reversed to give the partitioning of the original mesh. This additional pre-coarsening step uses the same AMG technology as the solver, and allows the partitioning to be more 'aware' of the solution process. The method can be used to prevent partition boundaries from passing through areas of high aspect ratio cells, and can resolve the convergence difficulties that such boundaries can cause, particularly with diffusion-only type equations. A conceptual sketch of this coarsen, partition, and project-back workflow is given at the end of this section.

There are two sub-options:

  • Target Blocks per Partition - default (1000)

    This defines how much coarsening is used before partitioning takes place. Reducing this number will reduce the coarsening.

  • Aspect Ratio Filter - default (off)

    This option introduces a filter such that elements with aspect ratio below a certain criterion value will not be included in the coarsening process. This restricts the process to the areas where it is likely to be most effective, and can also lead to smoother partition boundaries.

The Mesh Pre-coarsening option is compatible with all of the basic partitioning methods, such as MeTiS and Optimised Recursive Coordinate Bisection, but has the most impact on methods that are not purely geometrical and involve a connectivity graph. Note that the option carries significant overhead at partition time, in terms of both memory and CPU usage.
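
The Python sketch below illustrates only the overall coarsen, partition, and project-back workflow described above; it is not the AMG-based method CFX uses. The greedy pairwise merging and the trivial splitting of the coarse blocks are stand-ins, and all names and the small mesh graph are invented for the example.

    def coarsen(adjacency):
        # Greedily merge each unassigned vertex with one unassigned neighbour
        # to form coarse blocks (a crude stand-in for AMG coarsening).
        block_of, blocks = {}, []
        for u, nbrs in adjacency.items():
            if u in block_of:
                continue
            block = [u]
            for v in nbrs:
                if v not in block_of:
                    block.append(v)
                    break
            for w in block:
                block_of[w] = len(blocks)
            blocks.append(block)
        return block_of, blocks

    def partition_fine_mesh(adjacency, n_parts):
        block_of, blocks = coarsen(adjacency)
        # Partition the (much smaller) coarse mesh; here the block list is
        # simply split into n_parts contiguous chunks.
        part_of_block = {b: min(b * n_parts // len(blocks), n_parts - 1)
                         for b in range(len(blocks))}
        # Reverse the coarsening: each fine vertex inherits its block's partition.
        return {u: part_of_block[block_of[u]] for u in adjacency}

    # A chain of six vertices coarsened into three blocks, then split three ways:
    adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    print(partition_fine_mesh(adjacency, 3))
    # {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}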

2.2.4.6. Partitioner Memory

If required, you can adjust the memory configuration under Partitioner Memory. For details, see Configuring Memory for the CFX-Solver in the CFX-Solver Manager User's Guide.