48.4. The Mesh Morpher/Optimizer

The mesh morpher/optimizer offers a variety of non-gradient-based optimization algorithms for optimizing the geometric shape of a system [72]. The optimization can be performed for a wide variety of physics models and quantities of interest.

For additional information, see the following sections.

48.4.1. Limitations

Note the following limitations of the Ansys Fluent mesh morpher/optimizer:

  • The mesh morpher/optimizer should not be used with dynamic or sliding mesh problems.

  • Any geometrically evaluated parameters required for the flow iterations (for example, view factors for the surface-to-surface (S2S) radiation model) must be recomputed at the very least when the optimization is complete. If the objective function is affected by one of these geometrically evaluated parameters, then that parameter must be recomputed after each design change; for example, the view factors must be recomputed after each design change when the objective function is a function of temperature.

  • Arbitrarily shaped deformation regions are not supported.

48.4.2. The Optimization Process

All optimization problems require that you identify parameters that can be modified in order to reach the optimized solution. In the case of the mesh morpher/optimizer, it is the geometry that must be parameterized. Geometric parameterization for general shapes used in CFD can be very complicated, due to the large variety of shapes available in engineering applications. In order to minimize such complications in your Ansys Fluent simulation, the problem of shape parameterization is reduced to a problem of the parameterization of changes in the geometry.

The next essential requirement for mesh morphing is a tool that can smoothly alter the shape, irrespective of the underlying mesh topology. In Ansys Fluent, designated deformation regions are manipulated via displacements applied to a set of control points. The mesh region that is to be deformed is defined by a “box” (that is, a rectangle for 2D cases and a rectangular hexahedron for 3D), and the control points must be located within the box. The displacements of the control points are the result of user-defined motions (each of which involves a parameter value and other directional settings) and these displacements are then applied to the mesh as a smooth deformation by either interpolating the displacement based on radial basis functions or using the tensor product of Bernstein polynomials.
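As an illustration of displacement interpolation with radial basis functions, the sketch below morphs 1D node positions from control-point displacements. This is a minimal example and not Fluent's implementation; the Gaussian kernel, its width parameter, and the tiny linear solver are assumptions made for illustration only.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= m * A[i][c]
            b[r] -= m * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

def morph(nodes, centers, displacements, eps=1.0):
    """Move 1D mesh nodes by an RBF interpolant of the control-point
    displacements (Gaussian kernel; the width eps is an assumed setting)."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    # Interpolation system: sum_j w_j * phi(|c_i - c_j|) = d_i
    A = [[phi(abs(ci - cj)) for cj in centers] for ci in centers]
    w = solve([row[:] for row in A], list(displacements))
    return [x + sum(wj * phi(abs(x - cj)) for wj, cj in zip(w, centers))
            for x in nodes]
```

A node that coincides with a control point recovers that point's displacement exactly, while nodes in between move smoothly; this smoothness is what keeps the deformed mesh valid.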

The next requirement is to have an optimizer that is robust enough to handle a wide range of problems. By coupling such optimizers with your CFD analysis, you can greatly improve your design with minimal intervention. You can use built-in optimizers to vary the parameter values along the prescribed directions and within defined bounds, in order to satisfy a condition specified by an objective function. The mesh morpher/optimizer provides you with access to six optimizers that are not based on gradients. Alternatively, you can manually specify the deformation (that is, define both the parameter values and the directions) and analyze the results; you also have the option of using Design Exploration in Ansys Workbench to easily explore the impact of a variety of parameter values.

48.4.3. Optimizers

The built-in optimizers used as part of the mesh morpher/optimizer capability in Ansys Fluent use direct search methods for optimization. Direct search methods are zeroth-order methods: they rely only on objective function values and do not use derivatives.

The following is a list of advantages of direct search methods:

  • Direct search methods do not require derivatives for optimization.

  • Direct search methods are robust for problems with discontinuities and in situations where the derivative computation is not possible or unreliable.

The following is a list of disadvantages of direct search methods:

  • Convergence is difficult to prove and is not guaranteed in general.

  • The rate of convergence can be very slow.

48.4.3.1. The Compass Optimizer

In the Compass optimizer [80], the parameters are adjusted one by one until the objective function is minimized. This optimizer starts from a given set of parameter values and then evaluates the objective function in all the basic directions, that is, for positive and negative increments to each of the initial parameter values. If there is a reduction in the function value, then that point becomes an improved point. If there is no improvement in the function value, then the step length is reduced by half and the search is repeated in all directions. The algorithm terminates when the step size falls below a certain tolerance.
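The search pattern just described can be sketched as follows. This is a generic compass search on an analytic function, not Fluent's implementation; the test function, starting point, and tolerances below are assumptions for illustration.

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Poll the +/- coordinate directions; halve the step when no
    direction improves; stop when the step falls below tol."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:          # improved point: move there
                    x, fx = trial, ft
                    improved = True
        if not improved:
            step *= 0.5              # no improvement: refine the step
    return x, fx
```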

The Compass optimizer initially makes rapid progress towards the solution. While this method might quickly approach the minimum value of an objective function, it may be slow to detect this fact. It may also converge very slowly if the level sets of the objective function are extremely elongated.

48.4.3.2. The NEWUOA Optimizer

The NEW Unconstrained Optimization Algorithm (NEWUOA) optimizer [124] attempts to find the least value of a function F(x), where x is a parameter vector of n dimensions. At the start of every iterative step, a quadratic model approximation (Q) is constructed and the minimization is performed on a trust region (that is, a region around the parameter vector that is limited by an initial parameter variation). The perturbation to the parameter value needed to obtain the lowest value of Q is evaluated during the iteration, and correspondingly, the new least value of F is obtained. The iterative process continues until the trust region is reduced to the optimizer convergence criterion, and the optimizer exits with the least value of the objective function. This optimization algorithm can be applied to any setup case of the mesh morpher/optimizer.
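The trust-region idea can be illustrated with a deliberately simplified one-parameter sketch: fit a quadratic model through three samples, minimize it inside the trust region, and shrink the region when the model step fails. This is only an illustration of the concept; NEWUOA itself builds the quadratic model from interpolation points in n dimensions and updates it far more economically.

```python
def trust_region_1d(f, x, radius=1.0, tol=1e-6, max_iter=200):
    """Toy 1D trust-region loop: fit a quadratic model through three
    points, minimize it inside the trust region, and adapt the radius."""
    fx = f(x)
    for _ in range(max_iter):
        if radius < tol:
            break
        fm, fp = f(x - radius), f(x + radius)
        # quadratic model q(s) = fx + b*s + a*s^2 interpolating the 3 samples
        a = (fp - 2.0 * fx + fm) / (2.0 * radius ** 2)
        b = (fp - fm) / (2.0 * radius)
        if a > 0.0:
            s = max(-radius, min(radius, -b / (2.0 * a)))  # interior minimizer, clipped
        else:
            s = -radius if b > 0.0 else radius             # model unbounded: go to the edge
        f_new = f(x + s)
        if f_new < fx:           # model step improved f: accept it
            x, fx = x + s, f_new
        else:                    # poor model: shrink the trust region
            radius *= 0.5
    return x, fx
```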

The NEWUOA optimizer provides the least value of the objective function in a manner that is highly accurate and robust. The main advantage of this optimizer is that it is very fast; hence, this optimizer is recommended for problems that have a large number of parameters.

48.4.3.3. The Simplex Optimizer

The Simplex optimizer is also referred to as the Downhill Simplex optimizer [98], [72] and the Nelder-Mead method [105]. It is based on the idea of geometric simplexes; for example, a 2D simplex is a triangle, and a 3D simplex is a tetrahedron.

For optimization purposes, Ansys Fluent requires that simplexes are regular polyhedra (that is, not degenerate polyhedra with collapsed sides). Each vertex of the geometric simplex represents one function evaluation (which in this case is one CFD run), and the number of vertices corresponds to the number of parameters (an n-parameter problem uses a simplex with n + 1 vertices). For the free-form deformation method that is used by Ansys Fluent to parameterize changes in shapes, the number of active control points will determine the number of vertices of the geometric simplex.

Minimization of the objective function is performed based on a set of rules about the “quality” of each vertex, where the quality is the value of the function evaluated at that vertex. A set of geometric operations (reflection, expansion, contraction, and shrinking) is performed in order to find the region in which to look for the minimum of the function. Because the optimization is formulated as a minimization problem, the simplex algorithm identifies the “worst” vertex, that is, the vertex with the largest function value. By reflecting that vertex through the center of gravity of the remaining vertices, a new point is obtained and the new value of the function is computed with a CFD run. Similarly, the operations of expansion, contraction, and shrinking are used to obtain the minimum of the function.
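The reflection, expansion, contraction, and shrink operations can be sketched as a minimal Nelder-Mead loop. The standard textbook coefficients (reflection 1, expansion 2, contraction 0.5, shrink 0.5), test function, and tolerances are assumptions; this is not Fluent's implementation.

```python
def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500):
    """Minimal Nelder-Mead simplex search."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                       # n + 1 vertices for n parameters
        p = list(x0)
        p[i] += step
        simplex.append(p)
    scores = [f(p) for p in simplex]
    for _ in range(max_iter):
        order = sorted(range(n + 1), key=lambda i: scores[i])
        simplex = [simplex[i] for i in order]    # best ... worst
        scores = [scores[i] for i in order]
        if scores[-1] - scores[0] < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        worst = simplex[-1]
        refl = [c + (c - w) for c, w in zip(centroid, worst)]
        fr = f(refl)
        if scores[0] <= fr < scores[-2]:         # reflect the worst vertex
            simplex[-1], scores[-1] = refl, fr
        elif fr < scores[0]:                     # very good: try to expand
            exp = [c + 2.0 * (c - w) for c, w in zip(centroid, worst)]
            fe = f(exp)
            if fe < fr:
                simplex[-1], scores[-1] = exp, fe
            else:
                simplex[-1], scores[-1] = refl, fr
        else:                                    # poor: contract toward centroid
            con = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            fc = f(con)
            if fc < scores[-1]:
                simplex[-1], scores[-1] = con, fc
            else:                                # still poor: shrink to best
                best = simplex[0]
                for i in range(1, n + 1):
                    simplex[i] = [b + 0.5 * (p - b)
                                  for b, p in zip(best, simplex[i])]
                    scores[i] = f(simplex[i])
    return simplex[0], scores[0]
```

In the mesh morpher/optimizer, each call to f corresponds to one full CFD run, which is why the method's appetite for function evaluations matters.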

The Simplex optimizer is known to work well, but it requires a large number of function evaluations. It also requires a smooth objective function for convergence.

48.4.3.4. The Torczon Optimizer

The Torczon optimizer [165] is a slightly modified version of the simplex optimizer described previously. Given an initial vertex, this optimizer tries to find a better vertex that has a function value that is strictly less than the function value at the previous best vertex. There are three possible trial steps: the rotation step, the expansion step, and the contraction step. The algorithm always computes the rotation step and then tests to see if a new best vertex has been identified. If it has, then the expansion step is computed. Otherwise, the algorithm computes and automatically accepts the contraction step.
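A minimal multidirectional-search step in the spirit of Torczon's method might look like the following. The expansion and contraction coefficients, test function, and tolerances are assumptions for illustration, not Fluent's implementation.

```python
def torczon(f, simplex, tol=1e-9, max_iter=500):
    """Multidirectional search: rotate the simplex through the best
    vertex; expand if rotation helps, otherwise contract toward it."""
    simplex = [list(v) for v in simplex]
    vals = [f(v) for v in simplex]
    for _ in range(max_iter):
        b = min(range(len(vals)), key=lambda i: vals[i])
        simplex[0], simplex[b] = simplex[b], simplex[0]   # best vertex first
        vals[0], vals[b] = vals[b], vals[0]
        if max(vals) - vals[0] < tol:
            break
        best = simplex[0]
        rot = [[2.0 * bc - vc for bc, vc in zip(best, v)] for v in simplex[1:]]
        rvals = [f(v) for v in rot]
        if min(rvals) < vals[0]:                # rotation found a strictly better vertex
            exp = [[3.0 * bc - 2.0 * vc for bc, vc in zip(best, v)]
                   for v in simplex[1:]]
            evals = [f(v) for v in exp]
            if min(evals) < min(rvals):         # expansion is even better
                simplex[1:], vals[1:] = exp, evals
            else:
                simplex[1:], vals[1:] = rot, rvals
        else:                                   # no progress: contract toward best
            con = [[0.5 * (bc + vc) for bc, vc in zip(best, v)]
                   for v in simplex[1:]]
            simplex[1:], vals[1:] = con, [f(v) for v in con]
    return simplex[0], vals[0]
```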

48.4.3.5. The Powell Optimizer

For the Powell optimizer, the method used depends on the number of dimensions of the problem. For optimization problems of a single dimension (that is, problems with a single parameter), the golden section search algorithm [125] is used to find the optimal value of the objective function. In this algorithm, the optimal value is found by reducing the size of the bracketing triplet until the size of the bracket (that is, the distance between the outer points of the triplet) reaches a certain tolerance level. At each stage, the middle point of the new triplet is the best value obtained so far.
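For the single-parameter case, the bracket-shrinking loop might be sketched as follows (generic golden section search; the test interval and tolerance are assumptions):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Shrink the bracket [a, b] around the minimum of a unimodal f
    using the golden ratio, reusing one interior evaluation per step."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:            # minimum lies in [a, d]: drop the right part
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                  # minimum lies in [c, b]: drop the left part
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2.0
```

The golden ratio spacing is what lets one of the two interior points be reused after each shrink, so only one new function evaluation is needed per iteration.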

For multidimensional problems (that is, problems with multiple parameters), a golden section search is performed along each direction in turn, and the results are used to construct conjugate directions. A conjugate direction is a direction that, when searched, will not alter the minimum attained by the previous movement along another direction (that is, a direction in which the gradient is perpendicular to the first direction). After the N linearly independent, mutually conjugate directions are found, one pass of line minimizations finds the exact minimum of a quadratic function. For functions that are not quadratic, repeated cycles of N line minimizations converge to the minimum.

48.4.3.6. The Rosenbrock Optimizer

In the Rosenbrock optimizer [132], after the initial direction is found, multiple steps are taken in that direction until the least value is attained. The process starts with an arbitrary step length d. If this initial step succeeds (that is, the new value of the function is less than or equal to the old value), the length is multiplied by α, where α is greater than 1. If the step fails, the length is multiplied by β, where β is between -1 and 0. The direction or the parameter that must be modified is determined by advancing all the parameters by the step length and then selecting the best among those that yield a function value that is less than the previous value. After that point is accepted as the best point, the process is repeated. These steps repeat until d becomes so small that any further change in its value does not significantly reduce the value of the function.
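The step-length adaptation can be sketched for a single parameter as follows. The classical values α = 3 and β = -0.5 are assumptions, and the full method also handles several parameters and rotates the search directions, which is omitted here.

```python
def rosenbrock_1d(f, x, d=0.5, alpha=3.0, beta=-0.5, tol=1e-10, max_iter=2000):
    """One-parameter Rosenbrock step-length adaptation: lengthen the
    step on success, shorten and reverse it on failure."""
    fx = f(x)
    for _ in range(max_iter):
        if abs(d) < tol:
            break
        trial = x + d
        ft = f(trial)
        if ft <= fx:        # success: accept the point, lengthen the step
            x, fx = trial, ft
            d *= alpha
        else:               # failure: shorten the step and reverse direction
            d *= beta
    return x, fx
```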