Choosing an Optimizer
Conducting an optimization analysis lets you determine an optimal solution for your problem. Twin Builder offers a choice of several optimizers, though in most cases we recommend the Sequential Nonlinear Programming optimizer:
- Sequential Nonlinear Programming (Gradient) (SNLP)
- Merit-based Sequential Quadratic Programming (Gradient) (MBSQ)
- Sequential Mixed Integer NonLinear Programming (Gradient and Discrete) (SMINLP)
- Quasi-Newton (Gradient)
- Pattern Search (Search-based)
- Genetic Algorithm (Random search)
- MATLAB Optimizer
Additional Optimizers
These optimizers use a decision support process (DSP) based on satisfying criteria applied to the parameter attributes through a weighted-aggregate method. In effect, the DSP is a postprocessing step performed on the Pareto fronts generated by the various optimization methods.
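The weighted-aggregate idea behind the DSP can be illustrated with a short sketch. The Python example below is not Twin Builder code: it simply ranks a set of Pareto-front candidates by a weighted sum of normalized objective values, with hypothetical objectives and weights.

```python
# Minimal sketch of a weighted-aggregate decision step over a Pareto front.
# Generic illustration only, not Twin Builder's DSP; the candidate objective
# values and weights below are hypothetical (lower is better for each objective).

def weighted_aggregate(pareto_points, weights):
    """Rank Pareto-front candidates by a weighted sum of normalized objectives."""
    n_obj = len(weights)
    # Normalize each objective to [0, 1] across the front so the weights are comparable.
    mins = [min(p[i] for p in pareto_points) for i in range(n_obj)]
    maxs = [max(p[i] for p in pareto_points) for i in range(n_obj)]
    spans = [(hi - lo) if hi > lo else 1.0 for lo, hi in zip(mins, maxs)]
    scores = []
    for p in pareto_points:
        normalized = [(p[i] - mins[i]) / spans[i] for i in range(n_obj)]
        scores.append(sum(w * v for w, v in zip(weights, normalized)))
    # Return candidate indices ordered from best (lowest aggregate score) to worst.
    return sorted(range(len(pareto_points)), key=lambda i: scores[i])

# Three hypothetical candidates with two objectives (for example, loss and ripple),
# with the first objective weighted more heavily.
front = [(0.12, 4.0), (0.10, 5.5), (0.18, 3.1)]
ranking = weighted_aggregate(front, weights=[0.7, 0.3])
print("Best candidate index:", ranking[0])
```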
- Screening (Search-based) – This is a non-iterative direct sampling method that uses a quasi-random number generator based on the Hammersley algorithm (see the sampling sketch after this list). You can start with Screening to locate multiple tentative optima, then refine with NLPQL or MISQP to zoom in on an individual local maximum or minimum. Screening is typically used for preliminary design and can point you toward one of the other approaches for more refined optimization results.
- Multi-Objective Genetic Algorithm (MOGA) – This iterative, random-search algorithm optimizes problems with continuous input parameters and is better suited to finding global optima. You can start with MOGA to locate multiple tentative optima, then refine with NLPQL or MISQP to zoom in on an individual local maximum or minimum.
- Nonlinear Programming by Quadratic Lagrangian (NLPQL) (Gradient) – This is a gradient-based, single-objective optimizer based on quasi-Newton methods. It is ideally suited for local optimization.
- Mixed-Integer Sequential Quadratic Programming (MISQP) (Gradient and Discrete) – This is a gradient-based, single-objective optimizer that solves mixed-integer nonlinear programming problems by a modified sequential quadratic programming (SQP) method. It is ideally suited for local optimization.
- Adaptive Multiple Objective (Gradient) – This is an iterative, multi-objective optimizer that employs a Kriging response surface and the Multi-Objective Genetic Algorithm (MOGA). The Kriging response surface speeds up the optimization: not every design point needs a full evaluation, because part of the population is estimated by evaluating the Kriging response surface, which is constructed from all design points submitted by MOGA.
- Adaptive Single Objective (Gradient) – This is a gradient-based, single-objective optimizer that employs an OSF (Optimal Space-Filling) DOE, a Kriging response surface, and MISQP.
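As a rough illustration of the quasi-random sampling that Screening relies on, the sketch below generates a small Hammersley point set in the unit hypercube and scales it to hypothetical parameter bounds. It is a generic implementation of the Hammersley sequence, not Twin Builder's Screening method.

```python
# Minimal sketch of Hammersley quasi-random sampling for a screening-style sweep.
# Generic illustration only, not Twin Builder's Screening implementation; the
# parameter bounds and sample count are hypothetical.

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += (i % base) * f
        i //= base
        f /= base
    return result

def hammersley(n_points, n_dims):
    """Generate n_points quasi-random samples in the n_dims-dimensional unit cube."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    points = []
    for i in range(n_points):
        # First coordinate is i/n; remaining coordinates use radical inverses
        # in successive prime bases.
        point = [i / n_points]
        point += [radical_inverse(i, primes[d]) for d in range(n_dims - 1)]
        points.append(point)
    return points

def scale_to_bounds(points, bounds):
    """Map unit-cube samples onto parameter bounds given as [(low, high), ...]."""
    return [[lo + u * (hi - lo) for u, (lo, hi) in zip(p, bounds)] for p in points]

# Hypothetical two-parameter design domain sampled with 16 screening points.
samples = scale_to_bounds(hammersley(16, 2), bounds=[(0.5, 2.0), (10.0, 100.0)])
for sample in samples:
    print(sample)
```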
All optimizers assume that the nominal design you are analyzing is close to the optimal solution; therefore, you must specify a domain that contains the region in which you expect to find the optimum value.
All optimizers let you define a maximum limit on the number of iterations to be executed. This prevents the analysis from consuming your remaining computing resources and lets you examine the solutions obtained so far. Based on those solutions, you can further narrow the domain of the problem and regenerate the solutions.
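The same two controls, a bounded search domain around the nominal design and a cap on the number of iterations, can be sketched with a general-purpose optimizer. The example below uses SciPy's L-BFGS-B in place of a Twin Builder optimizer; the cost function, bounds, and iteration limit are hypothetical.

```python
# Minimal sketch of a bounded search domain around the nominal design plus an
# iteration cap, using SciPy rather than a Twin Builder optimizer. The cost
# function, bounds, and limits below are hypothetical.
import numpy as np
from scipy.optimize import minimize

def cost(x):
    # Hypothetical cost function with its minimum near x = (1.0, 2.5).
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 2.5) ** 2

x_nominal = np.array([0.8, 2.0])       # starting (nominal) design
bounds = [(0.5, 1.5), (1.0, 4.0)]      # domain expected to contain the optimum

result = minimize(cost, x_nominal, method="L-BFGS-B",
                  bounds=bounds, options={"maxiter": 20})
print(result.x, result.nit, result.nfev)   # solution, iterations used, evaluations used
```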
All optimizers also let you enter a coefficient in the Add Constraints dialog box to define the linear relationship between the selected variables and the entered constraint value. For the SNLP and SMINLP optimizers, the relationship can be linear or nonlinear. For the Quasi-Newton (Gradient) and Pattern Search (Search-based) optimizers, the relationship must be linear.
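To make the coefficient idea concrete, the sketch below expresses a linear constraint of the form c1*x1 + c2*x2 <= limit and passes it to SciPy's SLSQP, which stands in for a Twin Builder gradient optimizer. The coefficients, variables, and limit value are hypothetical.

```python
# Minimal sketch of a linear constraint expressed through coefficients, analogous
# to entering coefficients in the Add Constraints dialog box. Generic illustration
# only; the coefficients, variables, and limit value are hypothetical.
import numpy as np
from scipy.optimize import minimize

def cost(x):
    # Hypothetical cost function with an unconstrained minimum at (3, 2).
    return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2

# Linear constraint 2*x1 + 1*x2 <= 5, written as g(x) >= 0 as SLSQP expects.
coefficients = np.array([2.0, 1.0])
limit = 5.0
constraints = [{"type": "ineq", "fun": lambda x: limit - coefficients @ x}]

result = minimize(cost, x0=[0.0, 0.0], method="SLSQP", constraints=constraints)
print(result.x)   # the optimum is pushed onto the constraint boundary
```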
Cost functions can be quite nonlinear. As a result, the cost function can vary significantly during the algorithm's function evaluations. It is important to understand the relationship between function evaluations and iterations. Each iteration performs several function evaluations, depending on the number of parameters to be optimized, and these evaluations can show drastic changes in value depending on how nonlinear the cost function is. Such changes have no bearing on whether the optimization algorithm has converged.
With non-gradient, search-based algorithms such as Pattern Search, which rely entirely on function evaluations, the evaluated cost values can change drastically when the cost function is highly nonlinear. This can be misleading and suggest that the algorithm did not converge, since in theory the cost function is expected to decrease from one iteration to the next. Optimetrics, however, reports function evaluations, not the optimizer's performance per iteration.
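The distinction between function evaluations and iterations can be seen in a minimal pattern-search sketch. The code below is a generic coordinate pattern search, not Twin Builder's implementation; it logs the spread of raw evaluations in each iteration, which can be large for a bumpy cost function even though the per-iteration best value never increases. The cost function is hypothetical.

```python
# Minimal sketch of a coordinate pattern search that logs every function evaluation.
# Generic illustration only, not Twin Builder's Pattern Search; the cost function
# is hypothetical and deliberately "bumpy".
import math

def cost(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2 + 0.5 * math.sin(5.0 * x[0])

def pattern_search(x0, step=1.0, tol=1e-3, max_iter=50):
    x = list(x0)
    best = cost(x)
    for it in range(max_iter):
        evals = []
        improved = False
        # Poll each coordinate in both directions; every poll is one function evaluation.
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                value = cost(trial)
                evals.append(value)
                if value < best:
                    x, best, improved = trial, value, True
        # Raw evaluations may swing widely, but the per-iteration best never increases.
        print(f"iteration {it}: best={best:.4f}, "
              f"evaluations from {min(evals):.4f} to {max(evals):.4f}")
        if not improved:
            step *= 0.5          # shrink the pattern when no poll point improves
            if step < tol:
                break
    return x, best

pattern_search([4.0, 4.0])
```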
The MATLAB optimizer displays function evaluations when you select the Show all functions evaluation check box. If you clear the check box, it displays iterations.