Choosing an Optimizer

Conducting an optimization analysis lets you determine an optimum solution for your problem. In Twin Builder optimization analyses, you have a choice of different optimizers, though in most cases, we recommend the Sequential Nonlinear Programming (SNLP) optimizer.

Additional Optimizers

The additional optimizers use a decision support process (DSP) based on satisfying criteria applied to the parameter attributes using a weighted aggregate method. In effect, the DSP is a postprocessing step on the Pareto fronts generated from the results of the various optimization methods.
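The following sketch illustrates the general weighted-aggregate idea on a Pareto front. It is only an assumption-laden illustration of the concept, not Twin Builder's DSP implementation; the objective names, weights, and `weighted_aggregate_rank` helper are hypothetical.

```python
def weighted_aggregate_rank(pareto_points, weights):
    """Rank Pareto-front points by a weighted sum of normalized objectives.

    pareto_points: list of dicts mapping objective name -> value (lower is better)
    weights:       dict mapping objective name -> relative importance
    """
    names = list(weights.keys())
    # Normalize each objective to [0, 1] across the front so the weights
    # act on comparable scales.
    lo = {n: min(p[n] for p in pareto_points) for n in names}
    hi = {n: max(p[n] for p in pareto_points) for n in names}

    def score(p):
        total = 0.0
        for n in names:
            span = hi[n] - lo[n]
            norm = 0.0 if span == 0 else (p[n] - lo[n]) / span
            total += weights[n] * norm
        return total

    # The lowest aggregate score is the preferred trade-off point.
    return sorted(pareto_points, key=score)

# Hypothetical two-objective front (e.g., loss vs. cost), weighted 70/30.
front = [{"loss": 0.10, "cost": 9.0},
         {"loss": 0.25, "cost": 5.0},
         {"loss": 0.60, "cost": 2.0}]
best = weighted_aggregate_rank(front, {"loss": 0.7, "cost": 0.3})[0]
```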

All optimizers assume that the nominal problem you are analyzing is close to the optimal solution; therefore, you must specify a domain that contains the region in which you expect to reach the optimum value.

All optimizers let you define a maximum limit on the number of iterations to be executed. This prevents the analysis from consuming all of your remaining computing resources and lets you examine the solutions obtained so far. Based on those results, you can further narrow the domain of the problem and regenerate the solutions.

All optimizers also let you enter a coefficient in the Add Constraints dialog box to define the relationship between the selected variables and the entered constraint value. For the SNLP and NMINLP optimizers, this relationship can be linear or nonlinear. For the quasi-Newton (Gradient) and Pattern Search (Search-based) optimizers, it must be linear.
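To make the distinction concrete, the sketch below expresses both kinds of constraint mathematically. The Add Constraints dialog box is a GUI, not a Python API; the SciPy objects and the sample cost function here are assumptions used only to illustrate a linear constraint (one coefficient per variable plus a bound) versus a nonlinear one.

```python
import numpy as np
from scipy.optimize import LinearConstraint, NonlinearConstraint, minimize

# Linear constraint: 2*x1 + 3*x2 <= 12 (coefficients 2 and 3, bound 12).
# This is the only form the Gradient and Search-based optimizers accept.
lin_con = LinearConstraint([[2.0, 3.0]], lb=-np.inf, ub=12.0)

# Nonlinear constraint: x1**2 + x2**2 <= 4, allowed for SNLP and NMINLP.
nonlin_con = NonlinearConstraint(lambda x: x[0]**2 + x[1]**2, lb=-np.inf, ub=4.0)

# Hypothetical cost function used only to exercise the constraints.
cost = lambda x: (x[0] - 3.0)**2 + (x[1] - 2.0)**2

result = minimize(cost, x0=[0.0, 0.0], method="trust-constr",
                  constraints=[lin_con, nonlin_con])
print(result.x)
```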

Cost functions can be quite nonlinear. As a result, the cost function can vary significantly across the function evaluations performed by the algorithm. It is therefore important to understand the relationship between function evaluations and iterations. Each iteration performs several function evaluations, with the number depending on how many parameters are being optimized. Depending on how nonlinear the cost function is, these evaluations can show drastic changes. Such drastic changes have no bearing on whether the optimization algorithm converged.
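As a minimal sketch of why one iteration can trigger many evaluations (not Twin Builder's implementation): a forward-difference gradient estimate alone needs one cost evaluation per parameter in addition to the base point, before any line search. The `forward_difference_gradient` helper and the four-parameter cost below are hypothetical.

```python
def forward_difference_gradient(cost, x, h=1e-6):
    """Estimate the gradient of `cost` at `x` using n + 1 evaluations."""
    base = cost(x)                      # evaluation 1
    grad = []
    for i in range(len(x)):             # evaluations 2 .. n + 1
        xp = list(x)
        xp[i] += h
        grad.append((cost(xp) - base) / h)
    return grad

# Hypothetical 4-parameter cost: each gradient step already costs 5
# evaluations, and a line search along the descent direction adds more.
cost = lambda x: sum((xi - 1.0)**2 for xi in x)
g = forward_difference_gradient(cost, [0.0, 0.0, 0.0, 0.0])
```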

In the case of non-gradient, search-based optimization algorithms such as pattern search, which are entirely based on function evaluations, the evaluated values can change drastically depending on how nonlinear the cost function is. This can be misleading, appearing as though the algorithm did not converge, since in theory the cost function is expected to decrease from one iteration to the next. Optimetrics, however, reports function evaluations, not the optimizer's progress per iteration.
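The compass-search sketch below is an assumption about the general class of pattern-search methods, not Twin Builder's exact algorithm. It shows why a raw log of evaluations can jump around: each iteration polls several trial points, some of which are far worse than the incumbent, even though the best cost found never increases.

```python
def compass_search(cost, x0, step=1.0, tol=1e-3, max_iter=100):
    x, best = list(x0), cost(x0)
    evaluations = [best]                       # what a raw evaluation log shows
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):                # poll +/- step along each axis
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                f = cost(trial)
                evaluations.append(f)          # may be far worse than `best`
                if f < best:
                    x, best, improved = trial, f, True
        if not improved:
            step *= 0.5                        # shrink the pattern and retry
            if step < tol:
                break
    return x, best, evaluations

# Hypothetical nonlinear cost: polled values fluctuate while best-so-far decreases.
cost = lambda x: (x[0]**2 - 2.0)**2 + 10.0 * abs(x[1])
x_opt, f_opt, evals = compass_search(cost, [3.0, 3.0])
```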

Note:

The MATLAB optimizer displays function evaluations when you select the Show all functions evaluation check box. If you clear the check box, it displays iterations.