3.3. One-Click Optimization

The One-Click Optimization (OCO) method is a hybrid, dynamic optimization approach developed by Ansys. It efficiently combines direct (high-fidelity model) and MOP-assisted (Metamodel of Optimal Prognosis, a low-fidelity surrogate model) search strategies. OCO is a general-purpose optimizer that automatically and iteratively selects the most suitable of the optimization methods exposed by optiSLang.

OCO considers the allowable optimization methods available in optiSLang, such as NLPQL, simplex, and evolutionary algorithms (EA). It tackles the shortcomings of most selection schemes by using a dynamic selection scheme that starts by exploring the design space to study the statistical features of the response(s). An in-house selection criterion, called the success factor, is then used to compare the candidate optimization algorithms. The success factor considers the number of evaluations and the estimated improvement in terms of objectives and feasibility.
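The exact formula for the success factor is proprietary and not stated above. As a minimal sketch of the idea only, assuming a hypothetical score that weighs the estimated objective and feasibility improvement against the evaluation cost (all names and weights below are illustrative, not the Ansys criterion):

```python
from dataclasses import dataclass

@dataclass
class AlgorithmRun:
    """Bookkeeping for one candidate algorithm (illustrative structure)."""
    name: str
    evaluations: int                # design evaluations consumed so far
    objective_improvement: float    # estimated decrease of the objective
    feasibility_improvement: float  # estimated reduction of constraint violation

def success_factor(run: AlgorithmRun, w_obj: float = 1.0, w_feas: float = 1.0) -> float:
    """Improvement gained per evaluation -- a hypothetical stand-in for
    the in-house success factor, NOT the actual Ansys formula."""
    gain = w_obj * run.objective_improvement + w_feas * run.feasibility_improvement
    return gain / max(run.evaluations, 1)

runs = [
    AlgorithmRun("NLPQL", evaluations=40, objective_improvement=2.0, feasibility_improvement=0.0),
    AlgorithmRun("EA", evaluations=200, objective_improvement=6.0, feasibility_improvement=1.0),
]
# Rank candidates by improvement-per-evaluation.
best = max(runs, key=success_factor)
print(best.name)  # -> NLPQL (0.05 per evaluation beats EA's 0.035)
```

The key design point the sketch illustrates is that the criterion normalizes improvement by cost, so a gradient-based method that makes fast progress with few evaluations can outrank a population-based method that improves more in absolute terms.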

One of the most innovative aspects of OCO is its dynamic and adaptive nature. Depending on how the algorithms behave, OCO either uses several algorithms in succession or keeps using a single one. For example, if an algorithm is performing well, OCO keeps using it; conversely, if another algorithm looks more promising, OCO dynamically switches to it.

This dynamic switching can help avoid local minima and eliminates the arbitrary choices and overhead that come with manual algorithm selection.
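The switching behavior described above can be sketched as a selection loop. Everything here is an assumption for illustration: the promise estimates, the switching margin, and the algorithm names are hypothetical placeholders, not OCO internals (in OCO, such estimates would come from the MOP and the success factor of past runs):

```python
import random

random.seed(0)  # reproducible sketch

def estimate_success(algorithm: str) -> float:
    """Hypothetical per-algorithm promise estimate with noise; a stand-in
    for MOP-based predictions and observed success factors."""
    base = {"NLPQL": 0.8, "simplex": 0.4, "EA": 0.6}[algorithm]
    return base + random.uniform(-0.1, 0.1)

def dynamic_selection(algorithms, iterations=6):
    """Keep the current algorithm while it performs well; switch only when
    another candidate looks clearly more promising."""
    current = algorithms[0]
    schedule = []
    for _ in range(iterations):
        estimates = {a: estimate_success(a) for a in algorithms}
        best = max(estimates, key=estimates.get)
        # The margin avoids oscillating between near-equal candidates.
        if estimates[best] > estimates[current] + 0.05:
            current = best
        schedule.append(current)
        # ...the chosen algorithm would now run for a batch of evaluations
    return schedule

schedule = dynamic_selection(["simplex", "NLPQL", "EA"])
print(schedule)
```

The hysteresis margin in the `if` test is one plausible way to get the stability the text describes: a well-performing algorithm is kept unless an alternative is clearly better, rather than switching on every small fluctuation.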

By default, OCO exposes one main setting, which controls the maximum number of evaluations. The surrogate model types used to build the MOP and the list of allowable optimization methods are predefined. Advanced and expert settings are available to further customize OCO.
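The settings surface described above could be modeled as a small configuration object. The field names and default values below are purely illustrative assumptions, not the actual optiSLang API:

```python
from dataclasses import dataclass

@dataclass
class OCOSettings:
    """Hypothetical configuration mirroring the settings described above;
    names and defaults are illustrative, not the optiSLang API."""
    max_evaluations: int = 500  # the one main default setting (example value)
    # Predefined MOP surrogate model types (example values).
    surrogate_models: tuple = ("polynomial", "kriging")
    # Predefined list of allowable optimization methods.
    allowed_methods: tuple = ("NLPQL", "simplex", "EA")

# Typical usage: only the evaluation budget is touched; the rest stays predefined.
settings = OCOSettings(max_evaluations=1000)
print(settings.max_evaluations)  # -> 1000
```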