1. Introduction
2. Design of Experiments
2.1. Deterministic DoE schemes
2.1.1. Full factorial design
2.1.2. Star points
2.1.3. Central composite design
2.1.4. Box-Behnken design
2.1.5. Koshal designs
2.1.6. D-optimal designs
2.1.7. Orthogonal arrays
2.2. Random and quasi-random DoE schemes
2.2.1. Monte Carlo Simulation
2.2.2. Latin Hypercube Sampling
2.2.2.1. Correlation optimized Latin Hypercube Sampling
2.2.2.2. Space-filling Latin Hypercube Sampling
2.2.3. Space-filling designs
2.2.4. Sobol Sequences
2.3. Deterministic vs. random DoE schemes
2.4. Glossary
3. Metamodeling techniques
3.1. Regression models for scalar outputs
3.1.1. Polynomial regression
3.1.2. Box-Cox transformation
3.1.3. Moving Least Squares approximation
3.1.4. Kriging
3.1.5. Radial basis functions
3.1.6. Support Vector Regression
3.1.7. Sparse Grid Approximation
3.1.8. Genetic Aggregation Response Surface
3.1.9. Neural networks
3.1.9.1. Feedforward neural networks
3.1.9.2. Radial basis function networks
3.1.9.3. Deep Feed Forward Network
3.1.9.3.1. Hyperparameter search
3.1.9.3.2. Model training
3.1.9.3.3. Network layout
3.1.10. Deep Infinite Mixture Gaussian Process (DIM-GP)
3.2. Model quality measures
3.2.1. Residual analysis
3.2.1.1. Residual sum of squares
3.2.1.2. Mean squared error (MSE)
3.2.1.3. Root mean squared error (RMSE)
3.2.1.4. Relative root mean squared error
3.2.1.5. Maximum residual
3.2.1.6. Maximum relative residual
3.2.1.7. Relative maximum absolute error
3.2.1.8. Relative average absolute error
3.2.1.9. PRESS residuals for polynomial regression
3.2.1.10. R2 for prediction of polynomial regression
3.2.2. Generalized cross-validation and Akaike's final prediction error
3.2.3. k-fold cross-validation
3.2.4. Coefficient of Determination (R2)
3.2.5. Coefficient of Prognosis
3.3. Model sensitivity measures
3.3.1. Variance-based sensitivity analysis
3.3.1.1. ANOVA
3.3.1.1.1. The confidence interval of the regression coefficients
3.3.1.1.2. The significance of a regression coefficient
3.3.1.2. Principal Component Analysis
3.3.1.3. Coefficient of Correlation
3.3.1.4. Coefficient of Importance
3.4. MOP principle and competition
3.4.1. MOP residual plot
3.4.2. Local MOP error measures
3.5. Adaptive metamodeling
3.5.1. Adaptive Metamodel of Optimal Prognosis (AMOP)
3.5.2. Genetic Aggregation Response Surface Refinement
3.6. Metamodels Glossary
4. Optimization methods
4.1. Optimization setup
4.2. Single-objective optimization
4.2.1. Accompanying example: optimization of a damped oscillator
4.2.2. Gradient-based methods
4.2.2.1. Non-Linear Programming by Quadratic Lagrangian (NLPQLP)
4.2.2.2. Mixed-Integer Sequential Quadratic Programming (MISQP)
4.2.2.3. Leapfrog optimizer for constrained minimization (LFOPC)
4.2.2.4. DAKOTA Coliny Solis-Wets
4.2.2.5. DAKOTA CONMIN
4.2.2.6. DAKOTA OPT++ Finite Differences Newton (FDN)
4.2.2.7. DAKOTA OPT++ Quasi Newton (QN)
4.2.2.8. DAKOTA OPT++ Polak-Ribiere Conjugate Gradient (PR)
4.2.3. Pattern search methods
4.2.3.1. Downhill simplex
4.2.3.2. Hooke-Jeeves Pattern Search
4.2.3.3. DAKOTA OPT++ Parallel Direct Search (PDS)
4.2.3.4. DAKOTA Asynchronous Parallel Pattern Search (APPS)
4.2.3.5. DAKOTA Coliny DIRECT
4.2.3.6. DAKOTA NCSU DIRECT
4.2.3.7. DAKOTA Coliny Pattern Search
4.2.4. Response surface based methods
4.2.4.1. Adaptive Response Surface Method (ARSM)
4.2.4.2. Optimization using the Metamodel of Optimal Prognosis
4.2.4.3. Adaptive Metamodel of Optimal Prognosis (AMOP)
4.2.4.4. Adaptive Single-Objective
4.2.4.5. Efficient Global Optimization
4.2.4.6. Probabilistic Inference for Bayesian Optimization
4.2.5. Nature-inspired methods
4.2.5.1. Evolutionary algorithms
4.2.5.2. Darwin algorithm
4.2.5.3. EVOLVE
4.2.5.4. DAKOTA Coliny Evolutionary Algorithm (EA)
4.2.5.5. Covariance Matrix Adaptation
4.2.5.6. Particle Swarm Optimization
4.2.5.7. Stochastic Design Improvement
4.2.5.8. Adaptive Simulated Annealing
4.2.5.8.1. Algorithm
4.2.5.8.2. Acceptance function
4.2.5.8.3. Sampling algorithm
4.2.5.8.4. Cooling schedule
4.2.5.8.5. Stopping criterion
4.2.5.8.6. Re-annealing
4.2.5.8.7. Some comments
4.2.5.9. Differential Evolution
4.2.6. Hybrid methods
4.2.6.1. One-Click Optimization (OCO)
4.3. Multi-objective optimization
4.3.1. Pareto optimization
4.3.1.1. Pareto optimality
4.3.1.2. Analysis of conflicting objectives
4.3.2. Weighted sum
4.3.3. ε-Constraint
4.3.4. Response surface based methods
4.3.4.1. Adaptive Metamodel of Optimal Prognosis (AMOP)
4.3.4.2. Adaptive Multiple-Objective
4.3.5. Nature-inspired methods
4.3.5.1. Evolutionary algorithms
4.3.5.2. Darwin algorithm
4.3.5.3. Particle Swarm Optimization
4.3.5.4. Non-dominated Sorting Genetic Algorithm II
4.3.5.5. Multi-Objective Genetic Algorithms
4.3.6. Hybrid methods
4.3.6.1. One-Click Optimization (OCO)
4.3.7. Performance metrics
4.3.7.1. Number of nondominated points
4.3.7.2. Spread
4.3.7.3. Standard deviation of crowding distance
4.3.7.4. Min/Max of objectives
4.3.7.5. Hypervolume
4.3.7.6. Number of common points
4.3.7.7. Number of new nondominated solutions
4.3.7.8. Number of old dominated solutions n(Q)
4.3.7.9. Consolidation ratio
4.3.7.10. Improvement ratio
4.4. Optimization Glossary