Adaptive Sampling

This node performs Adaptive Sampling to compute the probability of failure.

The method is a variant of Importance Sampling: the pseudo-random design point vectors are generated from a joint probability distribution that differs from the one specified by the parameters. As in Monte Carlo simulation, the probability of failure is computed as the ratio of the number of samples associated with failure events to the total number of samples. The difference between the distribution used for sampling in the simulation and the actual distribution of the design space is compensated for by assigning a weight factor to each failure event. The idea is to use a sampling distribution that optimally approximates the conditional distribution within the failure domain, thereby reducing the variance of the estimator.
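A minimal sketch of this weighted estimator (illustrative only, not optiSLang's implementation): limit_state is assumed to return values g(x) <= 0 for failure, f_pdf is the actual joint density of the parameters, and h_pdf/h_sample describe the importance distribution used for sampling.

    import numpy as np

    def importance_sampling_pf(limit_state, f_pdf, h_pdf, h_sample, n, seed=None):
        """Estimate P_f = P(g(X) <= 0) by importance sampling."""
        rng = np.random.default_rng(seed)
        x = h_sample(n, rng)                      # samples drawn from the importance density h
        failed = limit_state(x) <= 0.0            # indicator of failure events
        w = f_pdf(x) / h_pdf(x)                   # weight factors compensating for sampling from h
        return np.mean(np.where(failed, w, 0.0))  # weighted ratio of failure events

If the sampling density coincided exactly with the conditional density of the actual distribution within the failure domain, every weight of a failure sample would equal the probability of failure itself and the variance of the estimator would vanish; the adaptation described below tries to approach this optimum.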

Adaptive Sampling starts with a pilot simulation that uses a larger scatter than defined in the parameter table. The samples in the failure domain are statistically evaluated, and the results define the sampling density for the subsequent simulation: for example, the mean value is shifted towards or into the failure domain, and the sampling covariance matrix is adapted accordingly. The adaptation is repeated several times to ensure stability of the results. With the prescribed accuracy setting, the algorithm automatically determines the necessary sample size and number of steps within a given computational budget in order to reach a given accuracy, measured by the standard deviation of the estimated probability of failure.
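The adaptation loop can be sketched as follows, working in standard normal space. All names and default values are illustrative assumptions rather than the optiSLang API; limit_state_u is assumed to return negative values for standard normal samples that fall in the failure domain, and dim is assumed to be at least 2.

    import numpy as np
    from scipy.stats import multivariate_normal

    def adaptive_sampling_pf(limit_state_u, dim, n_steps=4, n_per_step=200,
                             initial_scaling=2.0, adapt_covariance=True, seed=None):
        """Sketch of adaptive importance sampling in standard normal space U."""
        rng = np.random.default_rng(seed)
        f = multivariate_normal(np.zeros(dim), np.eye(dim))   # actual density in U-space
        mean = np.zeros(dim)
        cov = (initial_scaling ** 2) * np.eye(dim)            # pilot run with enlarged scatter

        pf_hat = 0.0
        for step in range(n_steps):
            u = rng.multivariate_normal(mean, cov, size=n_per_step)
            failed = limit_state_u(u) <= 0.0                  # failure events
            w = f.pdf(u) / multivariate_normal(mean, cov).pdf(u)  # weight factors
            pf_hat = np.mean(np.where(failed, w, 0.0))        # weighted ratio of failure events

            if failed.sum() > dim:                            # enough failures to adapt the density
                wf = w[failed]
                mean = np.average(u[failed], axis=0, weights=wf)        # shift toward failure domain
                if adapt_covariance:
                    cov = np.cov(u[failed], rowvar=False, aweights=wf)  # adapt the scatter as well
                    cov += 1e-8 * np.eye(dim)                           # keep it positive definite
        return pf_hat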

Adaptive Sampling is specifically designed for small probabilities of failure. It does not require smoothness, convexity, or differentiability of the limit state function. It can efficiently treat problems where the distribution of failure events can be represented by a uni-modal function, for example, cases where only a single region contributes significantly to the probability of failure. With two or more dominant failure regions, the statistical error remains large, although the results may still be correct.

Further information about methods of reliability analysis used in optiSLang can be found here.

Initialization Options

To access the options listed below, double-click the Adaptive Sampling system on the Scenery pane and switch to the Adaptive Sampling tab.

Prescribed sample size
If selected, a fixed number of samples is computed.

Prescribed accuracy
If selected, the number of samples is automatically increased until a given accuracy is reached.

Prescribed sample size options

Number of steps
Sets the number of sampling runs (including the first run), that is, the number of adaptations plus one.

Number of samples per step
Sets the number of random samples per adaptation step.

Prescribed accuracy options

Desired accuracy (C.O.V.)
Terminates the algorithm as soon as the coefficient of variation (C.O.V.) of the estimator for the probability of failure falls below this threshold.

Maximum number of steps
Sets the maximum number of sampling runs (including the first run), that is, the number of adaptations plus one.

Maximum number of samples per step
Sets the maximum allowed number of samples for each adaptation step. This upper bound limits the computing time if the desired accuracy cannot be reached.

Number of samples per increment
Sets the incremental increase of the sample size. After each increment, the termination criterion is checked (see the sketch following these options).

Common options

Initial scaling factor
Sets the scaling factor of the samples during the first step. It scales the standard deviation of the initial sampling density in standard normal space. Adjust this value according to the expected reliability index (the smaller the probability of failure, the larger the scaling factor).

Adapt sampling density
If selected, the covariance of the sampling density is adapted after each step. This requires a sufficient number of samples in the failure region. Otherwise, the covariance of the sampling density is kept constant during the adaptation process and only the center point is adapted.
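The interplay of the prescribed-accuracy options can be summarized by the following control-flow sketch. All names (run_increment, desired_cov, and so on) are illustrative placeholders rather than optiSLang functions; run_increment is assumed to evaluate one additional batch of samples within the current adaptation step and return the updated estimate of the probability of failure and its C.O.V.

    def run_prescribed_accuracy(run_increment, desired_cov, max_steps,
                                max_samples_per_step, samples_per_increment):
        """Illustrative control flow for the 'Prescribed accuracy' mode."""
        pf_hat, cov_hat = None, float("inf")
        for step in range(max_steps):
            samples_in_step = 0
            while samples_in_step < max_samples_per_step:
                # evaluate one more batch of samples in the current adaptation step
                pf_hat, cov_hat = run_increment(samples_per_increment)
                samples_in_step += samples_per_increment
                if cov_hat <= desired_cov:        # termination criterion checked per increment
                    return pf_hat, cov_hat
            # accuracy not yet reached: adapt the sampling density and start the next step
        return pf_hat, cov_hat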

Additional Options

To access the options listed below, in any tab, click Show additional options.

Limit maximum in parallel

Controls the resource usage of nodes in the system.

When the check box is cleared (default), a value is chosen to ensure the best possible utilization of the child nodes. When the check box is selected, set the value manually to specify how many designs are sent to the child nodes, limiting the maximum degree of parallelism for all children. Ansys recommends keeping the check box cleared.

Auto-save behavior

Select one of the following options:

  • No auto-save

  • Actor execution finished

  • Every n-th finished design (then select the number of designs in the text field)

  • Iteration finished (for optimization, reliability)

Depending on the selected option, the project, including the database, is auto-saved after this node or system is calculated (whether the calculation succeeds or fails).

By default, all parametric and algorithm systems have Every n-th finished design set to 1 design(s); all other nodes have No auto-save selected.