5.5.1. Adaptive Sampling (ADSAP)

Many Importance Sampling variants have been developed which aim, on the one hand, to reduce the estimator variance as efficiently as possible and, on the other hand, to remain robust and versatile over a large range of applications. One such Importance Sampling technique is the Adaptive Sampling method by Bucher, 1988 (see Bucher 2009).

The procedure involves several simulation runs. In the first run, the sampling density may have a larger scatter than originally defined for the input parameters. The samples that fall into the failure domain in the first run are evaluated statistically: the result serves to define a multi-dimensional normal-type simulation density hY(Y) for the subsequent importance sampling run:

E[Y] = E[X | g(X) ≤ 0]  (5–20)

Cov[Y] = E[(X − E[Y])(X − E[Y])ᵀ | g(X) ≤ 0]  (5–21)

A third run (that is, a second adaptation) should be performed to confirm the stability of the result.
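The runs described above can be sketched in a few lines. This is a minimal illustration, not the optiSLang implementation: it assumes standard normal input variables, a vectorized limit state function g, and an unweighted moment estimate from the failure samples (weighting them by f_X/h_Y would be a possible refinement).

```python
import numpy as np

def mvn_pdf(x, mean, cov):
    """Multivariate normal density, evaluated row-wise on x."""
    d = x - mean
    quad = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    norm = np.sqrt((2.0 * np.pi) ** mean.size * np.linalg.det(cov))
    return np.exp(-0.5 * quad) / norm

def adaptive_sampling(g, n_dim, n_samples=1000, n_runs=3, scale=3.0, seed=0):
    """Estimate P(g(X) <= 0) for standard normal X by adaptive sampling:
    widen the density in the first run, then re-center it on the mean and
    covariance of the observed failure samples."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(n_dim)            # sampling density h_Y: start at the origin,
    cov = scale ** 2 * np.eye(n_dim)  # with standard deviations scaled up
    pf = 0.0
    for _ in range(n_runs):
        chol = np.linalg.cholesky(cov)
        y = mean + rng.standard_normal((n_samples, n_dim)) @ chol.T
        fail = g(y) <= 0.0            # indicator of the failure domain
        # Importance sampling estimate: mean of I[failure] * f_X / h_Y
        w = fail * mvn_pdf(y, np.zeros(n_dim), np.eye(n_dim)) / mvn_pdf(y, mean, cov)
        pf = w.mean()
        if fail.sum() <= n_dim:       # too few failure samples to adapt
            break
        # Adaptation: first and second moments of the failure samples
        mean = y[fail].mean(axis=0)
        cov = np.cov(y[fail], rowvar=False)
    return pf
```

For a hypothetical linear limit state g(x) = 2 − x1, for example, the estimate can be checked against the exact value Φ(−2).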

This procedure approximates, in the second-moment sense, a theoretically 'ideal' sampling density which would reduce the estimator variance to zero (Rubinstein 1981). The Adaptive Sampling technique has a large range of applications, including non-differentiable and noisy limit state functions. Since the effort required to estimate the covariance matrix grows quadratically with the dimension n, the method may become ineffective in high dimensions. If the user has an estimate of the expected reliability index, it is recommended to use it as a scaling factor for the sampling standard deviations in the first run.
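For reference, the zero-variance density of Rubinstein (1981) mentioned above can be written in terms of the original density f_X and the failure probability p_f as

```latex
h_Y^{*}(y) = \frac{I[g(y) \le 0]\, f_X(y)}{p_f},
\qquad
p_f = \int I[g(x) \le 0]\, f_X(x)\, \mathrm{d}x
```

Sampling from h* exactly would require knowing p_f, which is the quantity being sought; hence the adaptive procedure only matches its first and second moments using the observed failure samples.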

Example: Parabola

The properties of the Adaptive Sampling method are illustrated by a simple example. The random variables are

and the limit state is a parabola:

The limit state function is smoothly non-linear and the design point is not unique. For the Adaptive Sampling procedure in optiSLang, three runs with 1000 samples each were performed, with the standard deviations scaled by a factor of three in the first run. Figure 5.3: Parabola Example: Anthill Plots of Subsequent Adaptive Sampling Runs shows how the sampling density is adapted in the subsequent runs.
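A parabolic limit state of this kind can be reproduced with a stand-in definition. The coefficients below are hypothetical (the manual's actual parabola is not reproduced here); the snippet only demonstrates the shape of such a problem, with two standard normal variables and a crude Monte Carlo reference for the failure probability.

```python
import numpy as np

# Hypothetical parabola limit state: standard normal X1, X2 and
# g(x) = b - x2 + a*x1**2, so the failure boundary is a parabola
# and the design point is not unique in the same sense as the example.
a, b = 0.5, 3.0

def g(x):
    """Limit state; failure is g(x) <= 0 (vectorized over rows)."""
    return b - x[:, 1] + a * x[:, 0] ** 2

# Crude Monte Carlo reference solution for P(g(X) <= 0)
rng = np.random.default_rng(1)
x = rng.standard_normal((200_000, 2))
pf_mc = np.mean(g(x) <= 0.0)
```

For such small failure probabilities, crude Monte Carlo needs very many samples for a stable estimate, which is precisely what the adaptive runs above avoid.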