20.1. Reducing the Dimensionality of the System through Polynomial Chaos Expansion

As mentioned above, one of the first obstacles encountered in performing uncertainty analysis on an arbitrary model is the dimensionality of the problem, which arises from the number of uncertain parameters in the system. For example, consider the evaluation of the expected value of a single model response \eta in a multivariate system. The expected value is given as

E[\eta(\mathbf{x})] = \int \eta \, f_{\eta}(\eta) \, d\eta    (20–2)

where \mathbf{x} = \{x_1, x_2, \ldots, x_N\} is the set of uncertain parameters and f_{\eta} is the probability density function of the response \eta. Although the calculation is in the form of a one-dimensional integration, the lack of knowledge of the density function f_{\eta} makes it impossible to solve. However, the one-dimensional integral can be expressed in a different form that is statistically equivalent:

E[\eta(\mathbf{x})] = \int \cdots \int \eta(x_1, \ldots, x_N) \, f(x_1, \ldots, x_N) \, dx_1 \cdots dx_N    (20–3)

where f(x_1, \ldots, x_N) is the joint density function of the uncertain parameters x_1, \ldots, x_N. Unfortunately, the construction of the N-dimensional joint density function requires a prohibitively large number of samples. Furthermore, the numerical evaluation of the integral for a prescribed level of accuracy demands that the number of samples increase exponentially with the dimension N. To avoid this inherent dimensionality problem, we turn to principal-component techniques to reduce the dimension of the data through the implementation of polynomial chaos expansions.
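To make the scaling concrete, the short sketch below tabulates how a tensor-product sampling grid for the integral in Equation 20–3 grows with dimension; the choice of n = 8 points per parameter is an assumed accuracy requirement for illustration, not a value from the text.

```python
# Hypothetical illustration of the dimensionality problem in Equation 20-3:
# with n sample points per uncertain parameter, a full tensor-product grid
# requires n**N model evaluations.
n = 8  # assumed points per dimension for a prescribed accuracy
for N in (1, 2, 5, 10, 20):
    print(f"N = {N:2d} uncertain parameters -> {n**N:,} model evaluations")
```

Even at a modest eight points per dimension, twenty uncertain parameters would demand over 10^18 model evaluations, which is why the reformulation that follows is needed.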

Using polynomial chaos expansion, we can approximate the random parameters x_i as summations of the form

x_i \approx \sum_{k=0}^{P} a_{ik} \, \Phi_k(\boldsymbol{\xi})    (20–4)

where x_i denotes the distribution of the i-th uncertain parameter, and \boldsymbol{\xi} = \{\xi_1, \ldots, \xi_n\} is a vector of independent standard distributions that mimic the general behavior of \mathbf{x}. For example, a "standard" distribution might be a Gaussian distribution if \mathbf{x} behaves like a normal random variable. \Phi_k is the derived multidimensional orthogonal polynomial functional (for example, a Hermite polynomial), and a_{ik} is the expansion coefficient to be determined by the characteristic values of the probability density function of x_i. The transformation given in Equation 20–4 uses a set of orthogonal polynomials to span the entire response space and allows projection of the n-dimensional data onto one-dimensional subspaces.
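As a minimal sketch of Equation 20–4 for a single parameter: the code below expands a lognormal parameter x = exp(ξ), with ξ standard normal, in probabilists' Hermite polynomials of ξ. The lognormal choice and all variable names are illustrative assumptions, not values from the text; the coefficients are computed by Galerkin projection with Gauss–Hermite quadrature.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

# Expand a lognormal parameter x = exp(xi), xi ~ N(0, 1), as in Equation 20-4:
#   x ~ sum_k a_k He_k(xi)
# The coefficients are projections a_k = E[x He_k(xi)] / E[He_k(xi)^2],
# with E[He_k^2] = k! for probabilists' Hermite polynomials.
nodes, weights = He.hermegauss(40)         # Gauss rule for weight exp(-xi^2/2)
weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the standard normal pdf

P = 6                                      # truncation order
a = np.empty(P + 1)
for k in range(P + 1):
    Hk = He.hermeval(nodes, [0] * k + [1])  # He_k evaluated at the nodes
    a[k] = np.sum(weights * np.exp(nodes) * Hk) / factorial(k)

print(a[0])   # mean of x; analytically exp(1/2) for this lognormal
```

For this particular choice the coefficients have the closed form a_k = exp(1/2)/k!, which makes the truncation error of the expansion easy to monitor.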

Next, before we can perform the change of variable for the integration parameter \mathbf{x}, we have to transform the multivariate probability density function f(x_1, \ldots, x_N). A general form of orthogonal expansion for an arbitrary probability density function is given as

f(\mathbf{x}) \approx f_{\mathrm{key}}(\mathbf{x}) \sum_{k} c_k \, P_k(\mathbf{x})    (20–5)

where f_{\mathrm{key}} is the key probability density function, P_k is an orthogonal polynomial derived from f_{\mathrm{key}}, and c_k are weighting coefficients. By key probability density function, we mean that the function should be either a "standard" probability density distribution (for example, Gaussian) or a combination of such distribution functions, where the properties are well known and the function has a similar shape to that of the original. With the projection of \mathbf{x} into the space of the independent random variables \boldsymbol{\xi} (given by Equation 20–4), the key probability density function can be transformed as

f_{\mathrm{key}}(\mathbf{x}) = f_{\mathrm{key}}\left(\mathbf{x}(\boldsymbol{\xi})\right) = \hat{f}_{\mathrm{key}}(\boldsymbol{\xi})    (20–6)

Similarly, the derived orthogonal polynomial P_k can be recast in terms of the independent random variables as

P_k(\mathbf{x}) = \sum_{j} b_{kj} \, Q_j(\boldsymbol{\xi})    (20–7)

where Q_j is the transformed set of orthogonal polynomials and b_{kj} are the new weighting coefficients. Substituting Equation 20–6 and Equation 20–7 into Equation 20–5, the probability density function can be represented in terms of \boldsymbol{\xi} as follows:

f(\mathbf{x}) \approx \hat{f}_{\mathrm{key}}(\boldsymbol{\xi}) \sum_{j} \hat{c}_j \, Q_j(\boldsymbol{\xi})    (20–8)
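For a Gaussian key density, an expansion of the form in Equation 20–5 specializes to the classical Gram–Charlier series. The sketch below applies it to an arbitrary skewed sample; the gamma-distributed target, sample size, and truncation order are all illustrative assumptions.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Gram-Charlier instance of Equation 20-5 with a Gaussian key density:
#   f(x) ~ phi(x) * sum_k c_k He_k(x),   with   c_k = E[He_k(X)] / k!
rng = np.random.default_rng(0)
x = rng.gamma(shape=4.0, size=200_000)   # an arbitrary skewed random variable
x = (x - x.mean()) / x.std()             # standardize: zero mean, unit variance

K = 6
c = np.array([He.hermeval(x, [0] * k + [1]).mean() / factorial(k)
              for k in range(K + 1)])    # c_0 = 1; c_1, c_2 vanish by standardization

grid = np.linspace(-5.0, 8.0, 2001)
phi = np.exp(-grid**2 / 2.0) / np.sqrt(2.0 * np.pi)   # Gaussian key pdf
f_approx = phi * He.hermeval(grid, c)

dx = grid[1] - grid[0]
print(f_approx.sum() * dx)               # the expansion still integrates to ~1
```

Because the correction terms are orthogonal to the key density, they redistribute probability mass (capturing skewness and kurtosis) without disturbing the normalization.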

Furthermore, the model response \eta can also be transformed into a summation of orthogonal polynomials as follows:

\eta(\mathbf{x}) \approx \sum_{j} d_j \, G_j(\boldsymbol{\xi})    (20–9)

where G_j is an algebraic function of the model-specific orthogonal polynomials Q_j(\boldsymbol{\xi}), and d_j is another set of weighting coefficients.
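To make Equation 20–9 concrete, the sketch below projects a stand-in response function onto the same Hermite basis; the form of the response and the truncation order are assumptions chosen for illustration, not a model from the text.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Project a stand-in model response eta(xi) onto the Hermite basis, as in
# Equation 20-9:  eta(xi) ~ sum_k d_k He_k(xi)
def eta(xi):
    return np.sin(xi) + 0.1 * xi**2        # hypothetical response function

nodes, w = He.hermegauss(30)               # Gauss rule for weight exp(-xi^2/2)
w = w / np.sqrt(2.0 * np.pi)

P = 9                                      # truncation order
d = np.array([np.sum(w * eta(nodes) * He.hermeval(nodes, [0] * k + [1]))
              / factorial(k) for k in range(P + 1)])

# The zeroth coefficient is the mean response: E[sin(xi)] = 0, E[0.1 xi^2] = 0.1
print(d[0])
print(He.hermeval(0.7, d), eta(0.7))       # surrogate vs. true response
```

Once the coefficients d_j are known, statistics of the response follow from the coefficients alone; for instance, the mean is simply the zeroth coefficient.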

With Equation 20–8 and Equation 20–9, both the model response and its probability density function are transformed into the space of \boldsymbol{\xi}. By careful selection of the type of orthogonal polynomials used in the transformations, the original integration given in Equation 20–3 may be represented by

E[\eta(\mathbf{x})] \approx \sum_{j} e_j \int \hat{G}_j(\xi_j) \, \hat{f}_{\mathrm{key}}(\xi_j) \, d\xi_j    (20–10)

where \hat{G}_j is an algebraic function of the model-specific orthogonal polynomials, based on G_j of Equation 20–9 and Q_j of Equation 20–8, and e_j are the combined weighting coefficients.

Thus, through polynomial chaos expansion techniques, we have successfully transformed the problem from one of solving the multi-dimensional integration in Equation 20–3 to solving multiple one-dimensional integrations, which can be addressed through the collocation method discussed in the next section.
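A minimal sketch of the resulting workload, assuming a Gaussian key density and a hypothetical response function: the expectation that required an N-dimensional integral in Equation 20–3 reduces to a single one-dimensional Gauss–Hermite quadrature, cross-checked here against brute-force Monte Carlo sampling.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Once the response and the density both live in the space of a standard
# normal xi, the expectation collapses to a one-dimensional quadrature.
def eta(xi):                               # hypothetical stand-in response
    return np.exp(0.3 * xi) * (1.0 + 0.5 * np.sin(xi))

nodes, w = He.hermegauss(20)               # 20 one-dimensional collocation points
w = w / np.sqrt(2.0 * np.pi)               # normalize to the standard normal pdf
e_quad = np.sum(w * eta(nodes))            # E[eta] from a single 1-D quadrature

# Brute-force Monte Carlo estimate of the same expectation, for comparison
rng = np.random.default_rng(1)
e_mc = eta(rng.standard_normal(2_000_000)).mean()
print(e_quad, e_mc)
```

Twenty model evaluations at the quadrature nodes reproduce what the sampling estimate needs millions of evaluations to approach.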

For the approximations leading to Equation 20–10 to hold, the transformation of the joint probability density function f(x_1, \ldots, x_N), as well as the transformation of the response variable \eta, must be based on orthogonal polynomials that are derived specifically from the probability density function of the fundamental variables \xi_j. The polynomial chaos expansion has the following properties:

  • Any square-integrable random variable can be approximated as closely as desired by a polynomial chaos expansion

  • The polynomial chaos expansion is convergent in the mean-square sense

  • The set of orthogonal polynomials is unique given the probability density function

  • The polynomial chaos expansion is unique in representing the random variable

For a given "standard" distribution function, there is a corresponding orthogonal expansion that meets these criteria. The correspondences between several important and common cases are given in Table 20.1: Summary of General Orthogonal Expansions . In general, problem-specific orthogonal polynomials can be derived by algorithms such as ORTHPOL [160], [161].

Table 20.1: Summary of General Orthogonal Expansions

Key Probability Density Function | Polynomial for Orthogonal Expansion | Support Range
Gaussian distribution            | Hermite polynomials                 | (−∞, +∞)
Gamma distribution               | Laguerre polynomials                | (0, +∞)
Beta/uniform distribution        | Jacobi/Legendre polynomials         | Bounded, such as (0, 1)
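The pairings in Table 20.1 can be spot-checked numerically. The sketch below verifies that cross-products of different-order polynomials vanish under the matching weight; the corresponding Gauss quadrature rule for each family absorbs its key density.

```python
import numpy as np
from numpy.polynomial import hermite_e, laguerre, legendre

# Numerical spot-check of Table 20.1: each polynomial family is orthogonal
# under the weight matching its key density -- Hermite with the Gaussian on
# (-inf, +inf), Laguerre with the exponential weight on (0, +inf), Legendre
# with the uniform density on a bounded interval.
def inner(gauss_rule, polyval, m, n):
    x, w = gauss_rule(20)                  # the Gauss rule absorbs the weight
    pm = polyval(x, [0] * m + [1])         # degree-m family member
    pn = polyval(x, [0] * n + [1])         # degree-n family member
    return np.sum(w * pm * pn)

for rule, val in [(hermite_e.hermegauss, hermite_e.hermeval),
                  (laguerre.laggauss, laguerre.lagval),
                  (legendre.leggauss, legendre.legval)]:
    print(inner(rule, val, 2, 3))          # cross-products vanish to round-off
```

The same check with equal orders returns the (nonzero) normalization constants, which is what makes the projection formulas in the preceding equations well defined.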