20.2.5. Determining the Error of the Approximation

After obtaining the coefficients of the expansions, we need to evaluate the accuracy of the current approximation to make sure that it meets the requirements of the user. To accomplish this, a few more runs of the model are required to allow comparison of the model results with the approximation results. First we define the deviation of the expansion for one of the model output variables to be:

$\Delta_j = y(\hat{x}_j) - \sum_{i=0}^{N} a_i \Psi_i(\hat{x}_j)$  (20–16)

where $y(\hat{x}_j)$ is the model evaluation for a set of parameter values $\hat{x}_j$, and the right-hand summation, with coefficients $a_i$ and basis polynomials $\Psi_i$, is the polynomial approximation of $y$ for the same values. The error of the approximation is defined as the product of the square of the deviation and the joint probability density function of the uncertain parameters evaluated at the collocation point:

$e_j = \Delta_j^2\, p(\hat{x}_j)$  (20–17)

In order to estimate the error of the approximation, we must use collocation points that were not used previously in the solution of the problem. Here again we want points that represent high probabilities, so we need to use a polynomial of a different order than the one used in the solution of the output expansion coefficients. We choose to obtain the points for the error estimation from the key polynomial of the next higher order than the one used in the solution. The main reason for this choice is to accommodate a software system that is designed to iteratively reduce the error by extending the order of the polynomial expansions as needed. In other words, if the error test fails, we will need the results of running the collocation points that correspond to the next order anyway. Therefore, if we are going to have to run the model, we might as well run it at points where the results can be re-used in case the error is not acceptable. To estimate the error, we use $L$ collocation points, where $L$ should be greater than the number of collocation points used in the solution in order to adequately test the approximation over the distribution. To this end, we somewhat arbitrarily define $L$ as the number of original collocation points plus the number of input parameters.
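As a rough illustration of this bookkeeping, the Python sketch below evaluates the deviation of Equation 20–16 and the pointwise error of Equation 20–17 at $L$ new check points. The toy model, the expansion coefficients, the standard-normal joint density, and the random choice of check points are all illustrative assumptions; the text itself takes the check points from the roots of the next-higher-order key polynomial.

```python
import numpy as np

def model(x):
    """Stand-in for a full model run at one parameter set x (illustrative)."""
    return np.exp(0.3 * x[0]) + 0.1 * x[1] ** 2

def expansion(x, a):
    """Polynomial approximation with fitted coefficients a (illustrative basis)."""
    return a[0] + a[1] * x[0] + a[2] * x[1] + a[3] * (x[0] ** 2 - 1.0)

def joint_pdf(x):
    """Joint pdf of the uncertain parameters, assumed independent standard normal."""
    return float(np.prod(np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)))

a = np.array([1.05, 0.32, 0.0, 0.05])  # coefficients from the earlier solution step (illustrative)
n_params = 2                            # number of uncertain input parameters
n_colloc_used = 4                       # collocation points used to solve for the coefficients
L = n_colloc_used + n_params            # number of new error-check points, as chosen in the text

# New check points; here drawn from the high-probability region of the joint pdf
# (the text obtains them from the next-higher-order key polynomial instead).
rng = np.random.default_rng(0)
check_points = rng.standard_normal((L, n_params))

deviations = np.array([model(x) - expansion(x, a) for x in check_points])   # Eq. (20-16)
errors = deviations ** 2 * np.array([joint_pdf(x) for x in check_points])   # Eq. (20-17)
```

If the error test described next fails, these same check-point model runs can be folded back in as collocation points when the expansion is extended to the next order, as discussed above.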

To test against our error criteria, we accumulate the error for each output variable over the new collocation points. The sum-square-root (SSR) error is then calculated as:

$\mathrm{SSR} = \sqrt{\dfrac{1}{p(\hat{x}_0)} \sum_{j=1}^{L} \Delta_j^2\, p(\hat{x}_j)}$  (20–18)

and the relative sum-square-root (RSSR) error is:

$\mathrm{RSSR} = \dfrac{\mathrm{SSR}}{\left|\bar{y}\right|}$  (20–19)

where $\bar{y}$ is the expected value of $y$, which in most cases will be equal to the first coefficient of the expansion, $a_0$. Notice that the joint probability density function evaluated at the anchor point, $p(\hat{x}_0)$, is used to normalize the SSR calculation. Since the SSR is usually dependent on the magnitude of the expected value, the RSSR is a more useful measure of the error. The degree of accuracy required will be specific to the problem and will most likely be specified by the user in the form of absolute and relative tolerances. Such tolerances may be different for different output variables.
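Continuing the sketch above, the SSR and RSSR of Equations 20–18 and 20–19 and a possible acceptance test could be computed as follows. The tolerance values and the rule of accepting the expansion when either tolerance is met are assumptions; the text only states that the user supplies absolute and relative tolerances, possibly per output variable.

```python
def ssr_rssr(deviations, pdf_at_points, pdf_at_anchor, expected_value):
    """SSR (Eq. 20-18) and RSSR (Eq. 20-19) for one output variable."""
    ssr = np.sqrt(np.sum(deviations ** 2 * pdf_at_points) / pdf_at_anchor)
    rssr = ssr / abs(expected_value)
    return ssr, rssr

pdf_at_points = np.array([joint_pdf(x) for x in check_points])
pdf_at_anchor = joint_pdf(np.zeros(n_params))  # anchor point: mode of the assumed standard-normal pdf
expected_value = a[0]                          # first expansion coefficient ~ expected value of the output

ssr, rssr = ssr_rssr(deviations, pdf_at_points, pdf_at_anchor, expected_value)

abs_tol, rel_tol = 1e-2, 1e-2                  # user-specified tolerances (illustrative values)
accepted = ssr <= abs_tol or rssr <= rel_tol   # assumed acceptance rule: either tolerance met
```

When the test is not passed, the expansion order is increased and the coefficients are re-solved using the enlarged set of collocation points, repeating the error estimation at the next order.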