Model Evaluation Report Metrics
Learn about the structure of your Model Evaluation Report (MER) and the key metrics that can assist you in analyzing the performance of a model.
Dataset
The Dataset section lists all the training data used to build your model, the distribution of that data between the training and test subsets performed during training, and the available postprocessings.
Unless you defined a custom data subset allocation (using PySimAI), the Ansys SimAI platform automatically uses 90% of your simulations for training and 10% for testing, picked at random:
- The training subset is composed of all the simulations learned by the AI model. A small error on the training subset is necessary for the AI model to perform well on new data. This property is called genericity: it can be seen as the coherence of your reference simulations with each other.
- The test subset is a dedicated sample of data used to assess the final performance of the model. It has never been learned by the model and is the real estimator of its genericity.
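Such a random 90/10 split can be sketched in plain Python. This is an illustrative sketch only; `split_dataset` is a hypothetical helper, not part of the SimAI or PySimAI API:

```python
import random

def split_dataset(simulations, train_fraction=0.9, seed=0):
    """Randomly split a list of simulations into training and test subsets.

    Illustrative only: the actual platform split is performed server-side.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible example
    shuffled = simulations[:]
    rng.shuffle(shuffled)
    n_train = round(train_fraction * len(shuffled))
    return shuffled[:n_train], shuffled[n_train:]

train, test = split_dataset([f"sim_{i}" for i in range(20)])
print(len(train), len(test))  # 18 2
```

With 20 simulations, 18 land in the training subset and 2 in the test subset; the test simulations are never shown to the model during training.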
Global Coefficients
- Metrics of the error in the coefficient predictions, computed on the test subset.
The performance of your model is evaluated based on the mean and the standard deviation of the important metrics calculated on your test subset.
The following two metrics should be as low as possible:
- L1 norm: Mean of the absolute value of the errors between the target global coefficients and the corresponding estimations.
- Relative L1 norm: Mean of the absolute percentage errors between the target global coefficients and the corresponding estimations.
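These two norms can be computed as follows (a minimal sketch; the function names are illustrative, and the relative norm assumes nonzero targets):

```python
def l1_norm(targets, estimations):
    """Mean of the absolute errors between targets and estimations."""
    return sum(abs(t - e) for t, e in zip(targets, estimations)) / len(targets)

def relative_l1_norm(targets, estimations):
    """Mean of the absolute percentage errors (assumes nonzero targets)."""
    return sum(abs((t - e) / t) for t, e in zip(targets, estimations)) / len(targets)

targets = [1.0, 2.0, 4.0]
estimations = [1.1, 1.8, 4.4]
print(round(l1_norm(targets, estimations), 4))           # 0.2333
print(round(relative_l1_norm(targets, estimations), 4))  # 0.1
```

Here every estimation is off by 10% of its target, so the relative L1 norm is 0.1 (10%) even though the absolute errors differ in magnitude.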
- Trend comparison plots / trend order plots.
- Squared correlation coefficient (R2): Measures the proportion of the variance in the target global coefficients that is explained by the corresponding estimations. It is used to assess the relationship between the model predictions and the actual solver values.
R2 equals 1 when the estimations are identical to the targets, indicating a strong predictive relationship.
R2 equals 0 when the estimations are all equal to the average of the targets, indicating that the model does not explain the variance of the targets.
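The definition above corresponds to the usual coefficient of determination, which can be sketched as follows (an illustrative helper, not a SimAI function):

```python
def r_squared(targets, estimations):
    """Coefficient of determination: 1 - residual variance / target variance."""
    mean_t = sum(targets) / len(targets)
    ss_res = sum((t - e) ** 2 for t, e in zip(targets, estimations))
    ss_tot = sum((t - mean_t) ** 2 for t in targets)
    return 1.0 - ss_res / ss_tot

targets = [1.0, 2.0, 3.0, 4.0]
print(r_squared(targets, targets))       # 1.0 (estimations identical to targets)
print(r_squared(targets, [2.5] * 4))     # 0.0 (estimations all equal to the target mean)
```

The two printed cases reproduce the boundary behaviors described above: identical estimations give R2 = 1, and constant estimations at the target mean give R2 = 0.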
- Kendall's tau correlation coefficient: Measures the strength and direction of the ordinal association between two ranked variables (here, between the target global coefficients and the corresponding estimations).
Kendall's tau equals 1 when the ranking of the estimations is the same as that of the targets. Conversely, it equals -1 when one ranking is the reverse of the other. The percentage of correctly ordered pairs of values among all possible pairs is (0.5 × (Kendall's tau + 1)) × 100. This metric is evaluated only on the test cases.
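A minimal sketch of Kendall's tau (the tie-free tau-a variant) and of the correctly-ordered-pairs formula above; `kendall_tau` is an illustrative helper, and the report itself may use a tie-handling variant:

```python
from itertools import combinations

def kendall_tau(targets, estimations):
    """Kendall's tau-a: (concordant - discordant) / total pairs (assumes no ties)."""
    pairs = list(combinations(range(len(targets)), 2))
    concordant = sum(
        1 for i, j in pairs
        if (targets[i] - targets[j]) * (estimations[i] - estimations[j]) > 0
    )
    discordant = len(pairs) - concordant  # no ties assumed
    return (concordant - discordant) / len(pairs)

targets = [1.0, 2.0, 3.0, 4.0]
estimations = [1.2, 1.9, 4.1, 3.8]  # last two values swapped in rank
tau = kendall_tau(targets, estimations)
print(round(tau, 4))                   # 0.6667
print(round(0.5 * (tau + 1) * 100, 2)) # 83.33 (% of correctly ordered pairs)
```

With one of the six possible pairs inverted, five pairs are concordant and one is discordant, giving tau = (5 - 1)/6 ≈ 0.67 and 83.33% correctly ordered pairs via the formula above.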
- Confidence score for the data in the test subset. The confidence score can be High or Low. For more information, see Confidence Score.
Surface
- Metrics of the error in the surface predictions computed on the test subset.
- Surface contours comparison plots for the variables selected as "Model Output" during model configuration.
- Evolution plots for the defined global coefficients.
Volume
- Metrics of the error in the volume predictions computed on the test subset.
- Cut-planes with contours comparison plots for the variables selected as "Model Output" during model configuration.