Monte Carlo Calculation Properties

This page describes the Monte Carlo Calculation Properties to set when creating an inverse simulation.

The Monte Carlo algorithm is a randomized algorithm that allows you to perform probabilistic simulations.

This algorithm is reliable, efficient, and suits most configurations, but depending on your configuration it can take a significant amount of time to compute.
Note: In a Monte Carlo inverse simulation, if the absorption value of a BRDF is negative, it is considered a null value.

Optimized Propagation

Note: The Optimized propagation algorithm is only compatible with Radiance sensors.

The Optimized Propagation algorithm consists of sending rays from each pixel of the sensor until one of the stopping criteria is reached.

  • None: the same number of passes is used for each pixel of the image (current and default algorithm).

    This algorithm may generate unbalanced results. Some pixels may have a good signal-to-noise ratio (SNR) whereas some other pixels may show too much noise.

  • Relative and Absolute: the algorithm adapts the number of passes per pixel to send the optimal number of rays according to the signal each pixel needs. As a result, the SNR is adequate in areas where pixels need more rays thus giving a balanced image.

    These two modes are based on the same principle, however the method of calculation is slightly different.

    • In Relative, the pixel's standard deviation is compared with a threshold value (an error margin expressed as a percentage) defined by the user. This value determines the error margin (standard deviation) tolerated. Rays are launched until the standard deviation of the pixel is lower than the defined threshold value. All the values of the map are then known with the same "relative" precision.

      The standard deviation is normalized (divided by the map average) and compared to the threshold value (percentage).

      The stopping criterion is computed on each pixel according to the following formula:

      σN / θ < σr

      σN: Estimate of the standard deviation relative to the number of rays (N).

      θ: Average signal of the map.

      σr: User-defined standard deviation (threshold value).

      The greater N is (the more rays are sent), the more σN (standard deviation) converges to the threshold value (σr). Here the standard deviation is normalized by the average (θ).

    • In Absolute, the pixel's value is simply compared with a fixed threshold value (a photometric value) defined by the user. This value determines the error margin (standard deviation) tolerated for each pixel of the map. Rays are launched until the standard deviation of the pixel is lower than the defined threshold value. All the values are thereby known with the same "absolute" precision.

      The standard deviation is simply compared to the threshold value.

      The stopping criterion is computed on each pixel according to the following formula:

      σN < σA

      σN: Estimate of the standard deviation relative to the number of rays (N).

      σA: User-defined standard deviation (threshold value).

      The greater N is (the more rays are sent), the more σN (standard deviation) converges to the threshold value (σA).

Number of standard passes before optimized passes: the minimum number of passes run without pass optimization. In a standard pass, all pixels emit rays; in an optimized pass, only the pixels whose standard deviation is higher than the defined threshold emit rays.
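The per-pixel stopping criteria described above can be sketched as follows. This is a minimal illustration with an assumed Gaussian noise model; the function names, parameters, and noise model are illustrative, not the Speos implementation:

```python
import random
import statistics

def sample_pixel(true_value, noise):
    """One Monte Carlo sample for a pixel (assumed Gaussian noise model)."""
    return random.gauss(true_value, noise)

def passes_until_converged(true_value, noise, threshold, mode="absolute",
                           map_average=1.0, min_passes=10, max_passes=100000):
    """Send samples until sigma_N, the estimated standard error of the pixel
    mean, drops below the threshold (Absolute), or until sigma_N / theta
    drops below it (Relative, normalized by the map average theta)."""
    samples = []
    for n in range(1, max_passes + 1):
        samples.append(sample_pixel(true_value, noise))
        if n < max(min_passes, 2):
            continue  # standard passes: no optimization yet
        sigma_n = statistics.stdev(samples) / n ** 0.5  # sigma_N after n rays
        if mode == "relative":
            sigma_n /= map_average  # normalize by theta
        if sigma_n < threshold:
            return n  # stopping criterion reached for this pixel
    return max_passes
```

A tighter threshold forces more passes for the same pixel, which is how the Relative and Absolute modes balance the SNR across the image.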

Dispersion

This parameter activates the dispersion calculation. In optical systems where dispersion phenomena can be neglected, deactivating this parameter cancels the colorimetric noise.

Note: This parameter should not be used with simulations involving diffusive materials.

For more details, refer to Dispersion.

Splitting

This option is useful when designing tail lamps.

Splitting allows you to split each propagated ray into several paths at its first impact after leaving the observer point. Further impacts along the split paths do not split again. This feature is primarily intended to provide faster noise reduction on scenes where the first surface seen from the observer is optically polished. An observer looking at a car rear lamp is a typical example of such a scene.

Note: The split is only done at the first impact. On an optically polished surface, the ray is split into two rays, weighted using Fresnel's law. On other surfaces there may be more or fewer split rays, depending on the surface model.

Without splitting, the propagation considers either the transmitted or the reflected ray (only one of them per impact). The choice (R or T) is made using Monte Carlo: the probability of reflection is the Fresnel reflection coefficient. Depending on the generated random number, the ray is either reflected or transmitted.

Close to normal incidence, the reflection probability is around 4%, which is low. As a result, when the reflection of the environment is what you want to observe, the image shows a lot of noise. The splitting algorithm removes this noise by computing the first interaction without using Monte Carlo.
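The 4% figure comes from the Fresnel power reflection coefficient at normal incidence for a typical air-glass interface (n ≈ 1.5). A quick check (illustrative, not Speos code):

```python
def fresnel_reflectance_normal(n1, n2):
    """Fresnel power reflection coefficient at normal incidence:
    R = ((n1 - n2) / (n1 + n2))**2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Air (n1 = 1.0) to glass (n2 = 1.5): only ~4% of the energy is reflected,
# so an unsplit Monte Carlo ray rarely samples the reflection path.
r = fresnel_reflectance_normal(1.0, 1.5)
print(round(r, 3))  # → 0.04
```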

Number of gathering rays per source

In inverse simulations, each ray is propagated from the observer point through the map and follows a random path through the system.

There is often a very small probability for a ray to hit a light source on its own. To increase this probability, new rays are generated at each impact on diffuse surfaces. These rays are called shadow rays. They are targeted at each light source in the system, and the program checks whether a direct hit on the source is possible. If not, nothing happens. If the program finds a hit, it computes the corresponding radiance to store in the map.

The Number of gathering rays per source parameter controls the number of shadow rays targeted at each source.
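The shadow-ray mechanism can be sketched as follows. Everything here is illustrative; the sampler, visibility test, and radiance callback are assumptions, not the Speos API:

```python
def gather_at_diffuse_impact(impact_point, source_samplers, rays_per_source,
                             is_visible, direct_radiance):
    """Illustrative shadow-ray gathering at one diffuse impact.

    For every light source, target `rays_per_source` shadow rays at a
    sampled point on that source; only unoccluded hits contribute
    radiance to the map (averaged over the rays per source)."""
    total = 0.0
    for sample_source in source_samplers:
        for _ in range(rays_per_source):
            point = sample_source()              # point on the source
            if is_visible(impact_point, point):  # occlusion test
                total += direct_radiance(impact_point, point) / rays_per_source
    return total

# Toy usage: one source, always visible, constant radiance of 2.0.
radiance = gather_at_diffuse_impact(
    impact_point=(0.0, 0.0, 0.0),
    source_samplers=[lambda: (0.0, 0.0, 1.0)],
    rays_per_source=4,
    is_visible=lambda p, s: True,
    direct_radiance=lambda p, s: 2.0,
)
print(radiance)  # → 2.0
```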



Maximum Gathering Error

With this parameter, you can reduce the simulation time for scenes with a large number of sources where each source illuminates only a small area of the scene. The value set here defines the level below which a source can be neglected. For instance, a value of 10 means that any source contributing less than 10% of the total illumination of all sources is not taken into consideration. The default value, 0, means that no approximation is made.

Note: Take precautions when using the layer operations tool of the Virtual Photometric Lab. For instance, if the maximum gathering error is set to 1% for a simulation and the flux of a source is then increased 10 times with the layer operations tool, the maximum gathering error effectively becomes 10% for this source.
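The neglect criterion can be sketched as follows, under the assumption that contributions are compared as fractions of the total source flux (the source names and the global comparison are illustrative; Speos may evaluate contributions per area):

```python
def sources_to_gather(source_fluxes, max_gathering_error_pct):
    """Return the sources worth gathering: those whose relative contribution
    to the total flux is at least the maximum gathering error.
    With 0 (the default), no source is neglected."""
    total = sum(source_fluxes.values())
    cutoff = max_gathering_error_pct / 100.0
    return {name for name, flux in source_fluxes.items()
            if flux / total >= cutoff}

# Hypothetical scene: three sources with very unequal contributions.
fluxes = {"headlamp": 90.0, "side_marker": 9.0, "indicator": 1.0}
print(sorted(sources_to_gather(fluxes, 10)))  # → ['headlamp']
print(sorted(sources_to_gather(fluxes, 0)))   # all sources kept
```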

Fast Transmission Gathering

Fast Transmission Gathering accelerates the simulation by neglecting the light refraction that occurs when light is transmitted through a transparent surface.

This option is useful when the transparent objects of a scene are flat enough to neglect the effect of refraction on the direction of a ray (windows, windshields, etc.).

Note: Fast Transmission Gathering does not apply to 3D Texture, Polarization Plate and Speos Component Import.

With Fast transmission gathering activated:

  • The result is correct only for flat glass (parallel faces).

  • The convergence of the results is faster.

  • The effect of the refraction on the direction is not taken into account.

  • Dispersion is allowed.
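Why flat glass with parallel faces is the safe case: by Snell's law, a ray exits such a slab with its original direction, shifted only laterally, so neglecting refraction changes little. A small check (illustrative, not Speos code):

```python
import math

def refract_angle(theta_in, n1, n2):
    """Snell's law: n1 * sin(theta_in) = n2 * sin(theta_out)."""
    return math.asin(n1 * math.sin(theta_in) / n2)

# Ray hitting a flat glass slab (parallel faces, n = 1.5) at 30 degrees:
theta_air = math.radians(30)
theta_glass = refract_angle(theta_air, 1.0, 1.5)   # entry face: air -> glass
theta_exit = refract_angle(theta_glass, 1.5, 1.0)  # exit face: glass -> air
print(round(math.degrees(theta_exit), 6))  # → 30.0 (direction unchanged)
```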

[Result comparison at equal simulation time: 5 minutes - 25 passes vs. 5 minutes - 35 passes]

Propagation Error Analysis

The Propagation error analysis option helps you identify root causes and decrease the percentage of errors by generating a *.lpf file from the simulation, dedicated to storing only the rays in error. Through this file, you can display the rays in error and the type of error in the 3D view so that you can correct your optical system.

Intermediate Save Frequency

During an inverse simulation, intermediate results can be saved. Setting an intermediate save frequency is useful when computing long simulations and you want to check intermediate results.

Setting this option to 0 means that the result is saved only at the end of the simulation. If the simulation is stopped without finishing the current pass, no result is available.

Note: A reduced number of save operations naturally increases the simulation performance.

In the case of high sensor sampling, the save operation can take up to half of the simulation time when the save frequency is set to 1.