9.3. Deep Pixel

9.3.1. Introduction

Traditional images store only the horizontal and vertical dimensions of the image and a color for each pixel. Deep pixel images additionally store metadata with each pixel that provides a fuller context for it. In particular, the metadata can describe which entities own the pixel, which entities are responsible for its color, and the numerical value and units that determined the pixel color. This per-pixel metadata defines a context for each colored pixel and enriches the usability of the resulting image in subsequent viewers.

9.3.2. Deep Pixel Image Generation

When EnSight renders a scene, the visible pixels are only part of the information generated. In addition to the RGB values, EnSight can capture an image that records which part each pixel belongs to (the pick buffer) and, if a part is colored by a variable, the actual variable value used in the color lookup at each pixel (the variable buffer). EnSight images that contain pick buffer and variable buffer information are referred to as enhanced, or deep pixel, images.

The ensight.render() Python call can be used to generate an enve.image object that contains these additional image layers. The enve.image object can save this information to disk only in the TIFF image format.
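A minimal sketch of generating and saving a deep pixel image from an EnSight Python session. The enhanced keyword, the 800x600 size, and the output filename are assumptions for illustration; the ensight module is only importable inside EnSight, so the sketch guards for that:

```python
# Sketch: generate a deep pixel image inside an EnSight Python session.
# The 'ensight' module only exists in EnSight; guard so the sketch is inert elsewhere.
deep_pixel_saved = False
try:
    import ensight
    # enhanced=True is assumed here to request the pick/variable buffer layers
    img = ensight.render(800, 600, num_samples=4, enhanced=True)
    img.save("scene_deep.tif")  # deep pixel data can only be saved as TIFF
    deep_pixel_saved = True
except ImportError:
    pass  # not running inside EnSight
```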


Note:  Reading deep pixel images back with the enve.image object is not yet supported. APIs exist on the enve.image object to allow the caller to access these additional channels of data: the variabledata, pickdata and metadata attributes return non-None/non-empty values for deep pixel images.


9.3.3. TIFF Image Format for Deep Pixel Images

Deep pixel TIFF images are saved as a single, three-page TIFF file. The first page is the traditional RGB color image that is saved for non-deep pixel data sources. The ImageDescription tag of this first page has unique content: a JSON-formatted, UTF-8 encoded string that contains part, variable, and unit information for the other pages. An example might look like this:

{ 
    "parts": [ 
        {  "name": "fluid", 
           "id": "1", 
           "colorby_var": "3.0" 
        }, 
        {  "name": "wall", 
           "id": "2", 
           "colorby_var": "3.0" 
        } 
    ], 
    "variables": [ 
        {  "name": "PRESSURE_Relative", 
           "id": "2", 
           "pal_id": "3", 
           "unit_dims": "M/LTT", 
           "unit_system_to_name": "SI", 
           "unit_label": "Pa" 
        }, 
        {  "name": "MASS_FLUX", 
           "id": "3", 
           "pal_id": "0", 
           "unit_dims": "M/LLT", 
           "unit_system_to_name": "SI", 
           "unit_label": "kg/(m²·s)" 
        }  
    ] 
}
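A short, self-contained sketch of parsing this metadata block with Python's standard json module. The string below simply repeats the example above; in practice it would be read from the ImageDescription tag of the first TIFF page:

```python
import json

# Example metadata as it might appear in the ImageDescription tag (page 1).
metadata_json = '''
{ "parts": [ {"name": "fluid", "id": "1", "colorby_var": "3.0"},
             {"name": "wall",  "id": "2", "colorby_var": "3.0"} ],
  "variables": [ {"name": "PRESSURE_Relative", "id": "2", "pal_id": "3",
                  "unit_dims": "M/LTT", "unit_system_to_name": "SI",
                  "unit_label": "Pa"},
                 {"name": "MASS_FLUX", "id": "3", "pal_id": "0",
                  "unit_dims": "M/LLT", "unit_system_to_name": "SI",
                  "unit_label": "kg/(m²·s)"} ] }
'''
meta = json.loads(metadata_json)

# Build lookup tables keyed by part id and palette id for later pixel decoding.
parts_by_id = {p["id"]: p for p in meta["parts"]}
vars_by_pal_id = {v["pal_id"]: v for v in meta["variables"]}
```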

The second page (pick buffer) is an RGBA image with 8 bits per sample and 4 samples per pixel. The R and G samples can be combined into a single 16-bit unsigned integer as (R + 256*G). This value is the part 'id' number in the JSON information block and can be used to map the pixel back to the source part name in the EnSight scene.

The third page (variable buffer) is a floating point image with 32 bits per sample and 1 sample per pixel. Each pixel value is the actual variable value computed by EnSight for that point on the part. To interpret this page, first look up the part id from the pick buffer and find the corresponding entry in the parts list of the JSON metadata. Next, match the integer portion of that part's colorby_var field to the pal_id field of an entry in the variables list. The variable object gives the name of the variable, its dimensionality (https://nexusdemo.ensight.com/docs/python/html/ENS_UNITSSchema.html), the unit system name, and a label that can be used when displaying the data.
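The lookup chain described above (pick buffer samples → part id → colorby_var → pal_id → variable entry) can be sketched as follows. The resolve_pixel helper and the trimmed metadata dictionary are hypothetical illustrations, not part of the EnSight API:

```python
# Hypothetical helper: map one deep pixel back to its part and variable metadata.
def resolve_pixel(r, g, variable_value, meta):
    """r, g: pick buffer R and G samples (0-255); variable_value: variable buffer float."""
    part_id = r + 256 * g  # combine the two 8-bit samples into the 16-bit part id
    part = next(p for p in meta["parts"] if int(p["id"]) == part_id)
    # The integer portion of colorby_var matches a variable's pal_id
    pal_id = str(int(float(part["colorby_var"])))
    var = next(v for v in meta["variables"] if v["pal_id"] == pal_id)
    return f'{part["name"]}: {var["name"]} = {variable_value} {var["unit_label"]}'

# Trimmed-down metadata in the shape of the JSON example above.
meta = {
    "parts": [{"name": "fluid", "id": "1", "colorby_var": "3.0"}],
    "variables": [{"name": "PRESSURE_Relative", "id": "2", "pal_id": "3",
                   "unit_label": "Pa"}],
}
print(resolve_pixel(1, 0, 101325.0, meta))  # fluid: PRESSURE_Relative = 101325.0 Pa
```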