OpenVINO-XAI Python API#

API#

To use functional APIs, use openvino_xai.api

openvino_xai.api

Functional API.

openvino_xai.api.insert_xai(model: Model, task: Task, explain_method: Method | None = None, target_layer: str | List[str] | None = None, embed_scaling: bool | None = True, **kwargs) Model[source]#

Inserts an XAI branch into the given model.

Usage:

model_xai = openvino_xai.insert_xai(model, task=Task.CLASSIFICATION)

Parameters:
  • model (ov.Model | torch.nn.Module) – Original model.

  • task (Task) – Type of the task: CLASSIFICATION or DETECTION.

  • explain_method (Method) – Explain method to use for model explanation.

  • target_layer (str | List[str]) – Name(s) of the target layer(s) (node(s)) after which the XAI branch will be inserted.

  • embed_scaling (bool) – If set to True, the saliency map scaling operation (to the 0-255 range) is embedded in the model.
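
A minimal white-box insertion sketch (the model path is illustrative; ov.Core and ov.save_model are assumed to be available in the installed OpenVINO version):

>>> import openvino as ov
>>> import openvino_xai
>>> from openvino_xai.common import Task
>>> model = ov.Core().read_model("classification_model.xml")  # hypothetical IR path
>>> model_xai = openvino_xai.insert_xai(model, task=Task.CLASSIFICATION)
>>> ov.save_model(model_xai, "classification_model_xai.xml")  # optionally persist the XAI-enabled model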

Common#

Common parameters and utils

openvino_xai.common

Common parameters and utils.

class openvino_xai.common.Method(value)[source]#

Bases: Enum

Enum representing the different XAI methods:

Contains the following values:

ACTIVATIONMAP - ActivationMap method.
RECIPROCAM - ReciproCAM method.
VITRECIPROCAM - VITReciproCAM method.
DETCLASSPROBABILITYMAP - DetClassProbabilityMap method.
RISE - RISE method.
AISE - AISE method.

class openvino_xai.common.Task(value)[source]#

Bases: Enum

Enum representing the different task types:

Contains the following values:

CLASSIFICATION - Classification task.
DETECTION - Detection task.

openvino_xai.common.has_xai(model: Model | Module) bool[source]#

Checks whether the model contains an XAI branch.

Parameters:

model (ov.Model | torch.nn.Module) – Input model to inspect.

Returns:

True if the model has an XAI branch and a saliency_map output, False otherwise.

openvino_xai.common.scaling(saliency_map: ndarray, cast_to_uint8: bool = True, max_value: int = 255) ndarray[source]#

Scales saliency maps to the [0, max_value] range.
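
A small usage sketch for both utilities (model_xai refers to the insert_xai example above; the raw map is illustrative):

>>> import numpy as np
>>> from openvino_xai.common import has_xai, scaling
>>> has_xai(model_xai)
True
>>> raw_map = np.random.rand(7, 7).astype(np.float32)  # hypothetical unscaled saliency map
>>> scaled_map = scaling(raw_map, cast_to_uint8=True, max_value=255)  # uint8 map in [0, 255]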

Explanation#

To explain the model (i.e., to get saliency maps), use openvino_xai.explainer

Interface for getting explanation.

class openvino_xai.explainer.ExplainMode(value)[source]#

Bases: Enum

Enum describes different explain modes.

Contains the following values:

WHITEBOX - The model is explained in white-box mode, i.e. the XAI branch is inserted into the model graph.
BLACKBOX - The model is explained in black-box mode.
AUTO - The model is explained in white-box mode first; if that fails, black-box mode is used.

class openvino_xai.explainer.Explainer(model: ~openvino.runtime.ie_api.Model | str | ~os.PathLike, task: ~openvino_xai.common.parameters.Task, preprocess_fn: ~typing.Callable[[~numpy.ndarray], ~numpy.ndarray] = <openvino_xai.common.utils.IdentityPreprocessFN object>, postprocess_fn: ~typing.Callable[[~typing.Mapping], ~numpy.ndarray] | None = None, explain_mode: ~openvino_xai.explainer.explainer.ExplainMode = ExplainMode.AUTO, explain_method: ~openvino_xai.common.parameters.Method | None = None, target_layer: str | ~typing.List[str] | None = None, embed_scaling: bool | None = True, device_name: str = 'CPU', **kwargs)[source]#

Bases: object

Explainer creates methods and uses them to generate explanations.

Usage:
>>> explainer = Explainer("model.xml", Task.CLASSIFICATION)
>>> explanation = explainer(data)
Parameters:
  • model (ov.Model | str | PathLike) – Original model object, OpenVINO IR file (.xml) or ONNX file (.onnx).

  • task (Task) – Type of the task: CLASSIFICATION or DETECTION.

  • preprocess_fn (Callable[[np.ndarray], np.ndarray]) – Preprocessing function, identity function by default (assumes input images are already preprocessed by the user).

  • postprocess_fn (Callable[[Mapping], np.ndarray]) – Postprocessing function, required for black-box mode.

  • explain_mode (ExplainMode) – Explain mode.

  • explain_method (Method) – Explain method to use for model explanation.

  • target_layer (str | List[str]) – Name(s) of the target layer(s) (node(s)) after which the XAI branch will be inserted.

  • embed_scaling (bool) – If set to True, the saliency map scaling operation (to the 0-255 range) is embedded in the model.

  • device_name (str) – Device type name.
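
A construction sketch with a simple resize-based preprocess_fn (the 224x224 input size, the OpenCV-based preprocessing, and the model path are assumptions, not requirements of the API):

>>> import cv2
>>> import numpy as np
>>> from openvino_xai.common import Task
>>> from openvino_xai.explainer import Explainer, ExplainMode
>>> def preprocess_fn(image: np.ndarray) -> np.ndarray:
...     image = cv2.resize(image, (224, 224))  # assumed model input size
...     return np.expand_dims(image, 0)        # add batch dimension
>>> explainer = Explainer(
...     model="model.xml",                     # hypothetical IR path
...     task=Task.CLASSIFICATION,
...     preprocess_fn=preprocess_fn,
...     explain_mode=ExplainMode.WHITEBOX,
... )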

__call__(data: ndarray, targets: ndarray | List[int | str] | int | str = -1, original_input_image: ndarray | None = None, label_names: List[str] | None = None, output_size: Tuple[int, int] | None = None, scaling: bool = False, resize: bool = True, colormap: bool = True, overlay: bool = False, overlay_weight: float = 0.5, overlay_prediction: bool = True, **kwargs) Explanation[source]#

Call self as a function.

create_method(explain_mode: ExplainMode, task: Task) MethodBase[source]#

Creates an XAI method for the given explain mode and task.

Parameters:
  • explain_mode (ExplainMode) – Explain mode.

  • task (Task) – Type of the task: CLASSIFICATION or DETECTION.

explain(data: ndarray, targets: ndarray | List[int | str] | int | str = -1, original_input_image: ndarray | None = None, label_names: List[str] | None = None, output_size: Tuple[int, int] | None = None, scaling: bool = False, resize: bool = True, colormap: bool = True, overlay: bool = False, overlay_weight: float = 0.5, overlay_prediction: bool = True, **kwargs) Explanation[source]#

Generates the explanation result.

Parameters:
  • data (np.ndarray) – Input image.

  • targets (np.ndarray | List[int | str] | int | str) – List of custom labels to explain, optional. Can be a list of integer indices (int) or a list of names (str) from label_names. Defaults to -1, which means all class labels.

  • label_names (List[str] | None) – List of all label names.

  • output_size (Tuple[int, int]) – Output size used for the resize operation.

  • scaling (bool) – If True, scaling saliency map into [0, 255] range (filling the whole range). By default, scaling is embedded into the IR model. Therefore, scaling=False here by default.

  • resize (bool) – If True, resize saliency map to the input image size.

  • colormap (bool) – If True, apply colormap to the grayscale saliency map.

  • overlay (bool) – If True, generate overlay of the saliency map over the input image.

  • overlay_weight (float) – Weight of the saliency map when overlaying the input data with the saliency map.

  • overlay_prediction (bool) – If True, plot model prediction over the overlay.
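
A call sketch that requests overlaid saliency maps for two classes (the image path and class indices are illustrative; explainer and preprocess_fn come from the construction example above):

>>> image = cv2.imread("image.jpg")  # hypothetical input image, HWC
>>> explanation = explainer(
...     image,
...     targets=[11, 14],            # hypothetical class indices
...     overlay=True,
...     overlay_weight=0.5,
... )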

model_forward(x: ndarray, preprocess: bool = True) Mapping[source]#

Forward pass of the compiled model.

class openvino_xai.explainer.Explanation(saliency_map: ndarray | Dict[int | str, ndarray], targets: ndarray | List[int | str] | int | str, task: Task, label_names: List[str] | None = None, predictions: Dict[int, Prediction] | None = None)[source]#

Bases: object

Explanation selects the target saliency maps and holds them along with their layout.

Parameters:
  • saliency_map (np.ndarray | Dict[int | str, np.ndarray]) – Raw saliency map, as a numpy array or as a dict.

  • targets (np.ndarray | List[int | str] | int | str) – List of custom labels to explain, optional. Can be a list of integer indices (int) or a list of names (str) from label_names.

  • task (Task) – Type of the task: CLASSIFICATION or DETECTION.

  • label_names (List[str] | None) – List of all label names.

  • predictions (Dict[int, Prediction] | None) – Per-target model prediction (available only for black-box methods).

plot(targets: ndarray | List[int | str] | None = None, backend: str = 'matplotlib', max_num_plots: int = 24, num_columns: int = 4) None[source]#

Plots saliency maps using the specified backend.

This function plots available saliency maps using the specified backend. Targets to plot can be specified by passing a list of target class indices or names. If a provided class is not available among the saliency maps, it is omitted.

Parameters:
  • targets (np.ndarray | List[int | str] | None) – A list or array of target class indices or names to plot. By default, it is None, and all available saliency maps are plotted.

  • backend (str) – The plotting backend to use. Can be either 'matplotlib' (recommended for Jupyter) or 'cv' (recommended for Python scripts). Default is 'matplotlib'.

  • max_num_plots (int) – Max number of images to plot. Default is 24 to avoid memory issues.

  • num_columns (int) – Number of columns in the saliency maps visualization grid for the matplotlib backend.
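
A plotting sketch (targets are illustrative; explanation comes from the explain example above):

>>> explanation.plot()                                     # plot all available maps
>>> explanation.plot(targets=[11, 14], backend="cv")       # plot selected targets with OpenCV
>>> explanation.plot(backend="matplotlib", num_columns=2)  # 2-column matplotlib grid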

save(dir_path: Path | str, prefix: str = '', postfix: str = '', confidence_scores: Dict[int, float] | None = None) None[source]#

Dumps saliency map images to the specified directory.

Allows flexible file naming with a prefix and postfix: {prefix} + target_id + {postfix}.jpg

Also allows adding confidence scores to the file names: {prefix} + target_id + {postfix} + confidence.jpg

Usage:

save(output_dir) -> aeroplane.jpg
save(output_dir, prefix="image_name_target_") -> image_name_target_aeroplane.jpg
save(output_dir, postfix="_class_map") -> aeroplane_class_map.jpg
save(output_dir, prefix="image_name_", postfix="_conf_", confidence_scores=scores) -> image_name_aeroplane_conf_0.85.jpg

Parameters:
  • dir_path (Path | str) – The directory path where the saliency maps will be saved.

  • prefix (str) – Optional prefix for the saliency map names. Default is an empty string.

  • postfix (str) – Optional postfix for the saliency map names. Default is an empty string.

  • confidence_scores (Dict[int, float] | None) – Dict with confidence scores for each class index. Default is None.

property saliency_map: Dict[int | str, ndarray]#

Saliency map as a dict {target_id: np.ndarray}.

property shape#

Shape of the saliency map.

property targets#

Explained targets.

class openvino_xai.explainer.Layout(value)[source]#

Bases: Enum

Enum describes different saliency map layouts.

Saliency map can have the following layout:

ONE_MAP_PER_IMAGE_GRAY - BHW - one map per image
ONE_MAP_PER_IMAGE_COLOR - BHWC - one map per image, colormapped
MULTIPLE_MAPS_PER_IMAGE_GRAY - BNHW - multiple maps per image
MULTIPLE_MAPS_PER_IMAGE_COLOR - BNHWC - multiple maps per image, colormapped

class openvino_xai.explainer.Visualizer[source]#

Bases: object

Visualizer implements post-processing for the saliency maps in explanation.

__call__(explanation: Explanation, original_input_image: ndarray | None = None, output_size: Tuple[int, int] | None = None, scaling: bool = False, resize: bool = True, colormap: bool = True, overlay: bool = False, overlay_weight: float = 0.5, overlay_prediction: bool = True) Explanation[source]#

Call self as a function.

visualize(explanation: Explanation, original_input_image: ndarray | None = None, output_size: Tuple[int, int] | None = None, scaling: bool = False, resize: bool = True, colormap: bool = True, overlay: bool = False, overlay_weight: float = 0.5, overlay_prediction: bool = True) Explanation[source]#

Saliency map post-processing method. Applies the post-processing operations in a defined order, depending on the provided parameters. Returns an Explanation object with the processed saliency map, which can have one of the Layout layouts.

Parameters:
  • explanation (Explanation) – Explanation result object.

  • original_input_image (np.ndarray) – Original input image.

  • output_size (Tuple[int, int]) – Output size used for resize operation.

  • scaling (bool) – If True, scaling saliency map into [0, 255] range (filling the whole range). By default, scaling is embedded into the IR model. Therefore, scaling=False here by default.

  • resize (bool) – If True, resize saliency map to the input image size.

  • colormap (bool) – If True, apply colormap to the grayscale saliency map.

  • overlay (bool) – If True, generate overlay of the saliency map over the input image.

  • overlay_weight (float) – Weight of the saliency map when overlaying the input data with the saliency map.

  • overlay_prediction (bool) – If True, plot model prediction over the overlay.
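
A post-processing sketch that applies resize, colormap, and overlay to a raw explanation (the raw explanation is obtained here with processing disabled; image and explainer come from the Explainer examples above):

>>> from openvino_xai.explainer import Visualizer
>>> raw_explanation = explainer(image, targets=[11], resize=False, colormap=False)
>>> visualizer = Visualizer()
>>> processed = visualizer(
...     raw_explanation,
...     original_input_image=image,
...     overlay=True,
...     overlay_weight=0.4,
... )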

openvino_xai.explainer.colormap(saliency_map: ndarray, colormap_type: int = 2) ndarray[source]#

Applies colormap to the saliency map.

openvino_xai.explainer.overlay(saliency_map: ndarray, input_image: ndarray, overlay_weight: float = 0.5, cast_to_uint8: bool = True) ndarray[source]#

Overlays the saliency map on the original image.

openvino_xai.explainer.resize(saliency_map: ndarray, output_size: Tuple[int, int]) ndarray[source]#

Resize saliency map.

Methods#

To access/modify implemented XAI methods, use openvino_xai.methods

XAI algorithms.

class openvino_xai.methods.AISEClassification(model: Model | None = None, *args, **kwargs)[source]#

Bases: AISEBase

AISE for classification models.

postprocess_fn is expected to return one container with scores, with a batch dimension equal to one.

Parameters:
  • model (ov.Model) – OpenVINO model.

  • postprocess_fn (Callable[[OVDict], np.ndarray]) – Post-processing function that extracts scores from the IR model output.

  • preprocess_fn (Callable[[np.ndarray], np.ndarray]) – Pre-processing function, identity function by default (assumes input images are already preprocessed by the user).

  • device_name (str) – Device type name.

  • prepare_model (bool) – Whether to load (compile) the model prior to inference.

generate_saliency_map(data: ndarray, target_indices: List[int] | None, preset: Preset = Preset.BALANCE, num_iterations_per_kernel: int | None = None, kernel_widths: List[float] | ndarray | None = None, solver_epsilon: float = 0.1, locally_biased: bool = False, scale_output: bool = True) Dict[int, ndarray][source]#

Generates the inference result of the AISE algorithm. Optimized for per-class saliency map generation. Not efficient for a large number of classes.

Parameters:
  • data (np.ndarray) – Input image.

  • target_indices (List[int]) – List of target indices to explain.

  • preset (Preset) – Speed-Quality preset, defines predefined configurations that manage the speed-quality tradeoff.

  • num_iterations_per_kernel (int) – Number of iterations per kernel, defines compute budget.

  • kernel_widths (List[float] | np.ndarray) – Kernel bandwidths.

  • solver_epsilon (float) – Solver epsilon of DIRECT optimizer.

  • locally_biased (bool) – Locally biased flag of DIRECT optimizer.

  • scale_output (bool) – Whether to scale output or not.
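
A direct black-box usage sketch (the postprocess_fn below is hypothetical and assumes the model's first output holds class scores; model, image, and preprocess_fn come from the earlier examples):

>>> import numpy as np
>>> from openvino_xai.methods import AISEClassification
>>> def postprocess_fn(output) -> np.ndarray:
...     return output[0]                       # hypothetical: scores in the first model output
>>> method = AISEClassification(
...     model=model,
...     postprocess_fn=postprocess_fn,
...     preprocess_fn=preprocess_fn,
... )
>>> saliency_maps = method.generate_saliency_map(
...     image,
...     target_indices=[11],                   # hypothetical class index
...     num_iterations_per_kernel=30,          # illustrative compute budget
... )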

class openvino_xai.methods.AISEDetection(model: Model | None = None, *args, **kwargs)[source]#

Bases: AISEBase

AISE for detection models.

postprocess_fn is expected to return three containers: boxes (format: [x1, y1, x2, y2]), scores, and labels, with a batch dimension equal to one.

Parameters:
  • model (ov.Model) – OpenVINO model.

  • postprocess_fn (Callable[[OVDict], np.ndarray]) – Post-processing function that extracts scores from the IR model output.

  • preprocess_fn (Callable[[np.ndarray], np.ndarray]) – Pre-processing function, identity function by default (assumes input images are already preprocessed by the user).

  • device_name (str) – Device type name.

  • prepare_model (bool) – Whether to load (compile) the model prior to inference.

generate_saliency_map(data: ndarray, target_indices: List[int] | None, preset: Preset = Preset.BALANCE, num_iterations_per_kernel: int | None = None, divisors: List[float] | ndarray | None = None, solver_epsilon: float = 0.05, locally_biased: bool = False, scale_output: bool = True) Dict[int, ndarray][source]#

Generates the inference result of the AISE algorithm. Optimized for per-class saliency map generation. Not efficient for a large number of classes.

Parameters:
  • data (np.ndarray) – Input image.

  • target_indices (List[int]) – List of target indices to explain.

  • preset (Preset) – Speed-Quality preset, defines predefined configurations that manage the speed-quality tradeoff.

  • num_iterations_per_kernel (int) – Number of iterations per kernel, defines compute budget.

  • divisors (List[float] | np.ndarray) – List of divisors, used to derive kernel widths in an adaptive manner.

  • solver_epsilon (float) – Solver epsilon of DIRECT optimizer.

  • locally_biased (bool) – Locally biased flag of DIRECT optimizer.

  • scale_output (bool) – Whether to scale output or not.
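
A hypothetical postprocess_fn for a detector whose outputs are named "boxes", "scores", and "labels" (the output names and shapes are assumptions; adapt them to the actual model while keeping the documented (boxes, scores, labels) return order; detection_model is a hypothetical ov.Model, preprocess_fn comes from the earlier examples):

>>> from openvino_xai.methods import AISEDetection
>>> def detection_postprocess_fn(output):
...     boxes = output["boxes"]    # hypothetical name; shape (1, N, 4), format [x1, y1, x2, y2]
...     scores = output["scores"]  # hypothetical name; shape (1, N)
...     labels = output["labels"]  # hypothetical name; shape (1, N)
...     return boxes, scores, labels
>>> detection_method = AISEDetection(
...     model=detection_model,
...     postprocess_fn=detection_postprocess_fn,
...     preprocess_fn=preprocess_fn,
... )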

class openvino_xai.methods.ActivationMap(model: Model | Module | None = None, *args, **kwargs)[source]#

Bases: WhiteBoxMethod

Implements ActivationMap.

Parameters:
  • model (ov.Model) – OpenVINO model.

  • preprocess_fn (Callable[[np.ndarray], np.ndarray]) – Preprocessing function, identity function by default (assumes input images are already preprocessed by the user).

  • target_layer (str) – Name of the target layer (node) after which the XAI branch will be inserted.

  • embed_scaling (bool) – Whether to scale output or not.

  • device_name (str) – Device type name.

  • prepare_model (bool) – Whether to load (compile) the model prior to inference.

generate_xai_branch() Node[source]#

Implements ActivationMap XAI algorithm.

class openvino_xai.methods.DetClassProbabilityMap(model: Model | None = None, *args, **kwargs)[source]#

Bases: WhiteBoxMethod

Implements DetClassProbabilityMap, used for single-stage detectors, e.g. SSD, YOLOX or ATSS.

Parameters:
  • model (ov.Model) – OpenVINO model.

  • preprocess_fn (Callable[[np.ndarray], np.ndarray]) – Preprocessing function, identity function by default (assumes input images are already preprocessed by the user).

  • target_layer (str) – Name of the target layer (node) after which the XAI branch will be inserted.

  • embed_scaling (bool) – Whether to scale output or not.

  • device_name (str) – Device type name.

  • num_anchors (List[int]) – Number of anchors per scale.

  • saliency_map_size (Tuple[int, int] | List[int]) – Size of the output saliency map.

  • prepare_model (bool) – Whether to load (compile) the model prior to inference.

generate_xai_branch() Node[source]#

Implements DetClassProbabilityMap XAI algorithm.

class openvino_xai.methods.FeatureMapPerturbationBase(model: Model | None = None, *args, **kwargs)[source]#

Bases: WhiteBoxMethod

Base class for FeatureMapPerturbation-based methods.

Parameters:
  • model (ov.Model) – OpenVINO model.

  • preprocess_fn (Callable[[np.ndarray], np.ndarray]) – Preprocessing function, identity function by default (assumes input images are already preprocessed by the user).

  • target_layer (str) – Name of the target layer (node) after which the XAI branch will be inserted.

  • embed_scaling (bool) – Whether to scale output or not.

  • device_name (str) – Device type name.

generate_xai_branch() Node[source]#

Implements FeatureMapPerturbation-based XAI method.

class openvino_xai.methods.RISE(model: Model | None = None, *args, **kwargs)[source]#

Bases: BlackBoxXAIMethod

RISE explains classification models in black-box mode using ‘RISE: Randomized Input Sampling for Explanation of Black-box Models’ paper (https://arxiv.org/abs/1806.07421).

postprocess_fn is expected to return one container with scores, with a batch dimension equal to one.

Parameters:
  • model (ov.Model) – OpenVINO model.

  • postprocess_fn (Callable[[Mapping], np.ndarray]) – Post-processing function that extracts scores from the IR model output.

  • preprocess_fn (Callable[[np.ndarray], np.ndarray]) – Pre-processing function, identity function by default (assumes input images are already preprocessed by the user).

  • device_name (str) – Device type name.

  • prepare_model (bool) – Whether to load (compile) the model prior to inference.

generate_saliency_map(data: ndarray, target_indices: List[int] | None = None, preset: Preset = Preset.BALANCE, num_masks: int | None = None, num_cells: int | None = None, prob: float = 0.5, seed: int = 0, scale_output: bool = True) ndarray | Dict[int, ndarray][source]#

Generates inference result of the RISE algorithm.

Parameters:
  • data (np.ndarray) – Input image.

  • target_indices (List[int]) – List of target indices to explain.

  • preset (Preset) – Speed-Quality preset, defines predefined configurations that manage speed-quality tradeoff.

  • num_masks (int) – Number of generated masks to aggregate.

  • num_cells (int) – Number of cells for low-dimensional RISE random mask that later will be up-scaled to the model input size.

  • prob (float) – With prob p, a low-res cell is set to 1; otherwise, it’s 0. Default: 0.5.

  • seed (int) – Seed for random mask generation.

  • scale_output (bool) – Whether to scale output or not.
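
A black-box sketch that selects RISE explicitly through the Explainer (the model path is hypothetical; preprocess_fn, postprocess_fn, and image come from the earlier examples; the example assumes extra keyword arguments of the call are forwarded to the method):

>>> from openvino_xai.common import Task, Method
>>> from openvino_xai.explainer import Explainer, ExplainMode
>>> explainer = Explainer(
...     model="model.xml",                     # hypothetical IR path
...     task=Task.CLASSIFICATION,
...     preprocess_fn=preprocess_fn,
...     postprocess_fn=postprocess_fn,
...     explain_mode=ExplainMode.BLACKBOX,
...     explain_method=Method.RISE,
... )
>>> explanation = explainer(image, targets=[11], num_masks=1000)  # num_masks forwarded to RISE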

class openvino_xai.methods.ReciproCAM(model: Model | Module | None = None, *args, **kwargs)[source]#

Bases: FeatureMapPerturbationBase

Implements Recipro-CAM for CNN models.

Parameters:
  • model (ov.Model) – OpenVINO model.

  • preprocess_fn (Callable[[np.ndarray], np.ndarray]) – Preprocessing function, identity function by default (assumes input images are already preprocessed by the user).

  • target_layer (str) – Name of the target layer (node) after which the XAI branch will be inserted.

  • embed_scaling (bool) – Whether to scale output or not.

  • device_name (str) – Device type name.

  • prepare_model (bool) – Whether to load (compile) the model prior to inference.
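
A direct white-box sketch (model, image, and preprocess_fn come from the earlier examples; target_layer is omitted here, which assumes the method can locate a suitable node on its own):

>>> from openvino_xai.methods import ReciproCAM
>>> method = ReciproCAM(model=model, preprocess_fn=preprocess_fn)
>>> saliency_map = method.generate_saliency_map(image)  # white-box saliency map generation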

class openvino_xai.methods.ViTReciproCAM(model: Model | Module | None = None, *args, **kwargs)[source]#

Bases: FeatureMapPerturbationBase

Implements ViTRecipro-CAM for transformer models.

Parameters:
  • model (ov.Model) – OpenVINO model.

  • preprocess_fn (Callable[[np.ndarray], np.ndarray]) – Preprocessing function, identity function by default (assumes input images are already preprocessed by the user).

  • target_layer (str) – Name of the target layer (node) after which the XAI branch will be inserted.

  • embed_scaling (bool) – Whether to scale output or not.

  • device_name (str) – Device type name.

  • use_gaussian (bool) – Whether to use Gaussian for mask generation or not.

  • cls_token (bool) – Whether to use cls token for mosaic prediction or not.

  • final_norm (bool) – Whether the model has normalization after the last transformer block.

  • k (int) – Index of the transformer block (counting from the head) before which the XAI branch will be inserted, 1-indexed.

  • prepare_model (bool) – Whether to load (compile) the model prior to inference.

class openvino_xai.methods.WhiteBoxMethod(model: Model | None = None, *args, **kwargs)[source]#

Bases: MethodBase[Model, CompiledModel]

Base class for white-box XAI methods.

Parameters:
  • model (ov.Model) – OpenVINO model.

  • preprocess_fn (Callable[[np.ndarray], np.ndarray]) – Preprocessing function, identity function by default (assumes input images are already preprocessed by the user).

  • embed_scaling (bool) – Whether to scale output or not.

  • device_name (str) – Device type name.

generate_saliency_map(data: ndarray, *args, **kwargs) ndarray[source]#

Saliency map generation. White-box implementation.

abstract generate_xai_branch()[source]#

Implements specific XAI algorithm.

prepare_model(load_model: bool = True) Model[source]#

Model preparation steps.