otx.algorithms.visual_prompting.adapters.openvino.model_wrappers.openvino_models#

OpenVINO model wrappers for OTX Visual Prompting.

Classes

Decoder(model_adapter[, configuration, preload])

Decoder class of the OpenVINO model wrapper for visual prompting.

ImageEncoder(inference_adapter[, ...])

Image encoder class of the OpenVINO model wrapper for visual prompting.

class otx.algorithms.visual_prompting.adapters.openvino.model_wrappers.openvino_models.Decoder(model_adapter: InferenceAdapter, configuration: dict | None = None, preload: bool = False)[source]#

Bases: SegmentationModel

Decoder class of the OpenVINO model wrapper for visual prompting.

Image model constructor

It extends the Model constructor.

Parameters:
  • model_adapter (InferenceAdapter) – allows working with the specified executor

  • configuration (dict, optional) – values for parameters accepted by the specific wrapper (confidence_threshold, labels, etc.), which are set as data attributes

  • preload (bool, optional) – a flag indicating whether the model is loaded to the device during initialization. If preload=False, the model must be loaded via the load method before inference

Raises:

WrapperError – if the wrapper fails to define appropriate inputs for images
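
A minimal construction sketch, assuming the OpenVINO Model API adapter helpers OpenvinoAdapter and create_core (their import path differs between Model API releases) and a hypothetical path to an exported decoder IR:

    from openvino.model_zoo.model_api.adapters import OpenvinoAdapter, create_core
    from otx.algorithms.visual_prompting.adapters.openvino.model_wrappers.openvino_models import Decoder

    # "decoder.xml" is a hypothetical path to the exported visual prompting decoder IR.
    adapter = OpenvinoAdapter(create_core(), "decoder.xml", device="CPU")
    decoder = Decoder(adapter, configuration={}, preload=True)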

classmethod parameters()[source]#

Defines the description and type of configurable data parameters for the wrapper.

See types.py for the available data parameter types. For each parameter, the type, default value, and description must be provided.

An example of a possible data parameter:

    'confidence_threshold': NumericalValue(
        default_value=0.5, description="Threshold value for detection box confidence"
    )

The method must be implemented in each specific inherited wrapper.

Returns:

the dictionary with defined wrapper data parameters
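
A minimal sketch of how an inherited wrapper typically provides this method, assuming the NumericalValue descriptor and SegmentationModel base from the OpenVINO Model API (import paths vary between releases); the wrapper name is hypothetical:

    from openvino.model_zoo.model_api.models import SegmentationModel
    from openvino.model_zoo.model_api.models.types import NumericalValue

    class MyPromptWrapper(SegmentationModel):  # hypothetical wrapper
        @classmethod
        def parameters(cls):
            # Start from the base class parameters and add wrapper-specific entries.
            parameters = super().parameters()
            parameters.update(
                {
                    "confidence_threshold": NumericalValue(
                        default_value=0.5,
                        description="Threshold value for detection box confidence",
                    ),
                }
            )
            return parameters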

postprocess(outputs: Dict[str, ndarray], meta: Dict[str, Any]) Tuple[ndarray, ndarray][source]#

Postprocess to convert the soft prediction to a hard prediction.

Parameters:
  • outputs (Dict[str, np.ndarray]) – The output of the model.

  • meta (Dict[str, Any]) – Contains the label and the original size.

Returns:

hard_prediction (np.ndarray): The hard prediction.

soft_prediction (np.ndarray): The soft prediction.

Return type:

Tuple[np.ndarray, np.ndarray]
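
A hedged usage sketch, continuing the decoder constructed above; the meta key names ("label", "original_size") are assumptions based on the description, and raw_outputs stands for the Dict[str, np.ndarray] produced by the decoder inference call:

    import numpy as np

    # Assumed meta layout: a label and the original image size (height, width).
    meta = {"label": 1, "original_size": np.array([720, 1280])}

    # raw_outputs: Dict[str, np.ndarray] returned by running the decoder, e.g. decoder.infer_sync(...).
    hard_prediction, soft_prediction = decoder.postprocess(raw_outputs, meta)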

preprocess(inputs: Dict[str, Any], meta: Dict[str, Any]) List[Dict[str, Any]][source]#

Preprocess prompts.
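
A heavily hedged sketch of the prompt preprocessing step, reusing the meta dictionary from the previous sketch; the input key names below (bboxes, labels, original_size) are hypothetical placeholders for the prompt layout, not a confirmed contract of this wrapper:

    # Hypothetical prompt dictionary: one box prompt plus the original image size.
    prompts = {
        "bboxes": np.array([[100, 100, 400, 400]]),
        "labels": np.array([1]),
        "original_size": np.array([720, 1280]),
    }
    decoder_inputs = decoder.preprocess(prompts, meta)  # returns a list of per-prompt input dicts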

class otx.algorithms.visual_prompting.adapters.openvino.model_wrappers.openvino_models.ImageEncoder(inference_adapter, configuration=None, preload=False)[source]#

Bases: ImageModel

Image encoder class of the OpenVINO model wrapper for visual prompting.

Image model constructor

It extends the Model constructor.

Parameters:
  • inference_adapter (InferenceAdapter) – allows working with the specified executor

  • configuration (dict, optional) – values for parameters accepted by the specific wrapper (confidence_threshold, labels, etc.), which are set as data attributes

  • preload (bool, optional) – a flag indicating whether the model is loaded to the device during initialization. If preload=False, the model must be loaded via the load method before inference

Raises:

WrapperError – if the wrapper fails to define appropriate inputs for images
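
Constructed the same way as the Decoder above, reusing the Model API adapter imports from that sketch; the IR path is again hypothetical:

    from otx.algorithms.visual_prompting.adapters.openvino.model_wrappers.openvino_models import ImageEncoder

    encoder_adapter = OpenvinoAdapter(create_core(), "image_encoder.xml", device="CPU")
    image_encoder = ImageEncoder(encoder_adapter, configuration={}, preload=True)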

classmethod parameters() Dict[str, Any][source]#

Defines the description and type of configurable data parameters for the wrapper.

See types.py for the available data parameter types. For each parameter, the type, default value, and description must be provided.

An example of a possible data parameter:

    'confidence_threshold': NumericalValue(
        default_value=0.5, description="Threshold value for detection box confidence"
    )

The method must be implemented in each specific inherited wrapper.

Returns:

the dictionary with defined wrapper data parameters
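
Because parameters() is a classmethod that returns a dictionary of parameter descriptors, it can be inspected without instantiating the wrapper; the descriptor attributes read below are accessed defensively since their exact layout is an assumption:

    for name, descriptor in ImageEncoder.parameters().items():
        # Each descriptor is expected to carry a default value and a description.
        print(name, getattr(descriptor, "default_value", None), getattr(descriptor, "description", ""))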

preprocess(inputs: ndarray, extra_processing: bool = False) Tuple[Dict[str, ndarray], Dict[str, Any]][source]#

Update meta for image encoder.
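
A hedged end-to-end sketch of the encoder calls, continuing the objects constructed above; the image path is hypothetical, and infer_sync is assumed to be the synchronous inference helper inherited from the Model API base class:

    import cv2

    image = cv2.imread("sample.jpg")  # hypothetical H x W x 3 input image
    dict_inputs, meta = image_encoder.preprocess(image)
    image_embeddings = image_encoder.infer_sync(dict_inputs)  # raw encoder outputs to feed the Decoder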