otx.algorithms.detection.adapters.openvino.task#
OpenVINO Task of Detection.
Classes
| Class | Description |
|---|---|
| BaseInferencerWithConverter | BaseInferencerWithConverter class in OpenVINO task. |
| OpenVINODetectionInferencer | Inferencer implementation for OTXDetection using OpenVINO backend. |
| OpenVINODetectionTask | Task implementation for OTXDetection using OpenVINO backend. |
| OpenVINOMaskInferencer | Mask Inferencer implementation for OTXDetection using OpenVINO backend. |
| OpenVINORotatedRectInferencer | Rotated Rect Inferencer implementation for OTXDetection using OpenVINO backend. |
| OpenVINOTileClassifierWrapper | Wrapper for OpenVINO Tiling. |
- class otx.algorithms.detection.adapters.openvino.task.BaseInferencerWithConverter(configuration: dict, model: Model, converter: IPredictionToAnnotationConverter)[source]#
Bases: IInferencer
BaseInferencerWithConverter class in OpenVINO task.
- enqueue_prediction(image: ndarray, id: int, result_handler: Any) → None [source]#
Runs async inference.
- forward(image: Dict[str, ndarray]) → Dict[str, ndarray] [source]#
Forward function of OpenVINO Detection Inferencer.
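Both entry points can be sketched as follows. This is a minimal illustration, assuming an already-built concrete inferencer (see the subclasses below); the input-layer key "image" and the callback signature are assumptions for illustration, not part of the documented API.

```python
import numpy as np

# "inferencer" stands for any concrete subclass instance built elsewhere,
# e.g. an OpenVINODetectionInferencer (assumption for illustration).
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder BGR image

# Synchronous, low-level path: forward() takes a dict of named input
# arrays. The key must match the model's input layer name, and in
# practice the array must already be preprocessed to the model's
# expected layout ("image" is an assumption that depends on the IR).
raw_outputs = inferencer.forward({"image": frame})

# Asynchronous path: the frame is enqueued and the result is delivered
# to a user-supplied callback once inference completes. The callback
# signature here is illustrative only.
def on_result(*args):
    print("prediction ready for frame 0:", args)

inferencer.enqueue_prediction(frame, id=0, result_handler=on_result)
```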
- class otx.algorithms.detection.adapters.openvino.task.OpenVINODetectionInferencer(hparams: DetectionConfig, label_schema: LabelSchemaEntity, model_file: str | bytes, weight_file: str | bytes | None = None, device: str = 'CPU', num_requests: int = 1, model_configuration: Dict[str, Any] = {})[source]#
Bases: BaseInferencerWithConverter
Inferencer implementation for OTXDetection using OpenVINO backend.
Initialize an OpenVINODetectionInferencer (a construction sketch follows the parameter list).
- Parameters:
hparams – Hyper parameters that the model should use.
label_schema – LabelSchemaEntity that was used during model training.
model_file – Path to the OpenVINO IR model definition file.
weight_file – Path to the OpenVINO IR model weights file.
device – Device to run inference on, such as CPU, GPU or MYRIAD. Defaults to “CPU”.
num_requests – Maximum number of requests that the inferencer can make. Defaults to 1.
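A minimal construction sketch, assuming hparams (a DetectionConfig) and label_schema (a LabelSchemaEntity) are available from a prior OTX training run; the IR file paths are placeholders.

```python
# hparams and label_schema are assumed to come from a trained OTX model.
with open("model.xml", "rb") as f:  # placeholder IR definition path
    model_xml = f.read()
with open("model.bin", "rb") as f:  # placeholder IR weights path
    model_bin = f.read()

inferencer = OpenVINODetectionInferencer(
    hparams=hparams,
    label_schema=label_schema,
    model_file=model_xml,
    weight_file=model_bin,
    device="CPU",
    num_requests=4,  # allow several in-flight requests for async use
)
```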
- class otx.algorithms.detection.adapters.openvino.task.OpenVINODetectionTask(task_environment: TaskEnvironment)[source]#
Bases: IDeploymentTask, IInferenceTask, IEvaluationTask, IOptimizationTask
Task implementation for OTXDetection using OpenVINO backend.
- deploy(output_model: ModelEntity) → None [source]#
Deploy function of OpenVINODetectionTask.
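As a sketch, deploy attaches a packaged, exportable archive to the given ModelEntity. The task_environment and dataset are assumed to be prepared elsewhere, and exportable_code as the destination attribute is an assumption based on the OTX 1.x ModelEntity.

```python
from otx.api.entities.model import ModelEntity

# task_environment and dataset are assumed to exist already.
task = OpenVINODetectionTask(task_environment)
deployed_model = ModelEntity(dataset, task_environment.get_model_configuration())
task.deploy(deployed_model)

# The packaged archive lands on the model entity (attribute name per
# OTX 1.x, treated as an assumption here).
with open("openvino_deployment.zip", "wb") as f:  # placeholder path
    f.write(deployed_model.exportable_code)
```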
- evaluate(output_resultset: ResultSetEntity, evaluation_metric: str | None = None)[source]#
Evaluate function of OpenVINODetectionTask.
- explain(dataset: DatasetEntity, explain_parameters: ExplainParameters | None = None) → DatasetEntity [source]#
Explain function of OpenVINODetectionTask.
- infer(dataset: DatasetEntity, inference_parameters: InferenceParameters | None = None) → DatasetEntity [source]#
Infer function of OpenVINODetectionTask.
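Together, infer and evaluate (above) form the usual validation loop. A sketch, assuming the otx.api import layout of OTX 1.x and a task_environment plus validation dataset prepared elsewhere:

```python
from otx.api.entities.inference_parameters import InferenceParameters
from otx.api.entities.resultset import ResultSetEntity

# task_environment and val_dataset are assumed to be prepared elsewhere.
task = OpenVINODetectionTask(task_environment)

# Run inference on a copy of the dataset with annotations stripped.
predictions = task.infer(
    val_dataset.with_empty_annotations(),
    InferenceParameters(is_evaluation=True),
)

# Pair ground truth with predictions and compute the default metric.
resultset = ResultSetEntity(
    model=task_environment.model,
    ground_truth_dataset=val_dataset,
    prediction_dataset=predictions,
)
task.evaluate(resultset)
print(resultset.performance)
```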
- load_config() → Dict [source]#
Load configurable parameters from model adapter.
- Returns:
config dictionary
- Return type:
ADDict
- load_inferencer() → OpenVINODetectionInferencer | OpenVINOMaskInferencer | OpenVINORotatedRectInferencer | OpenVINOTileClassifierWrapper [source]#
load_inferencer function of OpenVINODetectionTask.
- optimize(optimization_type: OptimizationType, dataset: DatasetEntity, output_model: ModelEntity, optimization_parameters: OptimizationParameters | None = None)[source]#
Optimize function of OpenVINODetectionTask.
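A sketch of post-training optimization (POT), assuming OTX 1.x import paths, a training dataset prepared elsewhere, and the task instance from the infer example above:

```python
from otx.api.entities.model import ModelEntity
from otx.api.entities.optimization_parameters import OptimizationParameters
from otx.api.usecases.tasks.interfaces.optimization_interface import OptimizationType

# task, train_dataset and task_environment are assumed to exist already.
output_model = ModelEntity(train_dataset, task_environment.get_model_configuration())
task.optimize(
    OptimizationType.POT,  # post-training quantization
    dataset=train_dataset,
    output_model=output_model,
    optimization_parameters=OptimizationParameters(),
)
```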
- property hparams#
Hparams of OpenVINO Detection Task.
- class otx.algorithms.detection.adapters.openvino.task.OpenVINOMaskInferencer(hparams: DetectionConfig, label_schema: LabelSchemaEntity, model_file: str | bytes, weight_file: str | bytes | None = None, device: str = 'CPU', num_requests: int = 1, model_configuration: Dict[str, Any] = {})[source]#
Bases: BaseInferencerWithConverter
Mask Inferencer implementation for OTXDetection using OpenVINO backend.
- class otx.algorithms.detection.adapters.openvino.task.OpenVINORotatedRectInferencer(hparams: DetectionConfig, label_schema: LabelSchemaEntity, model_file: str | bytes, weight_file: str | bytes | None = None, device: str = 'CPU', num_requests: int = 1, model_configuration: Dict[str, Any] = {})[source]#
Bases: BaseInferencerWithConverter
Rotated Rect Inferencer implementation for OTXDetection using OpenVINO backend.
- class otx.algorithms.detection.adapters.openvino.task.OpenVINOTileClassifierWrapper(inferencer: BaseInferencerWithConverter, tile_size: int = 400, overlap: float = 0.5, max_number: int = 100, tile_ir_scale_factor: float = 1.0, tile_classifier_model_file: str | bytes | None = None, tile_classifier_weight_file: str | bytes | None = None, device: str = 'CPU', num_requests: int = 1, mode: str = 'async')[source]#
Bases: BaseInferencerWithConverter
Wrapper for OpenVINO Tiling.
- Parameters:
inferencer (BaseInferencerWithConverter) – inferencer to wrap
tile_size (int) – tile size
overlap (float) – overlap ratio between tiles
max_number (int) – maximum number of objects per image
tile_ir_scale_factor (float, optional) – scale factor for tile size
tile_classifier_model_file (Union[str, bytes, None], optional) – tile classifier xml. Defaults to None.
tile_classifier_weight_file (Union[str, bytes, None], optional) – tile classifier weight bin. Defaults to None.
device (str, optional) – device to run inference on, such as CPU, GPU or MYRIAD. Defaults to “CPU”.
num_requests (int, optional) – number of requests for OpenVINO adapter. Defaults to 1.
mode (str, optional) – run inference in sync or async mode. Defaults to “async”.
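Wrapping an existing inferencer for tiled inference can be sketched as below; base_inferencer is assumed to be built as in the OpenVINODetectionInferencer example earlier.

```python
tiling_inferencer = OpenVINOTileClassifierWrapper(
    inferencer=base_inferencer,  # any BaseInferencerWithConverter subclass
    tile_size=400,   # side length of each tile
    overlap=0.5,     # adjacent tiles share half of their extent
    max_number=100,  # cap on detected objects per image
    mode="async",    # overlap tile requests for throughput
)
# The wrapper exposes the same interface as the inferencer it wraps.
```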