otx.api.usecases.exportable_code.visualizers#

Initialization of visualizers.

Classes

AnomalyVisualizer([window_name, show_count, ...])

Visualize the predicted output by drawing the annotations on the input image.

IVisualizer()

Interface for visualizers.

Visualizer([window_name, show_count, ...])

Visualize the predicted output by drawing the annotations on the input image.

class otx.api.usecases.exportable_code.visualizers.AnomalyVisualizer(window_name: str | None = None, show_count: bool = False, is_one_label: bool = False, no_show: bool = False, delay: int | None = None)[source]#

Bases: Visualizer

Visualize the predicted output by drawing the annotations on the input image.

Example

>>> predictions = inference_model.predict(frame)
>>> annotation = prediction_converter.convert_to_annotation(predictions)
>>> output = visualizer.draw(frame, annotation)
>>> visualizer.show(output)
draw(image: ndarray, annotation: AnnotationSceneEntity, meta: dict) ndarray[source]#

Draw annotations on the image.

Parameters:
  • image – Input image

  • annotation – Annotations to be drawn on the input image

  • meta – Metadata containing the saliency map

Returns:

Output image with annotations.

static to_heat_mask(mask: ndarray) ndarray[source]#

Create heat mask from saliency map.

Parameters:

mask – Saliency map to convert
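A heat mask is typically produced by normalizing the saliency map into 8-bit range and spreading it over a colormap. The following NumPy-only sketch illustrates the idea; the name `to_heat_mask_sketch` and the crude blue-to-red colormap are assumptions for illustration, not the library's implementation:

```python
import numpy as np

def to_heat_mask_sketch(mask: np.ndarray) -> np.ndarray:
    """Illustrative stand-in: normalize a saliency map to [0, 255]
    and spread it over a crude blue-to-red colormap."""
    lo, hi = float(mask.min()), float(mask.max())
    norm = (mask - lo) / (hi - lo) if hi > lo else np.zeros_like(mask, dtype=float)
    scaled = (norm * 255).astype(np.uint8)
    # Crude colormap: red channel grows with saliency, blue shrinks.
    heat = np.stack([255 - scaled, np.zeros_like(scaled), scaled], axis=-1)
    return heat  # BGR-ordered uint8 image, as OpenCV expects

saliency = np.random.rand(32, 32).astype(np.float32)
heat = to_heat_mask_sketch(saliency)
print(heat.shape, heat.dtype)  # (32, 32, 3) uint8
```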

class otx.api.usecases.exportable_code.visualizers.IVisualizer[source]#

Bases: object

Interface for visualizers.

abstract draw(image: ndarray, annotation: AnnotationSceneEntity, meta: dict) ndarray[source]#

Draw annotations on the image.

Parameters:
  • image – Input image

  • annotation – Annotations to be drawn on the input image

  • meta – Metadata needed for rendering

Returns:

Output image with annotations.

abstract is_quit() bool[source]#

Check if user wishes to quit.

abstract show(image: ndarray) None[source]#

Show result image.

abstract video_delay(elapsed_time: float, streamer: BaseStreamer) None[source]#

Check whether frames are being inferred faster than the original video FPS and, if so, delay the visualizer to match the source frame rate.

Parameters:
  • elapsed_time (float) – Time spent on frame inference

  • streamer (BaseStreamer) – Streamer object
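The delay logic described above can be sketched as a pure function: if inference finished sooner than one source-frame period, sleep for the remainder. The function name `frame_delay` and the explicit `fps` parameter are illustrative assumptions (the real method obtains the frame rate from the `BaseStreamer` object):

```python
import time

def frame_delay(elapsed_time: float, fps: float) -> float:
    """Return how long to sleep so display pacing matches the source FPS."""
    period = 1.0 / fps            # duration of one source frame, in seconds
    return max(0.0, period - elapsed_time)

# Inference took 10 ms on a 30 FPS stream: sleep the remaining ~23.3 ms.
wait = frame_delay(elapsed_time=0.010, fps=30.0)
time.sleep(wait)
print(round(wait, 4))  # 0.0233
```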

class otx.api.usecases.exportable_code.visualizers.Visualizer(window_name: str | None = None, show_count: bool = False, is_one_label: bool = False, no_show: bool = False, delay: int | None = None, output: str | None = None)[source]#

Bases: IVisualizer

Visualize the predicted output by drawing the annotations on the input image.

Example

>>> predictions = inference_model.predict(frame)
>>> annotation = prediction_converter.convert_to_annotation(predictions)
>>> output = visualizer.draw(frame, annotation)
>>> visualizer.show(output)
draw(image: ndarray, annotation: AnnotationSceneEntity, meta: dict | None = None) ndarray[source]#

Draw annotations on the image.

Parameters:
  • image – Input image

  • annotation – Annotations to be drawn on the input image

  • meta – Optional metadata needed to render

Returns:

Output image with annotations.

is_quit() bool[source]#

Check if the user wishes to quit.

show(image: ndarray) None[source]#

Show result image.

Parameters:

image (np.ndarray) – Image to be shown.

video_delay(elapsed_time: float, streamer: BaseStreamer)[source]#

Check whether frames are being inferred faster than the original video FPS and, if so, delay the visualizer to match the source frame rate.

Parameters:
  • elapsed_time (float) – Time spent on frame inference

  • streamer (BaseStreamer) – Streamer object
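Putting the interface together, a typical capture-draw-show loop might look like the sketch below. `StubVisualizer`, the `fps` keyword on `video_delay`, and the zero-filled frames are illustrative stand-ins (the real `video_delay` takes a `BaseStreamer`, and `draw` would be fed real model predictions), not OTX code:

```python
import time
import numpy as np

class StubVisualizer:
    """Minimal stand-in for Visualizer, for illustration only."""
    def __init__(self, no_show: bool = True):
        self.no_show = no_show
        self.frames_drawn = 0

    def draw(self, image, annotation, meta=None):
        self.frames_drawn += 1
        return image  # a real visualizer overlays the annotations here

    def show(self, image):
        pass  # a real visualizer displays the image when no_show is False

    def is_quit(self):
        return False  # a real visualizer checks for a key press

    def video_delay(self, elapsed_time, fps=30.0):
        time.sleep(max(0.0, 1.0 / fps - elapsed_time))

visualizer = StubVisualizer()
frames = [np.zeros((4, 4, 3), dtype=np.uint8) for _ in range(3)]
for frame in frames:                      # stands in for iterating a streamer
    start = time.perf_counter()
    annotation = None                     # stands in for model + converter output
    output = visualizer.draw(frame, annotation)
    visualizer.show(output)
    if visualizer.is_quit():
        break
    visualizer.video_delay(time.perf_counter() - start, fps=1000.0)

print(visualizer.frames_drawn)  # 3
```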