otx.api.usecases.exportable_code.demo.demo_package

Initialization of demo package.

Functions

create_output_converter(task_type, labels, ...)

Create an annotation converter for the given task type.

create_visualizer(_task_type[, no_show, output])

Create a visualizer for the given task type.

Classes

SyncExecutor(model, visualizer)

Synchronous executor for model inference.

AsyncExecutor(model, visualizer)

Asynchronous executor for model inference.

ChainExecutor(models, visualizer)

Synchronous executor for task-chain inference.

ModelContainer(model_dir[, device])

Container for the Model API model wrapper and its required parameters.

class otx.api.usecases.exportable_code.demo.demo_package.AsyncExecutor(model: ModelContainer, visualizer: Visualizer)

Bases: object

Asynchronous executor for model inference.

Parameters:
  • model – model for inference

  • visualizer – visualizer of inference results

render_result(results: Tuple[Any, dict]) → ndarray

Render inference results to an output image.

run(input_stream: int | str, loop: bool = False) → None

Run asynchronous inference on an input stream (image, video file, or camera).
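The asynchronous pattern behind AsyncExecutor can be sketched without the demo package: frames are submitted for inference without blocking, and each completed result is rendered as it becomes available. All names below (stub_model, render, run_async) are hypothetical stand-ins for the real ModelContainer and Visualizer, illustrating only the call pattern.

```python
from concurrent.futures import ThreadPoolExecutor

def stub_model(frame):
    # Stand-in for ModelContainer.__call__: returns predictions plus frame metadata.
    return {"label": "object"}, {"original_shape": (len(frame), 1)}

def render(result):
    # Stand-in for render_result: turn (predictions, meta) into displayable output.
    predictions, meta = result
    return f"{predictions['label']} @ {meta['original_shape']}"

def run_async(frames, workers=2):
    """Sketch of the async pipeline: submit every frame without blocking,
    then render results as they complete (list order preserved for clarity)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(stub_model, f) for f in frames]
        return [render(f.result()) for f in futures]

print(run_async([[0, 1, 2], [3, 4]]))  # ['object @ (3, 1)', 'object @ (2, 1)']
```

The real executor keeps the camera loop responsive by overlapping capture with inference in the same way.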

class otx.api.usecases.exportable_code.demo.demo_package.ChainExecutor(models: List[ModelContainer], visualizer: Visualizer)

Bases: object

Synchronous executor for task-chain inference.

Parameters:
  • models – list of models for inference

  • visualizer – visualizer of inference results

static crop(item: ndarray, parent_annotation: Annotation, item_annotation: Annotation) → Tuple[ndarray, Annotation]

Crop operation between chain stages.

run(input_stream: int | str, loop: bool = False) → None

Run the demo on an input stream (image, video file, or camera).

single_run(input_image: ndarray) → AnnotationSceneEntity

Run inference on a single image.
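The crop step between chain stages passes a region of the parent image to the next model. A minimal sketch of the idea, with explicit pixel coordinates standing in for the Annotation shapes the real crop receives:

```python
import numpy as np

def crop_by_box(item, x1, y1, x2, y2):
    """Sketch of the crop between chain stages: the first-stage detection's
    bounding box selects the region fed to the next model in the chain.
    (Coordinates here are a simplified stand-in for Annotation objects.)"""
    return item[y1:y2, x1:x2]

image = np.arange(100).reshape(10, 10)   # dummy 10x10 "image"
patch = crop_by_box(image, 2, 3, 6, 8)
print(patch.shape)  # (5, 4): rows 3..7, columns 2..5
```

The real static method additionally remaps the child annotation's coordinates into the cropped frame, which is why it returns a `(ndarray, Annotation)` pair.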

class otx.api.usecases.exportable_code.demo.demo_package.ModelContainer(model_dir: Path, device='CPU')

Bases: object

Container for the Model API model wrapper and its required parameters.

Parameters:
  • model_dir (Path) – path to the model directory

  • device (str) – inference device. Defaults to "CPU".

__call__(input_data: ndarray) → Tuple[Any, dict]

Entry point for inference.

infer(frame)

Run inference on the full original image.

Parameters:

frame (np.ndarray) – image

Returns:
  • annotation_scene (AnnotationScene) – prediction

  • frame_meta (Dict) – dict with the original image shape

infer_tile(frame)

Run inference by splitting the full image into tiles.

Parameters:

frame (np.ndarray) – image

Returns:
  • annotation_scene (AnnotationScene) – prediction

  • frame_meta (Dict) – dict with the original image shape

setup_tiler(model_dir, device) → DetectionTiler | InstanceSegmentationTiler | None

Set up a tiler for the model.

Parameters:

  • model_dir (str) – model directory

  • device (str) – device to run model on

Returns:

Tiler object, or None if tiling is not enabled

Return type:

DetectionTiler | InstanceSegmentationTiler | None

property labels: LabelSchemaEntity

Label schema of the model.

property task_type: TaskType

Task type of the model.
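The container pattern above — wrap a model, expose its metadata as read-only properties, and dispatch `__call__` to tiled or whole-image inference — can be sketched with stand-in objects. Every name below is hypothetical; only the shape of the interface mirrors ModelContainer:

```python
class ContainerSketch:
    """Minimal stand-in for the ModelContainer pattern: hold the model wrapper
    plus its parameters, and route __call__ through a tiler when one is set."""

    def __init__(self, model, task_type, labels, tiler=None):
        self._model = model
        self._task_type = task_type
        self._labels = labels
        self._tiler = tiler  # None when the model was exported without tiling

    @property
    def task_type(self):
        return self._task_type

    @property
    def labels(self):
        return self._labels

    def __call__(self, frame):
        # Prefer tiled inference when a tiler is configured, as setup_tiler implies.
        infer = self._tiler if self._tiler is not None else self._model
        return infer(frame), {"original_shape": len(frame)}

container = ContainerSketch(model=lambda f: "prediction",
                            task_type="DETECTION", labels=["person"])
print(container([1, 2, 3]))  # ('prediction', {'original_shape': 3})
```

Keeping labels and task_type as properties lets downstream factories (the visualizer and output converter) configure themselves from the container alone.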

class otx.api.usecases.exportable_code.demo.demo_package.SyncExecutor(model: ModelContainer, visualizer: Visualizer)

Bases: object

Synchronous executor for model inference.

Parameters:
  • model (ModelContainer) – model for inference

  • visualizer (Visualizer) – visualizer of inference results

run(input_stream: int | str, loop: bool = False) → None

Run the demo on an input stream (image, video file, or camera).

otx.api.usecases.exportable_code.demo.demo_package.create_output_converter(task_type: TaskType, labels: LabelSchemaEntity, model_params: Dict[Any, Any])

Create an annotation converter for the given task type.

otx.api.usecases.exportable_code.demo.demo_package.create_visualizer(_task_type: TaskType, no_show: bool = False, output: str | None = None)

Create a visualizer for the given task type.
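Both factories dispatch on the task type and return an object the executor then drives. A runnable sketch of that factory pattern with stand-in implementations — the `_sketch` names and the converter/visualizer bodies are hypothetical, not the package's real classes:

```python
def create_output_converter_sketch(task_type, labels):
    """Pick a raw-output-to-annotation converter by task type,
    mirroring the dispatch in create_output_converter."""
    converters = {
        "CLASSIFICATION": lambda raw: {"label": labels[raw]},
        "DETECTION": lambda raw: {"boxes": raw},
    }
    return converters[task_type]

def create_visualizer_sketch(task_type, no_show=False):
    """Return a callable that renders an annotation (or nothing with no_show),
    mirroring the dispatch in create_visualizer."""
    def show(annotation):
        return "" if no_show else f"[{task_type}] {annotation}"
    return show

converter = create_output_converter_sketch("CLASSIFICATION", ["cat", "dog"])
visualize = create_visualizer_sketch("CLASSIFICATION")
print(visualize(converter(1)))  # [CLASSIFICATION] {'label': 'dog'}
```

In the real demo the same wiring appears as: build a ModelContainer, pass its task_type to the two factories, then hand the container and visualizer to one of the executors and call run() on the input stream.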