otx.engine#

API for OTX entry-point users.

Classes

Engine(*[, data_root, task, work_dir, ...])

OTX Engine.

class otx.engine.Engine(*, data_root: str | Path | PathLike | None = None, task: OTXTaskType | None = None, work_dir: str | Path | PathLike = './otx-workspace', datamodule: OTXDataModule | None = None, model: OTXModel | str | None = None, checkpoint: str | Path | PathLike | None = None, device: DeviceType = DeviceType.auto, num_devices: int = 1, **kwargs)[source]#

Bases: object

OTX Engine.

This class defines the Engine for OTX, which governs each step of the OTX workflow.

Example

The following examples show how to use the Engine class.

Auto-Configuration with data_root:

engine = Engine(
    data_root=<dataset/path>,
)

Create Engine with Custom OTXModel:

engine = Engine(
    data_root=<dataset/path>,
    model=OTXModel(...),
    checkpoint=<checkpoint/path>,
)

Create Engine with Custom OTXDataModule:

engine = Engine(
    model=OTXModel(...),
    datamodule=OTXDataModule(...),
)

Initializes the OTX Engine.

Parameters:
  • data_root (PathLike | None, optional) – Root directory for the data. Defaults to None.

  • task (OTXTaskType | None, optional) – The type of OTX task. Defaults to None.

  • work_dir (PathLike, optional) – Working directory for the engine. Defaults to “./otx-workspace”.

  • datamodule (OTXDataModule | None, optional) – The data module for the engine. Defaults to None.

  • model (OTXModel | str | None, optional) – The model for the engine. Defaults to None.

  • checkpoint (PathLike | None, optional) – Path to the checkpoint file. Defaults to None.

  • device (DeviceType, optional) – The device type to use. Defaults to DeviceType.auto.

  • num_devices (int, optional) – The number of devices to use. If it is 2 or more, it will behave as multi-gpu.

  • **kwargs – Additional keyword arguments for pl.Trainer.
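
A typical workflow chains these steps end to end. A minimal sketch, assuming a dataset at <dataset/path> and the default auto-configuration (all paths are placeholders):

engine = Engine(data_root=<dataset/path>)
engine.train(max_epochs=3)       # train the auto-configured model
engine.test()                    # evaluate on the test subset
exported_path = engine.export()  # export, OpenVINO IR by default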

benchmark(checkpoint: str | Path | PathLike | None = None, batch_size: int = 1, n_iters: int = 10, extended_stats: bool = False, print_table: bool = True) dict[str, str][source]#

Executes model micro benchmarking on random data.

Benchmarking reports latency, throughput, the number of parameters, and the theoretical computational complexity at batch size 1. The latter two statistics are available for torch model recipes only. A warm-up is performed before the measurements.

Parameters:
  • checkpoint (PathLike | None, optional) – Path to checkpoint. Optional for torch models. Defaults to None.

  • batch_size (int, optional) – Batch size for benchmarking. Defaults to 1.

  • n_iters (int, optional) – Number of iterations to average on. Defaults to 10.

  • extended_stats (bool, optional) – Flag that enables printing of per module complexity for torch model. Defaults to False.

  • print_table (bool, optional) – Flag that enables printing the benchmark results in a rich table. Defaults to True.

Returns:

a dict with the benchmark results.

Return type:

dict[str, str]

Example

>>> engine.benchmark(
...     checkpoint=<checkpoint-path>,
...     batch_size=1,
...     n_iters=20,
...     extended_stats=True,
... )
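
Since the method returns a plain dict of strings, the results can also be consumed programmatically. A minimal sketch (the exact key names depend on the model type and are not listed here):

>>> results = engine.benchmark(print_table=False)
>>> for name, value in results.items():
...     print(f"{name}: {value}")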
CLI Usage:
  1. To run the benchmark from the work_dir where training was done, run

    otx benchmark --work_dir <WORK_DIR_PATH, str>

  2. To run the benchmark on a specific checkpoint, run

    otx benchmark \
        --work_dir <WORK_DIR_PATH, str> \
        --checkpoint <CKPT_PATH, str>

  3. To run the benchmark with a configuration file, run

    otx benchmark \
        --config <CONFIG_PATH> \
        --data_root <DATASET_PATH, str> \
        --checkpoint <CKPT_PATH, str>

explain(checkpoint: PathLike | None = None, datamodule: EVAL_DATALOADERS | OTXDataModule | None = None, explain_config: ExplainConfig | None = None, dump: bool | None = False, **kwargs) list | None[source]#

Run XAI using the specified model and data (test subset).

Parameters:
  • checkpoint (PathLike | None, optional) – The path to the checkpoint file to load the model from.

  • datamodule (EVAL_DATALOADERS | OTXDataModule | None, optional) – The data module to use for predictions.

  • explain_config (ExplainConfig | None, optional) – Config used to handle saliency maps.

  • dump (bool) – Whether to dump “saliency_map” or not.

  • **kwargs – Additional keyword arguments for pl.Trainer configuration.

Returns:

Saliency maps.

Return type:

list

Example

>>> engine.explain(
...     datamodule=OTXDataModule(),
...     checkpoint=<checkpoint/path>,
...     explain_config=ExplainConfig(),
...     dump=True,
... )
CLI Usage:
  1. To run XAI with the torch model in work_dir, run

    otx explain \
        --work_dir <WORK_DIR_PATH, str>

  2. To run XAI using the specified model (torch or IR), run

    otx explain \
        --work_dir <WORK_DIR_PATH, str> \
        --checkpoint <CKPT_PATH, str>

  3. To run XAI using the configuration, run

    otx explain \
        --config <CONFIG_PATH> --data_root <DATASET_PATH, str> \
        --checkpoint <CKPT_PATH, str>

export(checkpoint: str | Path | PathLike | None = None, export_format: OTXExportFormatType = OTXExportFormatType.OPENVINO, export_precision: OTXPrecisionType = OTXPrecisionType.FP32, explain: bool = False, export_demo_package: bool = False) Path[source]#

Export the trained model to OpenVINO Intermediate Representation (IR) or ONNX formats.

Parameters:
  • checkpoint (PathLike | None, optional) – Checkpoint to export. Defaults to None.

  • export_format (OTXExportFormatType, optional) – Format in which to export the model. Defaults to OTXExportFormatType.OPENVINO.

  • export_precision (OTXPrecisionType, optional) – Precision of the exported model. Defaults to OTXPrecisionType.FP32.

  • explain (bool) – Whether to get “saliency_map” and “feature_vector” or not.

  • export_demo_package (bool) – Whether to export a demo package with the model. Only OpenVINO models can be exported with a demo package.

Returns:

Path to the exported model.

Return type:

Path

Example

>>> engine.export(
...     checkpoint=<checkpoint/path>,
...     export_format=OTXExportFormatType.OPENVINO,
...     export_precision=OTXPrecisionType.FP32,
...     explain=True,
... )
CLI Usage:
  1. To export a model with the default settings (OPENVINO, FP32), run

    otx export --work_dir <WORK_DIR_PATH, str>

  2. To export a specific checkpoint, run

    otx export --config <CONFIG_PATH, str> --checkpoint <CKPT_PATH, str>

  3. To export a model with FP16 precision in the ONNX format, run

    otx export ... \
        --export_precision FP16 --export_format ONNX

  4. To export a model with “saliency_map” and “feature_vector”, run

    otx export ... \
        --explain True

classmethod from_config(config_path: str | Path | PathLike, data_root: str | Path | PathLike | None = None, work_dir: str | Path | PathLike | None = None, **kwargs) Engine[source]#

Builds the engine from a configuration file.

Parameters:
  • config_path (PathLike) – The configuration file path.

  • data_root (PathLike | None) – Root directory for the data. Defaults to None. If data_root is None, use the data_root from the configuration file.

  • work_dir (PathLike | None, optional) – Working directory for the engine. Defaults to None. If work_dir is None, use the work_dir from the configuration file.

  • kwargs – Arguments that can override the engine’s arguments.

Returns:

An instance of the Engine class.

Return type:

Engine

Example

>>> engine = Engine.from_config(
...     config_path="config.yaml",
... )
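
Since kwargs can override the engine’s arguments, values such as the working directory can be supplied at build time. A minimal sketch (paths are placeholders):

>>> engine = Engine.from_config(
...     config_path="config.yaml",
...     work_dir="./another-workspace",
... )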
classmethod from_model_name(model_name: str, task: OTXTaskType, data_root: str | Path | PathLike | None = None, work_dir: str | Path | PathLike | None = None, **kwargs) Engine[source]#

Builds the engine from a model name.

Parameters:
  • model_name (str) – The model name.

  • task (OTXTaskType) – The type of OTX task.

  • data_root (PathLike | None) – Root directory for the data. Defaults to None. If data_root is None, use the data_root from the configuration file.

  • work_dir (PathLike | None, optional) – Working directory for the engine. Defaults to None. If work_dir is None, use the work_dir from the configuration file.

  • kwargs – Arguments that can override the engine’s arguments.

Returns:

An instance of the Engine class.

Return type:

Engine

Example

>>> engine = Engine.from_model_name(
...     model_name="atss_mobilenetv2",
...     task="DETECTION",
...     data_root=<dataset/path>,
... )
If you want to override the default configuration:
>>> overriding = {
...     "data.train_subset.batch_size": 2,
...     "data.test_subset.subset_name": "TESTING",
... }
>>> engine = Engine.from_model_name(
...     model_name="atss_mobilenetv2",
...     task="DETECTION",
...     data_root=<dataset/path>,
...     **overriding,
... )
optimize(checkpoint: PathLike | None = None, datamodule: TRAIN_DATALOADERS | OTXDataModule | None = None, max_data_subset_size: int | None = None, export_demo_package: bool = False) Path[source]#

Applies NNCF PTQ to the underlying model (currently supported only for OpenVINO models).

PTQ performs INT8 quantization on the input model, so the resulting model comes in mixed precision (some operations, however, remain in FP32).

Parameters:
  • checkpoint (str | Path | None, optional) – Checkpoint to optimize. Defaults to None.

  • datamodule (TRAIN_DATALOADERS | OTXDataModule | None, optional) – The data module to use for optimization.

  • max_data_subset_size (int | None) – The maximum size of the train subset from the datamodule used for model optimization. If not set, NNCF PTQ selects the subset size according to its default settings.

  • export_demo_package (bool) – Whether to export a demo package with the optimized model. It outputs a zip archive with a stand-alone demo package.

Returns:

path to the optimized model.

Return type:

Path

Example

>>> engine.optimize(
...     checkpoint=<checkpoint/path>,
...     datamodule=OTXDataModule(),
... )
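
Because PTQ currently targets OpenVINO models, a common flow is to export the trained model to IR first and then optimize the exported artifact. A minimal sketch (paths are placeholders):

>>> exported_path = engine.export(
...     checkpoint=<checkpoint/path>,
...     export_format=OTXExportFormatType.OPENVINO,
... )
>>> optimized_path = engine.optimize(
...     checkpoint=exported_path,
...     datamodule=OTXDataModule(),
... )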
CLI Usage:
  1. To optimize an IR model, run

    otx optimize \
        --work_dir <WORK_DIR_PATH, str> \
        --checkpoint <IR_MODEL_WEIGHT_PATH, str>

  2. To optimize a specific OVModel class with an IR XML file, run

    otx optimize \
        --data_root <DATASET_PATH, str> \
        --checkpoint <IR_MODEL_WEIGHT_PATH, str> \
        --model <CONFIG | CLASS_PATH_OR_NAME, OVModel> \
        --model.model_name=<PATH_TO_IR_XML, str>

predict(checkpoint: PathLike | None = None, datamodule: EVAL_DATALOADERS | OTXDataModule | None = None, return_predictions: bool | None = None, explain: bool = False, explain_config: ExplainConfig | None = None, **kwargs) list | None[source]#

Run predictions using the specified model and data.

Parameters:
  • checkpoint (PathLike | None, optional) – The path to the checkpoint file to load the model from.

  • datamodule (EVAL_DATALOADERS | OTXDataModule | None, optional) – The data module to use for predictions.

  • return_predictions (bool | None, optional) – Whether to return the predictions or not.

  • explain (bool, optional) – Whether to dump “saliency_map” and “feature_vector” or not.

  • explain_config (ExplainConfig | None, optional) – Explain configuration used for saliency map post-processing

  • **kwargs – Additional keyword arguments for pl.Trainer configuration.

Returns:

The predictions if return_predictions is True, otherwise None.

Return type:

list | None

Example

>>> engine.predict(
...     datamodule=OTXDataModule(),
...     checkpoint=<checkpoint/path>,
...     return_predictions=True,
...     explain=True,
... )
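
When return_predictions is True, the call returns a list of prediction entities; otherwise it returns None. A minimal sketch of consuming the result (the checkpoint path is a placeholder):

>>> preds = engine.predict(
...     checkpoint=<checkpoint/path>,
...     return_predictions=True,
... )
>>> print(len(preds) if preds is not None else "no predictions returned")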
CLI Usage:
  1. To run prediction using the work_dir, run

    otx predict --work_dir <WORK_DIR_PATH, str>

  2. To run prediction with a specific checkpoint, run

    otx predict \
        --work_dir <WORK_DIR_PATH, str> \
        --checkpoint <CKPT_PATH, str>

  3. To run prediction with a configuration file, run

    otx predict \
        --config <CONFIG_PATH, str> \
        --checkpoint <CKPT_PATH, str>

test(checkpoint: PathLike | None = None, datamodule: EVAL_DATALOADERS | OTXDataModule | None = None, metric: MetricCallable | None = None, **kwargs) dict[source]#

Run the testing phase of the engine.

Parameters:
  • checkpoint (PathLike | None, optional) – Path to the checkpoint file to load the model from. Defaults to None.

  • datamodule (EVAL_DATALOADERS | OTXDataModule | None, optional) – The data module containing the test data.

  • metric (MetricCallable | None) – If not None, it will override OTXModel.metric_callable with the given metric callable. It temporarily changes the evaluation metric for validation and testing.

  • **kwargs – Additional keyword arguments for pl.Trainer configuration.

Returns:

Dictionary containing the callback metrics from the trainer.

Return type:

dict

Example

>>> engine.test(
...     datamodule=OTXDataModule(),
...     checkpoint=<checkpoint/path>,
... )
CLI Usage:
  1. To evaluate a model from the work_dir where training was done, run

    otx test --work_dir <WORK_DIR_PATH, str>

  2. To evaluate a specific checkpoint, run

    otx test --work_dir <WORK_DIR_PATH, str> --checkpoint <CKPT_PATH, str>

  3. To pick a specific model, run

    otx test \
        --model <CONFIG | CLASS_PATH_OR_NAME> \
        --data_root <DATASET_PATH, str> \
        --checkpoint <CKPT_PATH, str>

  4. To evaluate with a configuration file, run

    otx test --config <CONFIG_PATH, str> --checkpoint <CKPT_PATH, str>

train(max_epochs: int = 10, seed: int | None = None, deterministic: bool | Literal['warn'] = False, precision: _PRECISION_INPUT | None = '32', val_check_interval: int | float | None = None, callbacks: list[Callback] | Callback | None = None, logger: Logger | Iterable[Logger] | bool | None = None, resume: bool = False, metric: MetricCallable | None = None, run_hpo: bool = False, hpo_config: HpoConfig = HpoConfig(search_space=None, save_path=None, mode='max', num_trials=None, num_workers=1, expected_time_ratio=4, maximum_resource=None, prior_hyper_parameters=None, acceptable_additional_time_ratio=1.0, minimum_resource=None, reduction_factor=3, asynchronous_bracket=True, asynchronous_sha=False, metric_name=None, adapt_bs_search_space_max_val='None', progress_update_callback=None, callbacks_to_exclude=None), checkpoint: PathLike | None = None, adaptive_bs: Literal['None', 'Safe', 'Full'] = 'None', **kwargs) dict[str, Any][source]#

Trains the model using the provided LightningModule and OTXDataModule.

Parameters:
  • max_epochs (int, optional) – The maximum number of epochs. Defaults to 10.

  • seed (int | None, optional) – The random seed. Defaults to None.

  • deterministic (bool | Literal["warn"]) – Whether to enable deterministic behavior. Also, can be set to warn to avoid failures, because some operations don’t support deterministic mode. Defaults to False.

  • precision (_PRECISION_INPUT | None, optional) – The precision of the model. Defaults to 32.

  • val_check_interval (int | float | None, optional) – The validation check interval. Defaults to None.

  • callbacks (list[Callback] | Callback | None, optional) – The callbacks to be used during training.

  • logger (Logger | Iterable[Logger] | bool | None, optional) – The logger(s) to be used. Defaults to None.

  • resume (bool, optional) – If True, tries to resume training from an existing checkpoint.

  • metric (MetricCallable | None) – If not None, it will override OTXModel.metric_callable with the given metric callable. It temporarily changes the evaluation metric for validation and testing.

  • run_hpo (bool, optional) – If True, optimizes hyperparameters before training the model.

  • hpo_config (HpoConfig | None, optional) – Configuration for HPO.

  • checkpoint (PathLike | None, optional) – Path to the checkpoint file. Defaults to None.

  • adaptive_bs (Literal["None", "Safe", "Full"]) – Adjusts the actual batch size depending on the current GPU status. "Safe" prevents GPU out-of-memory errors; "Full" finds a batch size that uses most of the GPU memory.

  • **kwargs – Additional keyword arguments for pl.Trainer configuration.

Returns:

A dictionary containing the callback metrics from the trainer.

Return type:

dict[str, Any]

Example

>>> engine.train(
...     max_epochs=3,
...     seed=1234,
...     deterministic=False,
...     precision="32",
... )
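
Hyperparameter optimization can be enabled through run_hpo and hpo_config. A minimal sketch using field names from the signature above (the chosen values are assumptions, and HpoConfig is assumed to be importable in scope):

>>> engine.train(
...     max_epochs=10,
...     run_hpo=True,
...     hpo_config=HpoConfig(num_trials=4, mode="max"),  # values are illustrative
... )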
CLI Usage:
  1. You can train with data_root only; OTX will then provide a default training configuration.

    otx train --data_root <DATASET_PATH, str>

  2. You can pick a model or datamodule as a config file or class.

    otx train \
        --data_root <DATASET_PATH, str> \
        --model <CONFIG | CLASS_PATH_OR_NAME, OTXModel> \
        --data <CONFIG | CLASS_PATH_OR_NAME, OTXDataModule>

  3. You can override various values from the command line.

    otx train \
        --data_root <DATASET_PATH, str> \
        --max_epochs <EPOCHS, int> \
        --checkpoint <CKPT_PATH, str>

  4. To train with a configuration file, run

    otx train --data_root <DATASET_PATH, str> --config <CONFIG_PATH, str>

  5. To reproduce an existing training run from its work_dir, run

    otx train --work_dir <WORK_DIR_PATH, str>

property datamodule: OTXDataModule#

Returns the datamodule object associated with the engine.

Returns:

The OTXDataModule object.

Return type:

OTXDataModule

property device: DeviceConfig#

The device the engine uses.

property model: OTXModel#

Returns the model object associated with the engine.

Returns:

The OTXModel object.

Return type:

OTXModel

property num_devices: int#

The number of devices the engine uses.

property trainer: Trainer#

Returns the trainer object associated with the engine.

To access this property, you must run Engine.train() first.

Returns:

The trainer object.

Return type:

Trainer
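
A minimal sketch of accessing the trainer (valid only once training has run):

>>> engine.train(max_epochs=1)
>>> trainer = engine.trainer  # available after train() has been executed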

property trainer_params: dict#

Returns the parameters used for training the model.

Returns:

A dictionary containing the training parameters.

Return type:

dict

property work_dir: str | Path | PathLike#

Work directory.