otx.algorithms.anomaly.tools.sample

sample.py.

This is a sample Python script showing how to train an end-to-end OTX Anomaly Classification Task.

Functions

main()

Run sample.py with given CLI arguments.

parse_args()

Parse CLI arguments.

Classes

OtxAnomalyTask(dataset_path, train_subset, ...)

OTX Anomaly Classification Task.

class otx.algorithms.anomaly.tools.sample.OtxAnomalyTask(dataset_path: str, train_subset: Dict[str, str], val_subset: Dict[str, str], test_subset: Dict[str, str], model_template_path: str, seed: int | None = None)

Bases: object

OTX Anomaly Classification Task.

Initialize OtxAnomalyTask.

Parameters:
  • dataset_path (str) – Path to the MVTec dataset.

  • train_subset (Dict[str, str]) – Dictionary containing path to train annotation file and path to dataset.

  • val_subset (Dict[str, str]) – Dictionary containing path to validation annotation file and path to dataset.

  • test_subset (Dict[str, str]) – Dictionary containing path to test annotation file and path to dataset.

  • model_template_path (str) – Path to model template.

  • seed (Optional[int]) – Setting seed to a value other than 0 also sets the PyTorch Lightning trainer’s deterministic flag to True.

Example

>>> import os
>>> os.getcwd()
'~/otx/external/anomaly'

If the MVTec dataset is placed under the above directory, we could run:

>>> model_template_path = "./configs/classification/padim/template.yaml"
>>> dataset_path = "./datasets/MVTec"
>>> task = OtxAnomalyTask(
...     dataset_path=dataset_path,
...     train_subset={"ann_file": "train.json", "data_root": dataset_path},
...     val_subset={"ann_file": "val.json", "data_root": dataset_path},
...     test_subset={"ann_file": "test.json", "data_root": dataset_path},
...     model_template_path=model_template_path
... )
>>> task.train()
Performance(score: 1.0, dashboard: (1 metric groups))
>>> task.export()
Performance(score: 0.9756097560975608, dashboard: (1 metric groups))
static clean_up() → None

Clean up the results directory used by anomalib.

create_task(task: str) → Any

Create the base Torch or OpenVINO task.

Parameters:

task (str) – Task type. Either "base" or "openvino".

Returns:

Base Torch or OpenVINO Task Class.

Return type:

Any

Example

>>> self.create_task(task="base")
<anomaly_classification.torch_task.AnomalyClassificationTask>
create_task_environment() → TaskEnvironment

Create task environment.

static evaluate(task: IEvaluationTask, result_set: ResultSetEntity) → None

Evaluate the performance of the model.

Parameters:
  • task (IEvaluationTask) – Task used to evaluate the performance. Either the Torch or the OpenVINO task.

  • result_set (ResultSetEntity) – Result set containing the ground-truth and prediction datasets.
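
For illustration, a minimal sketch of calling this static method; it assumes `base_task` was created via create_task(task="base") and `result_set` was returned by infer() (documented below):

>>> OtxAnomalyTask.evaluate(task=base_task, result_set=result_set)

evaluate() returns None, so any performance figures are reported via the result set or the logs rather than as a return value.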

export() → ModelEntity

Export the model via OpenVINO.
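
As a rough sketch of how export fits into the workflow (the exact wiring in sample.py may differ), the exported model can be paired with an OpenVINO task for inference and evaluation:

>>> openvino_model = task.export()
>>> openvino_task = task.create_task(task="openvino")
>>> result_set = task.infer(task=openvino_task, output_model=openvino_model)
>>> OtxAnomalyTask.evaluate(task=openvino_task, result_set=result_set)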

export_nncf() → ModelEntity

Export the NNCF-optimized model via OpenVINO.

get_dataclass() → Type[AnomalyDetectionDataset] | Type[AnomalySegmentationDataset] | Type[AnomalyClassificationDataset]

Get the dataset class based on the task type.

Raises:

ValueError – If the task type is not recognized.

Returns:

Dataset class corresponding to the task type.
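
For instance, with the classification template used in the class-level example, the call would resolve to the classification dataset class (illustrative output, not verified):

>>> dataclass = task.get_dataclass()
>>> dataclass.__name__
'AnomalyClassificationDataset'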

infer(task: IInferenceTask, output_model: ModelEntity) → ResultSetEntity

Get the predictions using the base Torch or OpenVINO tasks and models.

Parameters:
  • task (IInferenceTask) – Task used for inference. Either the Torch or the OpenVINO task.

  • output_model (ModelEntity) – Output model in which the weights are saved.

Returns:

Result set containing the ground-truth and prediction datasets.

Return type:

ResultSetEntity
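
A minimal sketch of the inference step, assuming `base_task` is the Torch task instance that was used during training (how sample.py wires these objects together may differ):

>>> output_model = task.train()
>>> result_set = task.infer(task=base_task, output_model=output_model)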

optimize() → None

Optimize the model via POT.

optimize_nncf() → None

Optimize the model via NNCF.
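
As a hedged sketch, the NNCF path combines the two NNCF methods documented here: optimize the model with NNCF and then export the optimized model (this assumes the chosen model template provides an NNCF configuration):

>>> task.optimize_nncf()
>>> nncf_model = task.export_nncf()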

train() → ModelEntity

Train the base Torch model.

otx.algorithms.anomaly.tools.sample.main() → None

Run sample.py with given CLI arguments.
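
A minimal sketch of invoking the sample programmatically; the actual flag names are defined in parse_args() and are normally supplied on the command line:

>>> from otx.algorithms.anomaly.tools import sample
>>> sample.main()  # parses sys.argv via parse_args() and runs the end-to-end flow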

otx.algorithms.anomaly.tools.sample.parse_args() → Namespace

Parse CLI arguments.

Returns:

CLI arguments.

Return type:

Namespace