otx.core.model.anomaly#

Anomaly Lightning OTX model.

Classes

OTXAnomaly(label_info, input_size)

Methods used to make the OTX model compatible with the Anomalib model.

class otx.core.model.anomaly.OTXAnomaly(label_info: LabelInfoTypes, input_size: tuple[int, int])[source]#

Bases: OTXModel

Methods used to make the OTX model compatible with the Anomalib model.

Parameters:

label_info (LabelInfoTypes) – Label information of the model.

input_size (tuple[int, int]) – Model input size in the order of height and width.

configure_callbacks() list[Callback][source]#

Get all necessary callbacks required for training and post-processing on Anomalib models.

export(output_dir: Path, base_name: str, export_format: OTXExportFormatType, precision: OTXPrecisionType = OTXPrecisionType.FP32, to_exportable_code: bool = False) Path[source]#

Export this model to the specified output directory.

Parameters:
  • output_dir (Path) – directory for saving the exported model

  • base_name (str) – base name for the exported model file. The extension is defined by the target export format

  • export_format (OTXExportFormatType) – format of the output model

  • precision (OTXPrecisionType) – precision of the output model

  • to_exportable_code (bool) – flag to export model in exportable code with demo package

Returns:

path to the exported model.

Return type:

Path
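As a sketch of how the returned path relates to the arguments, the exported file name is the base name plus an extension determined by the export format. The extension table below is illustrative only; the real mapping lives inside OTX's exporters:

```python
from pathlib import Path

# Illustrative format-to-extension table (not OTX's actual mapping).
_EXTENSIONS = {"OPENVINO": ".xml", "ONNX": ".onnx"}


def exported_model_path(output_dir: Path, base_name: str, export_format: str) -> Path:
    """Build the path an exported model would be written to.

    Mirrors the documented contract: the extension is defined by the
    target export format, not by the caller.
    """
    return output_dir / f"{base_name}{_EXTENSIONS[export_format]}"


path = exported_model_path(Path("exports"), "anomaly_model", "ONNX")
```

Callers therefore pass `base_name` without an extension and read the actual file name from the returned `Path`.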

get_dummy_input(batch_size: int = 1) AnomalyClassificationDataBatch | AnomalySegmentationDataBatch | AnomalyDetectionDataBatch[source]#

Return a dummy input for the anomaly model.

on_load_checkpoint(checkpoint: dict[str, Any]) None[source]#

Callback on loading checkpoint.

on_predict_batch_end(outputs: dict, batch: AnomalyClassificationDataBatch | AnomalySegmentationDataBatch | AnomalyDetectionDataBatch, batch_idx: int, dataloader_idx: int = 0) None[source]#

Wrap the outputs to OTX format.

Since the outputs must be replaced in place, the datatype of outputs cannot be changed. That is why outputs is cleared and repopulated with the new outputs. A side effect is that instead of engine.test() returning [BatchPrediction, …], it returns [{prediction: BatchPrediction}, {…}, …].
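The in-place replacement described above can be illustrated with plain Python containers (the function and key names here are illustrative, not Anomalib's actual types):

```python
def wrap_outputs_inplace(outputs: dict, predictions: list) -> None:
    """Replace the contents of ``outputs`` without rebinding the name.

    The caller holds a reference to this same dict, so the object must be
    mutated in place; assigning ``outputs = {...}`` inside this function
    would only rebind the local name and leave the caller's dict untouched.
    """
    outputs.clear()
    outputs["prediction"] = predictions


outputs = {"raw_scores": [0.1, 0.9]}
alias = outputs  # stands in for the reference the trainer keeps
wrap_outputs_inplace(outputs, ["BatchPrediction"])
# ``alias`` now sees {"prediction": [...]} because the same object was mutated.
```

This is why the result list ends up containing wrapper dicts rather than bare predictions: the dict object itself must survive, so only its contents can change.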

on_save_checkpoint(checkpoint: dict[str, Any]) None[source]#

Callback on saving checkpoint.

property input_size: tuple[int, int]#

Returns the input size of the model.

Returns:

The input size of the model as a tuple of (height, width).

Return type:

tuple[int, int]

property task: TaskType#

Return the task type of the model.

property trainable_model: str | None#

Return the name of the sub-model that needs to be trained.

Some models consist of multiple architectures, and only one of them needs to be trained. However, the optimizer is configured in Anomalib’s lightning model, so this property informs the OTX lightning model which sub-model to train.