Reverse Distillation

This is the implementation of the Anomaly Detection via Reverse Distillation from One-Class Embedding paper.

Model Type: Segmentation

Description

The Reverse Distillation model consists of three networks: a pre-trained feature extractor (E), a one-class bottleneck embedding (OCBE) module, and a student decoder network (D). The backbone E is a ResNet model pre-trained on the ImageNet dataset. During the forward pass, features are extracted from three ResNet blocks. These features are encoded by concatenating the three feature maps in the multi-scale feature fusion block of the OCBE and then passed to the decoder D. The decoder network mirrors the feature extractor, but in reverse. During training, the outputs of these symmetrical decoder blocks are forced to be similar to the corresponding feature extractor layers by using cosine distance as the loss metric.
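
As a rough illustration of this training objective, the sketch below computes a cosine-distance loss between corresponding encoder and decoder feature maps. The function name and the per-image flattening are illustrative assumptions, not the exact anomalib implementation.

import torch
import torch.nn.functional as F

def cosine_distillation_loss(encoder_features: list[torch.Tensor],
                             decoder_features: list[torch.Tensor]) -> torch.Tensor:
    """Sum over layers of the mean cosine distance between paired feature maps.

    Each pair has shape (B, C, H, W); the maps are flattened per image before comparison.
    """
    losses = [
        torch.mean(1 - F.cosine_similarity(e.reshape(e.shape[0], -1),
                                           d.reshape(d.shape[0], -1)))
        for e, d in zip(encoder_features, decoder_features)
    ]
    return torch.stack(losses).sum()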

During testing, a similar procedure is followed, but now the cosine distance between the encoder and decoder feature maps is used to indicate the presence of anomalies. The distance maps from all three layers are up-sampled to the image size and added (or multiplied) together to produce the final map. Gaussian blur is applied to this map to make it smoother. Finally, the anomaly map is generated by applying min-max normalization to the result.
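
A minimal sketch of this test-time post-processing is given below. It uses kornia's gaussian_blur2d for smoothing; the kernel-size heuristic and the default sigma are assumptions, not the library's exact AnomalyMapGenerator.

import torch
import torch.nn.functional as F
from kornia.filters import gaussian_blur2d

def compute_anomaly_map(encoder_features, decoder_features,
                        image_size=(256, 256), mode="multiply", sigma=4):
    """Combine per-layer cosine-distance maps into a single smoothed, normalized map."""
    batch_size = encoder_features[0].shape[0]
    device = encoder_features[0].device
    if mode == "multiply":
        anomaly_map = torch.ones(batch_size, 1, *image_size, device=device)
    else:  # "add"
        anomaly_map = torch.zeros(batch_size, 1, *image_size, device=device)

    for e, d in zip(encoder_features, decoder_features):
        # Cosine distance per spatial location, up-sampled to the input resolution
        distance = (1 - F.cosine_similarity(e, d, dim=1)).unsqueeze(1)
        distance = F.interpolate(distance, size=image_size, mode="bilinear", align_corners=False)
        anomaly_map = anomaly_map * distance if mode == "multiply" else anomaly_map + distance

    # Smooth the combined map, then min-max normalize to [0, 1]
    kernel_size = 2 * int(4.0 * sigma + 0.5) + 1
    anomaly_map = gaussian_blur2d(anomaly_map, (kernel_size, kernel_size), sigma=(sigma, sigma))
    return (anomaly_map - anomaly_map.min()) / (anomaly_map.max() - anomaly_map.min())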

Architecture

Reverse Distillation Architecture

Usage

$ python tools/train.py --model reverse_distillation

PyTorch model for Reverse Distillation.

class anomalib.models.reverse_distillation.torch_model.ReverseDistillationModel(backbone: str, input_size: tuple[int, int], layers: list[str], anomaly_map_mode: str, pre_trained: bool = True)[source]

Bases: Module

Reverse Distillation Model.

Parameters:
  • backbone (str) – Name of the backbone used for encoder and decoder

  • input_size (tuple[int, int]) – Size of input image

  • layers (list[str]) – Name of layers from which the features are extracted.

  • anomaly_map_mode (str) – Mode used to generate anomaly map. Options are multiply and add.

  • pre_trained (bool, optional) – Whether to use a pre-trained backbone. Defaults to True.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(images: Tensor) → Tensor | list[Tensor] | tuple[list[Tensor]][source]

Forward-pass images to the network.

In training mode, the model extracts features from the encoder and decoder networks. In evaluation mode, it returns the predicted anomaly map.

Parameters:

images (Tensor) – Batch of images

Returns:

Encoder and decoder features in training mode, else anomaly maps.

Return type:

Tensor | list[Tensor] | tuple[list[Tensor]]

training: bool
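
A hedged usage sketch of the torch model is given below; the backbone name, layer names, and tensor shapes are assumptions that mirror commonly used defaults rather than requirements of the class.

import torch
from anomalib.models.reverse_distillation.torch_model import ReverseDistillationModel

model = ReverseDistillationModel(
    backbone="wide_resnet50_2",             # assumed backbone name
    input_size=(256, 256),
    layers=["layer1", "layer2", "layer3"],  # assumed feature layers
    anomaly_map_mode="multiply",
    pre_trained=True,
)

images = torch.rand(4, 3, 256, 256)

model.train()
encoder_features, decoder_features = model(images)  # lists of per-layer feature maps

model.eval()
with torch.no_grad():
    anomaly_maps = model(images)                    # expected shape: (4, 1, 256, 256)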

Anomaly Detection via Reverse Distillation from One-Class Embedding.

https://arxiv.org/abs/2201.10703v2

class anomalib.models.reverse_distillation.lightning_model.ReverseDistillation(input_size: tuple[int, int], backbone: str, layers: list[str], anomaly_map_mode: str, lr: float, beta1: float, beta2: float, pre_trained: bool = True)[source]

Bases: AnomalyModule

PL Lightning Module for Reverse Distillation Algorithm.

Parameters:
  • input_size (tuple[int, int]) – Size of model input

  • backbone (str) – Backbone of CNN network

  • layers (list[str]) – Layers to extract features from the backbone CNN

  • pre_trained (bool, optional) – Whether to use a pre-trained backbone. Defaults to True.

configure_optimizers() → Adam[source]

Configures optimizers for decoder and bottleneck.

Note

This method is used for the existing CLI. When the PL CLI is introduced, this method will be deprecated, and optimizers will be configured from either the config.yaml file or from the CLI.

Returns:

Adam optimizer for the decoder and bottleneck

Return type:

Optimizer
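
A plausible sketch of this configuration is shown below; the attribute names (self.model.decoder, self.model.bottleneck, self.lr, self.beta1, self.beta2) are assumptions used for illustration.

from torch import optim

def configure_optimizers(self):
    # Only the bottleneck (OCBE) and the student decoder are optimized;
    # the pre-trained encoder stays frozen.
    return optim.Adam(
        params=list(self.model.decoder.parameters())
        + list(self.model.bottleneck.parameters()),
        lr=self.lr,
        betas=(self.beta1, self.beta2),
    )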

training_step(batch: dict[str, str | Tensor], *args, **kwargs) → STEP_OUTPUT[source]

Training Step of Reverse Distillation Model.

Features are extracted from three layers of the encoder model. These are passed to the bottleneck layer, whose output is passed to the decoder network. The loss is then calculated based on the cosine similarity between the encoder and decoder features.

Parameters:

batch (dict[str, str | Tensor]) – Input batch

Returns:

Loss computed from the cosine similarity between encoder and decoder features
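
A rough sketch of such a step, assuming the input images are stored under the "image" key and the cosine-distance loss is exposed as self.loss (both assumptions for illustration):

def training_step(self, batch, *args, **kwargs):
    # Training-mode forward pass returns encoder and decoder feature lists
    encoder_features, decoder_features = self.model(batch["image"])
    # Cosine-distance loss between the symmetrical feature maps
    loss = self.loss(encoder_features, decoder_features)
    return {"loss": loss}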

validation_step(batch: dict[str, str | Tensor], *args, **kwargs) → STEP_OUTPUT[source]

Validation Step of Reverse Distillation Model.

Similar to the training step, encoder/decoder features are extracted from the CNN for each batch, and the anomaly map is computed.

Parameters:

batch (dict[str, str | Tensor]) – Input batch

Returns:

Dictionary containing images, anomaly maps, true labels and masks. These are required in validation_epoch_end for feature concatenation.
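
A sketch of this step, again assuming the "image" key for input images and the "anomaly_maps" key for predictions (key names are assumptions):

def validation_step(self, batch, *args, **kwargs):
    # In evaluation mode the model returns the anomaly map directly
    batch["anomaly_maps"] = self.model(batch["image"])
    return batch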

class anomalib.models.reverse_distillation.lightning_model.ReverseDistillationLightning(hparams: DictConfig | ListConfig)[source]

Bases: ReverseDistillation

PL Lightning Module for Reverse Distillation Algorithm.

Parameters:

hparams (DictConfig | ListConfig) – Model parameters

configure_callbacks() → list[EarlyStopping][source]

Configure model-specific callbacks.

Note

This method is used for the existing CLI. When the PL CLI is introduced, this method will be deprecated, and callbacks will be configured from either the config.yaml file or from the CLI.
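
One plausible shape for this callback setup is sketched below, assuming the early-stopping settings live under hparams.model.early_stopping (a hypothetical config layout).

from pytorch_lightning.callbacks import EarlyStopping

def configure_callbacks(self):
    early_stopping = EarlyStopping(
        monitor=self.hparams.model.early_stopping.metric,
        patience=self.hparams.model.early_stopping.patience,
        mode=self.hparams.model.early_stopping.mode,
    )
    return [early_stopping]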

Compute Anomaly map.

class anomalib.models.reverse_distillation.anomaly_map.AnomalyMapGenerator(image_size: ListConfig | tuple, sigma: int = 4, mode: str = 'multiply')[source]

Bases: Module

Generate Anomaly Heatmap.

Parameters:
  • image_size (ListConfig, tuple) – Size of original image used for upscaling the anomaly map.

  • sigma (int) – Standard deviation of the Gaussian kernel used to smooth the anomaly map.

  • mode (str, optional) – Operation used to generate anomaly map. Options are add and multiply. Defaults to “multiply”.

Raises:

ValueError – In case modes other than multiply and add are passed.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(student_features: list[Tensor], teacher_features: list[Tensor]) → Tensor[source]

Computes anomaly map given encoder and decoder features.

Parameters:
  • student_features (list[Tensor]) – List of encoder features

  • teacher_features (list[Tensor]) – List of decoder features

Returns:

Anomaly maps, one per image in the batch.

Return type:

Tensor

training: bool
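
A hedged usage sketch of the generator; the feature-map shapes are illustrative, and the keyword arguments follow the documented parameter descriptions.

import torch
from anomalib.models.reverse_distillation.anomaly_map import AnomalyMapGenerator

generator = AnomalyMapGenerator(image_size=(256, 256), sigma=4, mode="multiply")

# Paired feature maps from three symmetrical blocks (channel/spatial sizes are illustrative)
shapes = [(256, 64), (512, 32), (1024, 16)]
encoder_features = [torch.rand(4, c, s, s) for c, s in shapes]
decoder_features = [torch.rand(4, c, s, s) for c, s in shapes]

anomaly_map = generator(student_features=encoder_features,
                        teacher_features=decoder_features)
print(anomaly_map.shape)  # expected: torch.Size([4, 1, 256, 256])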