Draem¶
This is the implementation of the DRAEM paper.
Model Type: Segmentation
Description¶
DRAEM is a reconstruction-based algorithm that consists of a reconstructive subnetwork and a discriminative subnetwork. DRAEM is trained on simulated anomaly images, generated by augmenting normal input images from the training set with a random Perlin noise mask extracted from an unrelated source of image data.

The reconstructive subnetwork is an autoencoder architecture that is trained to reconstruct the original input images from the augmented images, using a combination of L2 loss and Structural Similarity (SSIM) loss.

The input of the discriminative subnetwork consists of the channel-wise concatenation of the (augmented) input image and the output of the reconstructive subnetwork. Its output is an anomaly map that contains the predicted anomaly score for each pixel location. The discriminative subnetwork is trained using Focal Loss.
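The anomaly simulation step can be sketched roughly as follows. This is a simplified illustration with hypothetical names, not the library's actual augmenter: it thresholds plain random noise where the real implementation uses a Perlin noise mask.

```python
import numpy as np

def simulate_anomaly(image: np.ndarray, source: np.ndarray, beta: float = 0.5):
    """Blend an anomaly-source texture into a normal image under a random mask.

    image, source: float arrays in [0, 1] with shape (H, W, C).
    Returns the augmented image and the binary ground-truth mask.
    """
    h, w, _ = image.shape
    # Stand-in for the Perlin noise mask: threshold random noise.
    noise = np.random.rand(h, w)
    mask = (noise > 0.8).astype(np.float32)[..., None]  # (H, W, 1)
    # Blend the source texture into the masked regions; outside the mask
    # the image is left untouched.
    augmented = image * (1 - mask) + (beta * source + (1 - beta) * image) * mask
    return augmented, mask

normal = np.random.rand(8, 8, 3)
texture = np.random.rand(8, 8, 3)
augmented, mask = simulate_anomaly(normal, texture)
```

The augmented image together with the mask forms one simulated training sample: the reconstructive subnetwork sees `augmented` and is asked to recover `normal`, while the mask serves as the pixel-level target for the discriminative subnetwork.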
For optimal results, DRAEM requires specifying the path to a folder of image data that will be used as the source of the anomalous pixel regions in the simulated anomaly images. The path can be specified by editing the value of the model.anomaly_source_path parameter in the config.yaml file. The authors of the original paper recommend using the DTD dataset as the anomaly source.
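For example, the relevant entry in config.yaml might look like the fragment below; the path shown is a hypothetical local location for the DTD images, so adjust it to wherever the dataset is stored:

```yaml
model:
  name: draem
  anomaly_source_path: ./datasets/dtd  # hypothetical local path to the DTD images
```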
Architecture¶

Usage¶
$ python tools/train.py --model draem
PyTorch implementation of the DRAEM model.
- class anomalib.models.draem.torch_model.DecoderDiscriminative(base_width: int, out_channels: int = 1)[source]¶
Bases:
Module
Decoder part of the discriminator network.
- Parameters:
base_width (int) – Base dimensionality of the layers of the autoencoder.
out_channels (int) – Number of output channels.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(act1: Tensor, act2: Tensor, act3: Tensor, act4: Tensor, act5: Tensor, act6: Tensor) Tensor [source]¶
Compute predicted anomaly class scores from the intermediate outputs of the encoder subnetwork.
- Parameters:
act1 (Tensor) – Encoder activations of the first block of convolutional layers.
act2 (Tensor) – Encoder activations of the second block of convolutional layers.
act3 (Tensor) – Encoder activations of the third block of convolutional layers.
act4 (Tensor) – Encoder activations of the fourth block of convolutional layers.
act5 (Tensor) – Encoder activations of the fifth block of convolutional layers.
act6 (Tensor) – Encoder activations of the sixth block of convolutional layers.
- Returns:
Predicted anomaly class scores per pixel.
- training: bool¶
- class anomalib.models.draem.torch_model.DecoderReconstructive(base_width: int, out_channels: int = 1)[source]¶
Bases:
Module
Decoder part of the reconstructive network.
- Parameters:
base_width (int) – Base dimensionality of the layers of the autoencoder.
out_channels (int) – Number of output channels.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(act5: Tensor) Tensor [source]¶
Reconstruct the image from the activations of the bottleneck layer.
- Parameters:
act5 (Tensor) – Activations of the bottleneck layer.
- Returns:
Batch of reconstructed images.
- training: bool¶
- class anomalib.models.draem.torch_model.DiscriminativeSubNetwork(in_channels: int = 3, out_channels: int = 3, base_width: int = 64)[source]¶
Bases:
Module
Discriminative model that predicts the anomaly mask from the original image and its reconstruction.
- Parameters:
in_channels (int) – Number of input channels.
out_channels (int) – Number of output channels.
base_width (int) – Base dimensionality of the layers of the autoencoder.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(batch: Tensor) Tensor [source]¶
Generate the predicted anomaly masks for a batch of input images.
- Parameters:
batch (Tensor) – Batch of inputs consisting of the concatenation of the original images and their reconstructions.
- Returns:
Activations of the output layer corresponding to the normal and anomalous class scores on the pixel level.
- training: bool¶
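The channel-wise concatenation that forms the discriminative subnetwork's input can be illustrated with plain arrays; this is a minimal sketch in which NumPy stands in for torch tensors:

```python
import numpy as np

# A batch of RGB images and their reconstructions, in (N, C, H, W) layout.
images = np.random.rand(4, 3, 64, 64)
reconstructions = np.random.rand(4, 3, 64, 64)

# The discriminative subnetwork receives both, concatenated along the
# channel axis, so an RGB input yields 2 * 3 = 6 input channels.
joined = np.concatenate([images, reconstructions], axis=1)
print(joined.shape)  # (4, 6, 64, 64)
```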
- class anomalib.models.draem.torch_model.DraemModel(sspcab: bool = False)[source]¶
Bases:
Module
DRAEM PyTorch model consisting of the reconstructive and discriminative subnetworks.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(batch: Tensor) Tensor | tuple[Tensor, Tensor] [source]¶
Compute the reconstruction and anomaly mask from an input image.
- Parameters:
batch (Tensor) – Batch of input images.
- Returns:
Predicted confidence values of the anomaly mask. During training, the reconstructed input images are returned as well.
- training: bool¶
- class anomalib.models.draem.torch_model.EncoderDiscriminative(in_channels: int, base_width: int)[source]¶
Bases:
Module
Encoder part of the discriminator network.
- Parameters:
in_channels (int) – Number of input channels.
base_width (int) – Base dimensionality of the layers of the autoencoder.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(batch: Tensor) tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor] [source]¶
Convert the inputs to the salient space by running them through the encoder network.
- Parameters:
batch (Tensor) – Batch of inputs consisting of the concatenation of the original images and their reconstructions.
- Returns:
Computed feature maps for each of the layers in the encoder subnetwork.
- training: bool¶
- class anomalib.models.draem.torch_model.EncoderReconstructive(in_channels: int, base_width: int, sspcab: bool = False)[source]¶
Bases:
Module
Encoder part of the reconstructive network.
- Parameters:
in_channels (int) – Number of input channels.
base_width (int) – Base dimensionality of the layers of the autoencoder.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(batch: Tensor) Tensor [source]¶
Encode a batch of input images to the salient space.
- Parameters:
batch (Tensor) – Batch of input images.
- Returns:
Feature maps extracted from the bottleneck layer.
- training: bool¶
- class anomalib.models.draem.torch_model.ReconstructiveSubNetwork(in_channels: int = 3, out_channels: int = 3, base_width=128, sspcab: bool = False)[source]¶
Bases:
Module
Autoencoder model that encodes and reconstructs the input image.
- Parameters:
in_channels (int) – Number of input channels.
out_channels (int) – Number of output channels.
base_width (int) – Base dimensionality of the layers of the autoencoder.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(batch: Tensor) Tensor [source]¶
Encode and reconstruct the input images.
- Parameters:
batch (Tensor) – Batch of input images.
- Returns:
Batch of reconstructed images.
- training: bool¶
DRÆM – A discriminatively trained reconstruction embedding for surface anomaly detection.
Paper https://arxiv.org/abs/2108.07610
- class anomalib.models.draem.lightning_model.Draem(enable_sspcab: bool = False, sspcab_lambda: float = 0.1, anomaly_source_path: str | None = None)[source]¶
Bases:
AnomalyModule
DRÆM: A discriminatively trained reconstruction embedding for surface anomaly detection.
- Parameters:
anomaly_source_path (str | None) – Path to folder that contains the anomaly source images. Random noise will be used if left empty.
- setup_sspcab() None [source]¶
Prepare the model for the SSPCAB training step by adding forward hooks for the SSPCAB layer activations.
- training_step(batch: dict[str, str | Tensor], *args, **kwargs) STEP_OUTPUT [source]¶
Training step of DRAEM.
Feeds the original image and the simulated anomaly image through the network and computes the training loss.
- Parameters:
batch (dict[str, str | Tensor]) – Batch containing the image filename, image, label, and mask.
- Returns:
Loss dictionary
- validation_step(batch: dict[str, str | Tensor], *args, **kwargs) STEP_OUTPUT [source]¶
Validation step of DRAEM. The Softmax predictions of the anomalous class are used as anomaly map.
- Parameters:
batch (dict[str, str | Tensor]) – Batch of input images.
- Returns:
Dictionary to which predicted anomaly maps have been added.
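The "Softmax predictions of the anomalous class" step can be sketched as follows; this is a NumPy stand-in with hypothetical names, assuming the discriminative subnetwork emits two pixel-wise class channels (normal, anomalous):

```python
import numpy as np

def anomaly_map_from_logits(logits: np.ndarray) -> np.ndarray:
    """Turn two-channel pixel logits (N, 2, H, W) into an anomaly map (N, H, W).

    Channel 0 holds normal-class scores and channel 1 anomalous-class scores;
    the softmax probability of channel 1 is used as the per-pixel anomaly score.
    """
    # Numerically stable softmax over the class channel.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=1, keepdims=True)
    return probs[:, 1]

logits = np.random.randn(2, 2, 32, 32)
anomaly_map = anomaly_map_from_logits(logits)
```

Each value in the resulting map lies in [0, 1] and can be thresholded or aggregated into an image-level anomaly score downstream.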
- class anomalib.models.draem.lightning_model.DraemLightning(hparams: DictConfig | ListConfig)[source]¶
Bases:
Draem
DRÆM: A discriminatively trained reconstruction embedding for surface anomaly detection.
- Parameters:
hparams (DictConfig | ListConfig) – Model parameters.