FastFlow¶
This is the implementation of the FastFlow paper.
Model Type: Segmentation
Description¶
FastFlow is a two-dimensional normalizing flow-based probability distribution estimator. It can be used as a plug-in module with any deep feature extractor, such as a ResNet or a vision transformer, for unsupervised anomaly detection and localization. In the training phase, FastFlow learns to transform the input visual features into a tractable distribution, and in the inference phase it uses the likelihood of a feature under this distribution to detect and localize anomalies.
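The tractable distribution mentioned above comes from the standard change-of-variables identity for normalizing flows: the flow \(f\) maps a visual feature \(x\) to a latent \(z = f(x)\) with a simple (standard normal) prior, so that

\[ \log p_X(x) = \log p_Z\big(f(x)\big) + \log \left|\det \frac{\partial f(x)}{\partial x}\right|, \]

and a low likelihood under this model indicates an anomalous feature. This identity is given here for context; it is the generic normalizing-flow formulation rather than an equation quoted from the paper.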
Architecture¶

Usage¶
$ python tools/train.py --model fastflow
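A minimal programmatic sketch is shown below. It assumes PyTorch Lightning and a user-supplied anomalib datamodule (the datamodule construction is omitted), and uses the Fastflow Lightning module documented further down this page; the backbone name and trainer settings are illustrative choices.

from pytorch_lightning import Trainer

from anomalib.models.fastflow.lightning_model import Fastflow

# Build the Lightning module with the arguments documented below.
# "resnet18" is an illustrative backbone choice.
model = Fastflow(input_size=(256, 256), backbone="resnet18", flow_steps=8)

trainer = Trainer(max_epochs=500)  # illustrative trainer settings
# trainer.fit(model, datamodule=datamodule)  # datamodule: any anomalib datamodule, e.g. MVTec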
FastFlow Torch Model Implementation.
- class anomalib.models.fastflow.torch_model.FastflowModel(input_size: tuple[int, int], backbone: str, pre_trained: bool = True, flow_steps: int = 8, conv3x3_only: bool = False, hidden_ratio: float = 1.0)[source]¶
Bases:
Module
FastFlow.
Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows.
- Parameters:
input_size (tuple[int, int]) – Model input size.
backbone (str) – Backbone CNN network.
pre_trained (bool, optional) – Whether to use a pre-trained backbone. Defaults to True.
flow_steps (int, optional) – Number of flow steps. Defaults to 8.
conv3x3_only (bool, optional) – Use only conv3x3 in the FastFlow model. Defaults to False.
hidden_ratio (float, optional) – Ratio used to calculate the number of hidden variable channels. Defaults to 1.0.
- Raises:
ValueError – When the backbone is not supported.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
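A minimal instantiation sketch using the documented signature; the input size and backbone name are illustrative choices, not prescribed defaults.

import torch

from anomalib.models.fastflow.torch_model import FastflowModel

model = FastflowModel(
    input_size=(256, 256),  # (height, width) of the model input
    backbone="resnet18",    # illustrative backbone; must be one supported by the implementation
    pre_trained=True,
    flow_steps=8,
    conv3x3_only=False,
    hidden_ratio=1.0,
)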
- forward(input_tensor: Tensor) Tensor | list[Tensor] | tuple[list[Tensor]] [source]¶
Forward-pass the input through the FastFlow model.
- Parameters:
input_tensor (Tensor) – Input tensor.
- Returns:
During training, the tuple (hidden_variables, log-of-the-jacobian-determinants); during validation/test, the anomaly map.
- Return type:
Tensor | list[Tensor] | tuple[list[Tensor]]
- training: bool¶
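Continuing the sketch above, the return value depends on the module's mode, as described in the return section; the shapes used here are illustrative, not verified output.

x = torch.randn(1, 3, 256, 256)  # dummy batch matching input_size

model.train()
hidden_variables, log_jacobians = model(x)  # training mode: hidden variables and log-Jacobian determinants

model.eval()
with torch.no_grad():
    anomaly_map = model(x)                  # validation/test mode: anomaly map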
- anomalib.models.fastflow.torch_model.create_fast_flow_block(input_dimensions: list[int], conv3x3_only: bool, hidden_ratio: float, flow_steps: int, clamp: float = 2.0) SequenceINN [source]¶
Create an NF FastFlow block.
This creates a Normalizing Flow (NF) FastFlow model block based on Figure 2 and Section 3.3 of the paper.
- Parameters:
input_dimensions (list[int]) – Input dimensions (Channel, Height, Width).
conv3x3_only (bool) – Whether to use only conv3x3, or both conv3x3 and conv1x1.
hidden_ratio (float) – Ratio for the hidden layer channels.
flow_steps (int) – Number of flow steps.
clamp (float, optional) – Clamping value for the coupling blocks. Defaults to 2.0.
- Returns:
FastFlow Block.
- Return type:
SequenceINN
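A short sketch of calling this helper directly; the feature-map dimensions are hypothetical, and the call convention of the returned FrEIA SequenceINN (returning the transformed tensor together with the log-Jacobian determinant) is assumed rather than documented on this page.

import torch

from anomalib.models.fastflow.torch_model import create_fast_flow_block

# One NF FastFlow block for a hypothetical 256 x 32 x 32 feature map.
fast_flow_block = create_fast_flow_block(
    input_dimensions=[256, 32, 32],  # (Channel, Height, Width)
    conv3x3_only=False,
    hidden_ratio=1.0,
    flow_steps=8,
)

hidden_variable, log_jacobian = fast_flow_block(torch.randn(1, 256, 32, 32))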
- anomalib.models.fastflow.torch_model.subnet_conv_func(kernel_size: int, hidden_ratio: float) Callable [source]¶
Subnet Convolutional Function.
- Callable class or function \(f\), called as f(channels_in, channels_out), which should return a torch.nn.Module. Predicts the coupling coefficients \(s, t\).
- Parameters:
kernel_size (int) – Kernel size.
hidden_ratio (float) – Hidden ratio used to compute the number of hidden channels.
- Returns:
Sequential for the subnet constructor.
- Return type:
Callable
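A minimal sketch of how the returned constructor is used; the channel counts are arbitrary.

from anomalib.models.fastflow.torch_model import subnet_conv_func

# Build the constructor, then call it as f(channels_in, channels_out)
# to obtain a torch.nn.Module predicting the coupling coefficients s and t.
subnet_constructor = subnet_conv_func(kernel_size=3, hidden_ratio=1.0)
subnet = subnet_constructor(64, 128)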
FastFlow Lightning Model Implementation.
- class anomalib.models.fastflow.lightning_model.Fastflow(input_size: tuple[int, int], backbone: str, pre_trained: bool = True, flow_steps: int = 8, conv3x3_only: bool = False, hidden_ratio: float = 1.0)[source]¶
Bases:
AnomalyModule
PL Lightning Module for the FastFlow algorithm.
- Parameters:
input_size (tuple[int, int]) – Model input size.
backbone (str) – Backbone CNN network.
pre_trained (bool, optional) – Whether to use a pre-trained backbone. Defaults to True.
flow_steps (int, optional) – Number of flow steps. Defaults to 8.
conv3x3_only (bool, optional) – Use only conv3x3 in the FastFlow model. Defaults to False.
hidden_ratio (float, optional) – Ratio used to calculate the number of hidden variable channels. Defaults to 1.0.
- class anomalib.models.fastflow.lightning_model.FastflowLightning(hparams: DictConfig | ListConfig)[source]¶
Bases:
Fastflow
PL Lightning Module for the FastFlow algorithm.
- Parameters:
hparams (DictConfig | ListConfig) – Model params
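A sketch of constructing the module from a configuration object. The config layout below (a model section holding the arguments listed for Fastflow above) is an assumption about how the hparams are organised, not something documented on this page.

from omegaconf import OmegaConf

from anomalib.models.fastflow.lightning_model import FastflowLightning

# Hypothetical config layout; the config.yaml shipped with anomalib may contain additional keys.
hparams = OmegaConf.create(
    {
        "model": {
            "input_size": [256, 256],
            "backbone": "resnet18",
            "pre_trained": True,
            "flow_steps": 8,
            "conv3x3_only": False,
            "hidden_ratio": 1.0,
        }
    }
)
model = FastflowLightning(hparams=hparams)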
- configure_callbacks() list[EarlyStopping] [source]¶
Configure model-specific callbacks.
Note
This method is used by the existing CLI. When the PL CLI is introduced, the configure_callbacks method will be deprecated, and callbacks will be configured from either the config.yaml file or the CLI.
- configure_optimizers() Optimizer [source]¶
Configures the Adam optimizer used to train the model.
Note
This method is used by the existing CLI. When the PL CLI is introduced, the configure_optimizers method will be deprecated, and optimizers will be configured from either the config.yaml file or the CLI.
- Returns:
Adam optimizer.
- Return type:
Optimizer
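For reference, the equivalent manual construction for the model built in the earlier sketches is shown below; the learning rate and weight decay follow the values reported in the FastFlow paper (1e-3 and 1e-5) and are assumptions rather than values read from this page.

from torch.optim import Adam

# Assumed hyperparameters (paper values); anomalib's config may specify different ones.
optimizer = Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)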
FastFlow Anomaly Map Generator Implementation.
- class anomalib.models.fastflow.anomaly_map.AnomalyMapGenerator(input_size: ListConfig | tuple)[source]¶
Bases:
Module
Generate Anomaly Heatmap.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(hidden_variables: list[Tensor]) Tensor [source]¶
Generate Anomaly Heatmap.
This implementation generates the heatmap based on the flow maps computed from the normalizing flow (NF) FastFlow blocks. Each block yields a flow map; the maps are stacked and averaged to produce the anomaly map.
- Parameters:
hidden_variables (list[Tensor]) – List of hidden variables from each NF FastFlow block.
- Returns:
Anomaly Map.
- Return type:
Tensor
- training: bool¶
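A short usage sketch of the generator; the hidden-variable shapes are hypothetical and only illustrate the stack-and-average behaviour described above.

import torch

from anomalib.models.fastflow.anomaly_map import AnomalyMapGenerator

generator = AnomalyMapGenerator(input_size=(256, 256))

# Hypothetical hidden variables from two NF FastFlow blocks.
hidden_variables = [torch.randn(1, 256, 32, 32), torch.randn(1, 512, 16, 16)]

# Each hidden variable yields a likelihood-based flow map; the maps are stacked
# and averaged into a single anomaly map at the requested input size.
anomaly_map = generator(hidden_variables)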