otx.algorithms.anomaly.adapters.anomalib.plugins#

Plugin for mixed-precision training on XPU.

Classes

MixedPrecisionXPUPlugin([scaler])

Plugin for Automatic Mixed Precision (AMP) training with torch.xpu.autocast.

class otx.algorithms.anomaly.adapters.anomalib.plugins.MixedPrecisionXPUPlugin(scaler: Any | None = None)[source]#

Bases: PrecisionPlugin

Plugin for Automatic Mixed Precision (AMP) training with torch.xpu.autocast.

Parameters:

scaler – An optional torch.cuda.amp.GradScaler to use.
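
A minimal usage sketch (not taken from the OTX docs): the plugin is handed to a Lightning Trainer so that forward passes run under torch.xpu.autocast. The accelerator and devices settings below are assumptions and require an XPU-capable accelerator to be registered with Lightning.

    from pytorch_lightning import Trainer

    from otx.algorithms.anomaly.adapters.anomalib.plugins import MixedPrecisionXPUPlugin

    # Assumed setup: an XPU accelerator registered with Lightning and one XPU device.
    trainer = Trainer(
        accelerator="xpu",
        devices=1,
        plugins=[MixedPrecisionXPUPlugin()],  # scaler argument is optional
    )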

clip_gradients(optimizer: Optimizer, clip_val: int | float = 0.0, gradient_clip_algorithm: GradClipAlgorithmType = GradClipAlgorithmType.NORM) None[source]#

Handles gradient clipping with the scaler.
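
Conceptually this is the usual scaler-aware clipping pattern: gradients produced under a GradScaler are scaled, so they must be unscaled before a norm-based clip is applied. The sketch below is illustrative only, not the plugin's code, and uses enabled=False so it runs without a CUDA or XPU device.

    import torch
    from torch import nn

    model = nn.Linear(4, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler(enabled=False)

    loss = model(torch.randn(8, 4)).sum()
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)  # undo the loss scaling on the gradients
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)
    scaler.update()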

forward_context() Generator[None, None, None][source]#

Enables the autocast context for the forward pass.
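
A runnable sketch of an autocast forward context. The real plugin targets torch.xpu.autocast on XPU devices; torch.autocast("cpu", dtype=torch.bfloat16) is substituted here only so the example runs on any build.

    from contextlib import contextmanager
    from typing import Generator

    import torch
    from torch import nn

    @contextmanager
    def forward_context() -> Generator[None, None, None]:
        # Stand-in for torch.xpu.autocast on non-XPU builds.
        with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
            yield

    model = nn.Linear(4, 2)
    with forward_context():
        out = model(torch.randn(1, 4))
    print(out.dtype)  # torch.bfloat16 inside the autocast region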

load_state_dict(state_dict: Dict[str, Any]) None[source]#

Loads a state dict into the plugin.

optimizer_step(optimizer: Optimizable, model: LightningModule, optimizer_idx: int, closure: Callable[[], Any], **kwargs: Any) Any[source]#

Performs an optimizer step, using the scaler if one was passed.
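
The sketch below shows the scaler-aware step this description refers to: the closure is evaluated on a scaled loss, then the step goes through scaler.step() and scaler.update(). Names are illustrative; enabled=False keeps the snippet runnable without a CUDA or XPU device.

    import torch
    from torch import nn

    model = nn.Linear(4, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler(enabled=False)

    def closure() -> torch.Tensor:
        optimizer.zero_grad()
        loss = model(torch.randn(8, 4)).sum()
        scaler.scale(loss).backward()
        return loss

    closure()
    scaler.step(optimizer)  # skips the step if inf/NaN gradients were found
    scaler.update()         # adjusts the scale factor for the next iteration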

pre_backward(tensor: Tensor, module: LightningModule) Tensor[source]#

Applies the grad scaler to the loss tensor before backward.
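
Illustrative sketch of what a scaler-backed pre-backward hook amounts to: the loss tensor is multiplied by the scaler's current scale before backward(), so gradients are computed in the scaled range. enabled=False keeps it runnable on CPU.

    import torch
    from torch import nn

    model = nn.Linear(4, 2)
    scaler = torch.cuda.amp.GradScaler(enabled=False)

    loss = model(torch.randn(8, 4)).sum()
    scaled_loss = scaler.scale(loss)  # the scaling a pre-backward hook would apply
    scaled_loss.backward()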

state_dict() Dict[str, Any][source]#

Returns the state dict of the plugin.
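
A round-trip sketch covering state_dict() and load_state_dict(), under the assumption that the plugin's persisted state is essentially its GradScaler state (an empty dict when no scaler is used). enabled=False keeps the snippet runnable on any machine.

    import torch

    scaler = torch.cuda.amp.GradScaler(enabled=False)
    saved = scaler.state_dict()        # the kind of dict the plugin would return
    restored = torch.cuda.amp.GradScaler(enabled=False)
    restored.load_state_dict(saved)    # the kind of dict load_state_dict would restore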