otx.algorithms.anomaly.adapters.anomalib.plugins#
Plugin for mixed-precision training on XPU.
Classes

MixedPrecisionXPUPlugin(scaler): Plugin for Automatic Mixed Precision (AMP) training with torch.xpu.autocast.
- class otx.algorithms.anomaly.adapters.anomalib.plugins.MixedPrecisionXPUPlugin(scaler: Any | None = None)[source]#
Bases: PrecisionPlugin

Plugin for Automatic Mixed Precision (AMP) training with torch.xpu.autocast.

- Parameters:
scaler – An optional torch.cuda.amp.GradScaler to use.
- clip_gradients(optimizer: Optimizer, clip_val: int | float = 0.0, gradient_clip_algorithm: GradClipAlgorithmType = GradClipAlgorithmType.NORM) → None[source]#
Handle gradient clipping with the scaler: gradients are unscaled before the clip threshold is applied.
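A minimal sketch of why scaler-aware clipping matters: with a gradient scaler, stored gradients are multiplied by the loss scale, so they must be unscaled before comparing against the clip threshold. `ToyOptimizer` and `ToyScaler` below are illustrative stand-ins, not the real torch.optim.Optimizer or torch.cuda.amp.GradScaler classes.

```python
class ToyOptimizer:
    def __init__(self, grads):
        self.grads = grads  # stand-in for parameter gradients

class ToyScaler:
    """Mimics a GradScaler: gradients are stored scaled by a loss-scale factor."""
    def __init__(self, scale=4.0):
        self.scale = scale

    def unscale_(self, optimizer):
        # Divide out the loss scale so gradients are back in "true" units.
        optimizer.grads = [g / self.scale for g in optimizer.grads]

def clip_gradients(optimizer, scaler=None, clip_val=1.0):
    # Unscale first; otherwise clip_val would be compared against scaled values.
    if scaler is not None:
        scaler.unscale_(optimizer)
    norm = sum(g * g for g in optimizer.grads) ** 0.5
    if norm > clip_val:
        optimizer.grads = [g * clip_val / norm for g in optimizer.grads]
    return optimizer.grads

# Scaled gradients [8.0, 0.0] with scale 4.0 correspond to true grads [2.0, 0.0].
opt = ToyOptimizer([8.0, 0.0])
clipped = clip_gradients(opt, ToyScaler(scale=4.0), clip_val=1.0)
print(clipped)  # → [1.0, 0.0]
```

Clipping the still-scaled gradients here would have produced [1.0, 0.0] scaled, i.e. true gradients of [0.25, 0.0], silently over-clipping by the loss-scale factor.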
- optimizer_step(optimizer: Optimizable, model: LightningModule, optimizer_idx: int, closure: Callable[[], Any], **kwargs: Any) → Any[source]#
Make an optimizer step, going through the scaler if one was passed to the plugin.
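The "step through the scaler if present" pattern can be sketched as below. This is a hedged illustration of the general AMP flow, not the plugin's actual implementation; `ToyOptimizer` and `ToyScaler` are hypothetical stubs for the real torch classes.

```python
class ToyOptimizer:
    def __init__(self):
        self.steps = 0

    def step(self):
        self.steps += 1

class ToyScaler:
    """Mimics GradScaler.step/update: a real scaler skips the step on inf/nan grads."""
    def __init__(self):
        self.found_inf = False
        self.updates = 0

    def step(self, optimizer):
        if not self.found_inf:
            optimizer.step()

    def update(self):
        self.updates += 1  # the real scaler adjusts its loss scale here

def optimizer_step(optimizer, closure, scaler=None):
    loss = closure()  # closure runs the forward/backward pass
    if scaler is None:
        optimizer.step()  # plain step when no scaler was passed
    else:
        scaler.step(optimizer)  # steps only if gradients are finite
        scaler.update()
    return loss

opt, scaler = ToyOptimizer(), ToyScaler()
optimizer_step(opt, closure=lambda: 0.5, scaler=scaler)
print(opt.steps, scaler.updates)  # → 1 1
```

Routing the step through the scaler lets overflow iterations be skipped and the loss scale adjusted, which is why the plugin only uses the scaler when one was supplied at construction time.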