otx.algo.plugins#

Plugin for mixed-precision training on XPU.

Classes

MixedPrecisionXPUPlugin([scaler])

Plugin for Automatic Mixed Precision (AMP) training with torch.xpu.autocast.

class otx.algo.plugins.MixedPrecisionXPUPlugin(scaler: GradScaler | None = None)[source]#

Bases: Precision

Plugin for Automatic Mixed Precision (AMP) training with torch.xpu.autocast.

Parameters:

scaler – An optional torch.cuda.amp.GradScaler to use.

clip_gradients(optimizer: Optimizer, clip_val: int | float = 0.0, gradient_clip_algorithm: GradClipAlgorithmType = GradClipAlgorithmType.NORM) None[source]#

Handle gradient clipping with the scaler.
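The default clip algorithm is `GradClipAlgorithmType.NORM`, i.e. clipping by global L2 norm. A minimal pure-Python sketch of that algorithm (the `clip_grad_norm_` helper here is a toy stand-in, not this plugin's API; the real method operates on the optimizer's parameter gradients after the scaler has unscaled them):

```python
import math

def clip_grad_norm_(grads: list[float], clip_val: float) -> list[float]:
    """Scale gradients so their global L2 norm is at most clip_val.

    Toy illustration of NORM clipping; real implementations work in place
    on parameter .grad tensors.
    """
    total_norm = math.sqrt(sum(g * g for g in grads))
    if clip_val > 0 and total_norm > clip_val:
        scale = clip_val / total_norm
        return [g * scale for g in grads]
    return list(grads)

# grads with global norm 5.0 are rescaled to norm 1.0
clipped = clip_grad_norm_([3.0, 4.0], 1.0)
```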

forward_context() Generator[None, None, None][source]#

Enable the autocast context for the forward pass.

load_state_dict(state_dict: dict[str, Tensor]) None[source]#

Load the state dict into the plugin.

optimizer_step(optimizer: Optimizable, model: pl.LightningModule, closure: Callable, **kwargs: dict) None | dict[source]#

Make an optimizer step using the scaler, if one was passed.
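When a scaler is present, a scaled step follows the usual AMP sequence: backward runs on a scaled loss, gradients are unscaled before use, and the step is skipped when a gradient overflowed. A pure-Python sketch of that sequence (`ToyGradScaler` and its methods are hypothetical illustrations, not `torch.cuda.amp.GradScaler`'s API):

```python
class ToyGradScaler:
    """Toy loss scaler illustrating the scale -> unscale -> step sequence."""

    def __init__(self, init_scale: float = 2.0 ** 16) -> None:
        self.scale_factor = init_scale

    def scale(self, loss: float) -> float:
        # Backward on a scaled loss produces scaled gradients.
        return loss * self.scale_factor

    def unscale(self, grads: list[float]) -> list[float]:
        # Recover true gradients before clipping or stepping.
        return [g / self.scale_factor for g in grads]

    def step(self, params: list[float], grads: list[float], lr: float = 0.1) -> list[float]:
        # Skip the update entirely if any gradient is inf or NaN.
        if any(g != g or abs(g) == float("inf") for g in grads):
            self.scale_factor /= 2.0  # back off the scale after an overflow
            return params
        return [p - lr * g for p, g in zip(params, grads)]

scaler = ToyGradScaler(init_scale=4.0)
params = [1.0]
scaled_grads = [g * scaler.scale_factor for g in [0.5]]  # grads from a scaled loss
grads = scaler.unscale(scaled_grads)                      # true gradients again
params = scaler.step(params, grads)                       # plain SGD update
```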

pre_backward(tensor: Tensor, module: pl.LightningModule) Tensor[source]#

Apply the grad scaler before the backward pass.

state_dict() dict[str, Any][source]#

Return the state dict of the plugin.
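`state_dict()` and `load_state_dict()` together let the plugin's scaler state survive checkpointing, so training can resume with the same loss scale. A minimal sketch of that round trip (`ToyPlugin` and its `scale` field are hypothetical, not this class's real state):

```python
class ToyPlugin:
    """Toy plugin persisting a single scaler value across checkpoints."""

    def __init__(self, scale: float = 2.0 ** 16) -> None:
        self.scale = scale

    def state_dict(self) -> dict:
        return {"scale": self.scale}

    def load_state_dict(self, state_dict: dict) -> None:
        self.scale = state_dict["scale"]

src = ToyPlugin(scale=1024.0)
ckpt = src.state_dict()      # what would be written into the checkpoint
dst = ToyPlugin()
dst.load_state_dict(ckpt)    # fresh plugin resumes with the saved scale
```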