otx.algorithms.common.adapters.mmcv#
Adapters for mmcv support.
Functions
| multi_scale_deformable_attn_pytorch | Custom patch for multi_scale_deformable_attn_pytorch function. |
Classes
| EpochRunnerWithCancel | Simple modification to EpochBasedRunner to allow cancelling the training during an epoch. |
| IterBasedRunnerWithCancel | Runner With Cancel for early-stopping (Iter based). |
| CheckpointHookWithValResults | Save checkpoints periodically. |
| CustomEvalHook | Custom Evaluation hook for the OTX. |
| Fp16SAMOptimizerHook | Sharpness-aware Minimization optimizer hook. |
| IBLossHook | Hook for IB loss. |
| SAMOptimizerHook | Sharpness-aware Minimization optimizer hook. |
| NoBiasDecayHook | Hook for No Bias Decay Method (Bag of Tricks for Image Classification). |
| SemiSLClsHook | Hook for SemiSL for classification. |
| CancelTrainingHook | CancelTrainingHook for Training Stopping. |
| OTXLoggerHook | OTXLoggerHook for Logging. |
| OTXProgressHook | OTXProgressHook for getting progress. |
| EarlyStoppingHook | Cancel training when a metric has stopped improving. |
| ReduceLROnPlateauLrUpdaterHook | Reduce learning rate when a metric has stopped improving. |
| EnsureCorrectBestCheckpointHook | EnsureCorrectBestCheckpointHook. |
| StopLossNanTrainingHook | StopLossNanTrainingHook. |
| EMAMomentumUpdateHook | Exponential moving average (EMA) momentum update hook for self-supervised methods. |
| CompressionHook | CompressionHook. |
| AccuracyAwareRunner | AccuracyAwareRunner for NNCF task. |
| TwoCropTransformHook | TwoCropTransformHook with every specific interval. |
| MemCacheHook | Memory cache hook for logging and freezing MemCacheHandler. |
| LossDynamicsTrackingHook | Tracking loss dynamics during training and exporting them to Datumaro dataset format. |
- class otx.algorithms.common.adapters.mmcv.AccuracyAwareRunner(*args, nncf_config, nncf_meta=None, **kwargs)[source]#
Bases:
EpochRunnerWithCancel
AccuracyAwareRunner for NNCF task.
An mmcv training runner to be used with NNCF-based accuracy-aware training. Inherited from the standard EpochBasedRunner with an overridden “run” method. This runner does not use the “workflow” and “max_epochs” parameters used by EpochBasedRunner, since training is controlled by NNCF’s AdaptiveCompressionTrainingLoop, which schedules the compression-aware training loop using the parameters specified in the “accuracy_aware_training” section of the NNCF config.
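For orientation, a minimal sketch of what such a config fragment can look like; the key names follow NNCF's accuracy-aware training schema and the values are purely illustrative assumptions, not OTX defaults:

```python
# Hypothetical NNCF config fragment (illustrative values only).
nncf_config = {
    "compression": {"algorithm": "quantization"},
    "accuracy_aware_training": {
        "mode": "adaptive_compression_level",
        "params": {
            # Tolerated drop of the target metric, in percent.
            "maximal_relative_accuracy_degradation": 1.0,
            "initial_training_phase_epochs": 5,
            "patience_epochs": 10,
        },
    },
}
```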
- class otx.algorithms.common.adapters.mmcv.CancelTrainingHook(interval: int = 5)[source]#
Bases:
Hook
CancelTrainingHook for Training Stopping.
Periodically check whether a stop signal has been sent to the runner during model training.
Every ‘interval’ iterations, the work_dir for the runner is checked to see if a file ‘.stop_training’ is present. If it is, training is stopped.
- Parameters:
interval – Period for checking for stop signal, given in iterations.
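For example, assuming the hook is registered on a running training job, cancellation can be requested from outside the process by creating the stop file in the runner's work_dir (a sketch; `work_dir` here stands for whatever directory the runner was given):

```python
from pathlib import Path

# The hook polls work_dir every `interval` iterations and stops training
# as soon as it finds this file.
Path(work_dir, ".stop_training").touch()
```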
- class otx.algorithms.common.adapters.mmcv.CheckpointHookWithValResults(interval=-1, by_epoch=True, save_optimizer=True, out_dir=None, max_keep_ckpts=-1, sync_buffer=False, **kwargs)[source]#
Bases:
Hook
Save checkpoints periodically.
- Parameters:
interval (int) – The saving period. If by_epoch=True, interval indicates epochs, otherwise it indicates iterations. Default: -1, which means “never”.
by_epoch (bool) – Whether to save checkpoints by epoch or by iteration. Default: True.
save_optimizer (bool) – Whether to save the optimizer state_dict in the checkpoint. It is usually used for resuming experiments. Default: True.
out_dir (str, optional) – The directory to save checkpoints. If not specified, runner.work_dir will be used by default.
max_keep_ckpts (int, optional) – The maximum number of checkpoints to keep. In some cases we want only the latest few checkpoints and would like to delete old ones to save disk space. Default: -1, which means unlimited.
sync_buffer (bool) – Whether to synchronize buffers across different GPUs. Default: False.
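A minimal registration sketch for CheckpointHookWithValResults using the parameters above (direct instantiation on an existing mmcv runner; in OTX this hook is normally wired up through the recipe configs):

```python
# Save a checkpoint every epoch, keep only the three most recent ones,
# and skip the optimizer state to reduce checkpoint size.
ckpt_hook = CheckpointHookWithValResults(
    interval=1,
    save_optimizer=False,
    max_keep_ckpts=3,
)
runner.register_hook(ckpt_hook)
```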
- class otx.algorithms.common.adapters.mmcv.CompressionHook(compression_ctrl=None)[source]#
Bases:
Hook
CompressionHook.
- class otx.algorithms.common.adapters.mmcv.CustomEvalHook(*args, ema_eval_start_epoch=10, **kwargs)[source]#
Bases:
EvalHook
Custom Evaluation hook for the OTX.
- Parameters:
dataloader (DataLoader) – A PyTorch dataloader.
interval (int) – Evaluation interval (by epochs). Default: 1.
- class otx.algorithms.common.adapters.mmcv.EMAMomentumUpdateHook(end_momentum: float = 1.0, update_interval: int = 1, by_epoch: bool = False)[source]#
Bases:
Hook
Exponential moving average (EMA) momentum update hook for self-supervised methods.
- This hook includes momentum adjustment in self-supervised methods following:
m = 1 - (1 - m_0) * (cos(pi * k / K) + 1) / 2, where k is the current step and K is the total number of steps (a worked example follows the parameter list below).
- Parameters:
end_momentum – The final momentum coefficient for the target network, defaults to 1.
update_interval – Interval to update new momentum, defaults to 1.
by_epoch – Whether updating momentum by epoch or not, defaults to False.
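A worked example of the EMAMomentumUpdateHook schedule above, assuming an initial momentum m_0 = 0.996 (a typical BYOL-style value, not necessarily the OTX default):

```python
import math

def ema_momentum(k: int, K: int, m_0: float = 0.996) -> float:
    """m = 1 - (1 - m_0) * (cos(pi * k / K) + 1) / 2."""
    return 1 - (1 - m_0) * (math.cos(math.pi * k / K) + 1) / 2

print(ema_momentum(0, 1000))     # 0.996 at the start of training
print(ema_momentum(500, 1000))   # 0.998 halfway through
print(ema_momentum(1000, 1000))  # 1.0 at the end of training
```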
- class otx.algorithms.common.adapters.mmcv.EarlyStoppingHook(interval: int, metric: str = 'bbox_mAP', rule: str | None = None, patience: int = 5, iteration_patience: int = 500, min_delta_ratio: float = 0.0)[source]#
Bases:
Hook
Cancel training when a metric has stopped improving.
Early Stopping hook monitors a metric quantity and if no improvement is seen for a ‘patience’ number of epochs, the training is cancelled.
- Parameters:
interval – the interval at which the early-stopping check is performed. It should match the evaluation interval, i.e. the interval variable set in the evaluation config (a configuration sketch follows this class entry).
metric – the metric name to be monitored.
rule – greater or less. In less mode, training will stop when the metric has stopped decreasing and in greater mode it will stop when the metric has stopped increasing.
patience – Number of epochs with no improvement after which the training will be cancelled. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only cancel the training after the 3rd epoch if the metric still hasn’t improved by then.
iteration_patience – Number of iterations that must be trained after the last improvement before training stops. Similar to patience, but training continues if the number of iterations since the last improvement is lower than iteration_patience. This makes sure the model is trained for enough iterations after the last improvement before stopping.
min_delta_ratio – Minimal ratio value to check the best score. If the difference between the current and best score is smaller than (current_score * (1 - min_delta_ratio)), the best score will not be updated.
- after_train_epoch(runner: BaseRunner)[source]#
Called after every training epoch to evaluate the results.
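A hedged configuration sketch for EarlyStoppingHook, assuming it is registered in mmcv's HOOKS registry and added through the standard custom_hooks mechanism; the metric name and values are illustrative:

```python
# Early stopping and evaluation must use the same interval.
evaluation = dict(interval=1, metric="accuracy")
custom_hooks = [
    dict(
        type="EarlyStoppingHook",
        interval=1,               # matches the evaluation interval above
        metric="accuracy",
        patience=5,
        iteration_patience=500,
    )
]
```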
- class otx.algorithms.common.adapters.mmcv.EnsureCorrectBestCheckpointHook[source]#
Bases:
Hook
EnsureCorrectBestCheckpointHook.
This hook makes sure that the ‘best_mAP’ checkpoint points properly to the best model, even if the best model is created in the last epoch.
- class otx.algorithms.common.adapters.mmcv.EpochRunnerWithCancel(*args, **kwargs)[source]#
Bases:
EpochBasedRunner
Simple modification to EpochBasedRunner to allow cancelling the training during an epoch.
A stopping hook should set the runner.should_stop flag to True if stopping is required.
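For illustration, a minimal sketch of such a stopping hook (a hypothetical example; CancelTrainingHook above is the built-in implementation of this pattern):

```python
from mmcv.runner import HOOKS, Hook

@HOOKS.register_module()
class StopAfterNEpochsHook(Hook):
    """Hypothetical hook that requests cancellation after `max_epochs` epochs."""

    def __init__(self, max_epochs: int = 3):
        self.max_epochs = max_epochs

    def after_train_epoch(self, runner):
        if runner.epoch + 1 >= self.max_epochs:
            # EpochRunnerWithCancel checks this flag and stops the epoch loop.
            runner.should_stop = True
```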
- class otx.algorithms.common.adapters.mmcv.Fp16SAMOptimizerHook(rho=0.05, start_epoch=1, **kwargs)[source]#
Bases:
Fp16OptimizerHook
Sharpness-aware Minimization optimizer hook.
Implemented as OptimizerHook for MMCV Runners - Paper ref: https://arxiv.org/abs/2010.01412 - code ref: davda54/sam
- after_train_iter(runner)[source]#
Perform SAM optimization.
1. compute the current loss (done in model.train_step())
2. compute the current gradient
3. move the parameters to the approximate local maximum: w + e(w) = w + rho * norm_grad
4. compute the maximum loss
5. compute the SAM gradient on the maximum loss
6. restore the parameters to their original values
7. update the parameters using the SAM gradient
Assuming model.current_batch has been set in model.train_step().
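The two-step update listed above can be sketched in plain PyTorch as follows; this is a simplification under the assumption of a single parameter group, ignoring the FP16 loss scaling that this hook adds (`loss_fn` is a hypothetical closure computing the loss on the current batch):

```python
import torch

def sam_update(model, loss_fn, optimizer, rho=0.05):
    # 1-2) current loss and gradient
    loss_fn(model).backward()

    # 3) move params to the approximate local maximum: w + e(w) = w + rho * grad / ||grad||
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm(p=2) for p in params]), p=2)
    perturbations = []
    with torch.no_grad():
        for p in params:
            e_w = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e_w)
            perturbations.append(e_w)
    optimizer.zero_grad()

    # 4-5) loss and SAM gradient at the perturbed point
    loss_fn(model).backward()

    # 6) restore the original parameters
    with torch.no_grad():
        for p, e_w in zip(params, perturbations):
            p.sub_(e_w)

    # 7) update the parameters with the SAM gradient
    optimizer.step()
    optimizer.zero_grad()
```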
- class otx.algorithms.common.adapters.mmcv.IBLossHook(dst_classes)[source]#
Bases:
Hook
Hook for IB loss.
It passes the number of samples per class and the current epoch to the IB loss class.
Initialize the IBLossHook.
- Parameters:
dst_classes (list) – A list of classes including new_classes to be newly learned
- class otx.algorithms.common.adapters.mmcv.IterBasedRunnerWithCancel(*args, **kwargs)[source]#
Bases:
IterBasedRunner
Runner With Cancel for early-stopping (Iter based).
Simple modification to IterBasedRunner to allow cancelling the training. The cancel training hook should set the runner.should_stop flag to True if stopping is required.
# TODO: Implement cancelling of training via keyboard interrupt signal, instead of should_stop
- class otx.algorithms.common.adapters.mmcv.LossDynamicsTrackingHook(output_path: str, alpha: float = 0.001)[source]#
Bases:
Hook
Tracking loss dynamics during training and exporting them to Datumaro dataset format.
- after_train_iter(runner)[source]#
Accumulate training loss dynamics.
It should be here because it needs to access the training iteration.
- class otx.algorithms.common.adapters.mmcv.MemCacheHook[source]#
Bases:
Hook
Memory cache hook for logging and freezing MemCacheHandler.
- class otx.algorithms.common.adapters.mmcv.NoBiasDecayHook[source]#
Bases:
Hook
Hook for No Bias Decay Method (Bag of Tricks for Image Classification).
This hook divides the model’s weights and biases into 3 parameter groups: [weight with decay, weight without decay, bias without decay].
- class otx.algorithms.common.adapters.mmcv.OTXLoggerHook(curves: Dict[Any, Curve] | None = None, interval: int = 10, ignore_last: bool = True, reset_flag: bool = True, by_epoch: bool = True)[source]#
Bases:
LoggerHook
OTXLoggerHook for Logging.
- class otx.algorithms.common.adapters.mmcv.OTXProgressHook(time_monitor: TimeMonitorCallback, verbose: bool = False)[source]#
Bases:
Hook
OTXProgressHook for getting progress.
- property progress#
Getting Progress from time monitor.
- class otx.algorithms.common.adapters.mmcv.ReduceLROnPlateauLrUpdaterHook(min_lr: float, interval: int, metric: str = 'bbox_mAP', rule: str | None = None, factor: float = 0.1, patience: int = 3, iteration_patience: int = 300, **kwargs)[source]#
Bases:
LrUpdaterHook
Reduce learning rate when a metric has stopped improving.
Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler reads a metric quantity and, if no improvement is seen for a ‘patience’ number of epochs, the learning rate is reduced.
- Parameters:
min_lr – minimum learning rate. The lower bound of the desired learning rate.
interval – the interval at which the hook is checked. It should match the evaluation interval, i.e. the interval variable set in the evaluation config.
metric – the metric name to be monitored
rule – greater or less. In less mode, learning rate will be dropped if the metric has stopped decreasing and in greater mode it will be dropped when the metric has stopped increasing.
patience – Number of epochs with no improvement after which the learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only drop the LR after the 3rd epoch if the metric still hasn’t improved by then.
iteration_patience – Number of iterations that must be trained after the last improvement before the LR drops. Similar to patience, but the LR remains the same if the number of iterations since the last improvement is lower than iteration_patience. This makes sure the model is trained for enough iterations after the last improvement before dropping the LR.
factor – Factor by which the learning rate is multiplied, i.e. new_lr = current_lr * factor (see the small sketch after this class entry).
- after_each_n_epochs(runner: BaseRunner, interval: int) → bool [source]#
Check whether current epoch is a next epoch after multiples of interval.
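A small sketch of the arithmetic implied by the factor and min_lr parameters of ReduceLROnPlateauLrUpdaterHook (the hook itself applies this through mmcv's LrUpdaterHook machinery):

```python
def reduced_lr(current_lr: float, factor: float = 0.1, min_lr: float = 1e-6) -> float:
    # new_lr = current_lr * factor, clamped from below by min_lr
    return max(current_lr * factor, min_lr)

print(reduced_lr(0.01))   # 0.001
print(reduced_lr(1e-6))   # 1e-06 (already at the lower bound)
```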
- class otx.algorithms.common.adapters.mmcv.SAMOptimizerHook(rho=0.05, start_epoch=1, **kwargs)[source]#
Bases:
OptimizerHook
Sharpness-aware Minimization optimizer hook.
Implemented as OptimizerHook for MMCV Runners - Paper ref: https://arxiv.org/abs/2010.01412 - code ref: davda54/sam
- after_train_iter(runner)[source]#
Perform SAM optimization.
1. compute the current loss (done in model.train_step())
2. compute the current gradient
3. move the parameters to the approximate local maximum: w + e(w) = w + rho * norm_grad
4. compute the maximum loss
5. compute the SAM gradient on the maximum loss
6. restore the parameters to their original values
7. update the parameters using the SAM gradient
Assuming model.current_batch has been set in model.train_step().
- class otx.algorithms.common.adapters.mmcv.SemiSLClsHook(total_steps=0, unlabeled_warmup=True)[source]#
Bases:
Hook
Hook for SemiSL for classification.
- This hook includes an unlabeled warm-up loss coefficient (default: True); see the worked example after the parameter list below:
unlabeled_coef = 0.5 - cos(min(pi, 2 * pi * k / K)) / 2, where k is the current step and K is the total number of steps.
Also, this hook adds semi-sl-related data to the log (unlabeled_coef, pseudo_label)
- Parameters:
total_steps (int) – Total number of training steps (iterations). The coefficient is raised from 0 to 1 during the first half of total_steps. Default: 0, which means runner.max_iters is used.
unlabeled_warmup (boolean) – Enable the unlabeled warm-up loss coefficient. If False, Semi-SL uses 1 as the unlabeled loss coefficient.
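A worked example of the SemiSLClsHook warm-up coefficient described above:

```python
import math

def unlabeled_coef(k: int, K: int) -> float:
    """unlabeled_coef = 0.5 - cos(min(pi, 2 * pi * k / K)) / 2."""
    return 0.5 - math.cos(min(math.pi, 2 * math.pi * k / K)) / 2

print(unlabeled_coef(0, 1000))    # 0.0 at the start
print(unlabeled_coef(250, 1000))  # 0.5 a quarter of the way through
print(unlabeled_coef(500, 1000))  # 1.0 once half of total_steps is reached
```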
- class otx.algorithms.common.adapters.mmcv.StopLossNanTrainingHook[source]#
Bases:
Hook
StopLossNanTrainingHook.
- class otx.algorithms.common.adapters.mmcv.TwoCropTransformHook(interval: int = 1, by_epoch: bool = False)[source]#
Bases:
Hook
TwoCropTransformHook with every specific interval.
This hook decides whether using single pipeline or two pipelines implemented in TwoCropTransform for the current iteration.
- Parameters:
interval – Interval at which both pipelines of TwoCropTransform are applied; otherwise only the single (first) pipeline is used. Defaults to 1.
by_epoch – Whether the interval is counted by epoch rather than by iteration. Defaults to False.
- otx.algorithms.common.adapters.mmcv.multi_scale_deformable_attn_pytorch(value: Tensor, value_spatial_shapes: Tensor, sampling_locations: Tensor, attention_weights: Tensor) → Tensor [source]#
Custom patch for multi_scale_deformable_attn_pytorch function.
The original implementation in mmcv.ops uses torch.nn.functional.grid_sample, which raises errors during inference with an OpenVINO-exported model. Therefore, this function replaces grid_sample with _custom_grid_sample.
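A shape-level usage sketch; the tensor layout below is an assumption based on the upstream mmcv implementation of multi-scale deformable attention and is not spelled out on this page:

```python
import torch

bs, num_heads, embed_dims = 2, 8, 256
num_levels, num_points, num_queries = 4, 4, 100
value_spatial_shapes = torch.tensor([[64, 64], [32, 32], [16, 16], [8, 8]])
num_keys = int((value_spatial_shapes[:, 0] * value_spatial_shapes[:, 1]).sum())

value = torch.rand(bs, num_keys, num_heads, embed_dims // num_heads)
sampling_locations = torch.rand(bs, num_queries, num_heads, num_levels, num_points, 2)
attention_weights = torch.rand(bs, num_queries, num_heads, num_levels, num_points)

out = multi_scale_deformable_attn_pytorch(
    value, value_spatial_shapes, sampling_locations, attention_weights
)
print(out.shape)  # expected: (bs, num_queries, embed_dims)
```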