otx.core.schedulers

Custom schedulers for OTX 2.0.

Classes

LinearWarmupScheduler(optimizer[, ...])

Linear Warmup scheduler.

LinearWarmupSchedulerCallable(...[, ...])

Callable that creates the given main LR scheduler and a LinearWarmupScheduler at the same time.

SchedulerCallableSupportHPO(scheduler_cls, ...)

LR scheduler callable that supports the OTX hyper-parameter optimization (HPO) algorithm.

class otx.core.schedulers.LinearWarmupScheduler(optimizer: Optimizer, num_warmup_steps: int = 1000, interval: Literal['step', 'epoch'] = 'step')[source]

Bases: LambdaLR

Linear Warmup scheduler.

Parameters:
  • num_warmup_steps – The learning rate is linearly increased over this number of warmup steps.

  • interval – If “epoch”, the warmup period is counted in epochs; otherwise, it is counted in iteration steps.

step(epoch: int | None = None) → None[source]

Overrides step() to disable the warmup scheduler once num_warmup_steps have elapsed.

property activated: bool

True while the current step count is less than num_warmup_steps (i.e., warmup is still in progress).
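A minimal usage sketch; the toy model, optimizer, and step counts below are illustrative assumptions, not part of this API:

```python
import torch
from torch.optim import SGD
from otx.core.schedulers import LinearWarmupScheduler

model = torch.nn.Linear(10, 2)  # toy model, for illustration only
optimizer = SGD(model.parameters(), lr=0.1)

# Linearly increase the learning rate over the first 100 iteration steps.
warmup = LinearWarmupScheduler(optimizer, num_warmup_steps=100, interval="step")

for _ in range(200):
    optimizer.step()
    warmup.step()  # disabled automatically once warmup.activated is False
```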

class otx.core.schedulers.LinearWarmupSchedulerCallable(main_scheduler_callable: LRSchedulerCallable, num_warmup_steps: int = 0, warmup_interval: Literal['step', 'epoch'] = 'step', monitor: str | None = None)[source]

Bases: object

Callable that creates the given main LR scheduler and a LinearWarmupScheduler at the same time.

Parameters:
  • main_scheduler_callable – Callable to create a LR scheduler that will be mainly used.

  • num_warmup_steps – The learning rate is linearly increased over this number of warmup steps. If this value is less than or equal to zero, no LinearWarmupScheduler is created.

  • warmup_interval – If “epoch”, the warmup period is counted in epochs; otherwise, it is counted in iteration steps.

  • monitor – If given, override the main scheduler’s monitor attribute.

__call__(optimizer: Optimizer) → list[LRScheduler | ReduceLROnPlateau][source]

Create a list of LR schedulers for the given optimizer.
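A minimal sketch of pairing a main scheduler with a warmup phase; the cosine scheduler and all hyper-parameter values below are illustrative assumptions:

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR
from otx.core.schedulers import LinearWarmupSchedulerCallable

# Any LRSchedulerCallable (optimizer -> scheduler) can serve as the main scheduler factory.
scheduler_callable = LinearWarmupSchedulerCallable(
    main_scheduler_callable=lambda opt: CosineAnnealingLR(opt, T_max=90),
    num_warmup_steps=3,
    warmup_interval="epoch",
)

optimizer = SGD(torch.nn.Linear(10, 2).parameters(), lr=0.1)
schedulers = scheduler_callable(optimizer)  # the main scheduler plus a LinearWarmupScheduler
```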

class otx.core.schedulers.SchedulerCallableSupportHPO(scheduler_cls: type[LRScheduler] | str, scheduler_kwargs: dict[str, int | float | bool | str])[source]

Bases: object

LR scheduler callable that supports the OTX hyper-parameter optimization (HPO) algorithm.

It makes SchedulerCallable picklable and exposes its parameters. It is used for HPO and adaptive batch size.

Parameters:
  • scheduler_cls – LRScheduler class type or string class import path. See the examples for details.

  • scheduler_kwargs – Keyword arguments used for the initialization of the given scheduler_cls.

Examples

This example creates MobileNetV3ForMulticlassCls with a StepLR LR scheduler and custom configurations.

```python
from torch.optim.lr_scheduler import StepLR
from otx.algo.classification.mobilenet_v3_large import MobileNetV3ForMulticlassCls
from otx.core.schedulers import SchedulerCallableSupportHPO

model = MobileNetV3ForMulticlassCls(
    num_classes=3,
    scheduler=SchedulerCallableSupportHPO(
        scheduler_cls=StepLR,
        scheduler_kwargs={
            "step_size": 10,
            "gamma": 0.5,
        },
    ),
)
```

It can also be created from a string class import path:

```python
from otx.algo.classification.mobilenet_v3_large import MobileNetV3ForMulticlassCls
from otx.core.schedulers import SchedulerCallableSupportHPO

model = MobileNetV3ForMulticlassCls(
    num_classes=3,
    scheduler=SchedulerCallableSupportHPO(
        scheduler_cls="torch.optim.lr_scheduler.StepLR",
        scheduler_kwargs={
            "step_size": 10,
            "gamma": 0.5,
        },
    ),
)
```

__call__(optimizer: Optimizer) → LRScheduler[source]

Create a torch.optim LRScheduler instance for the given optimizer, using the stored scheduler_cls and scheduler_kwargs.
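A brief sketch of calling the instance on a concrete optimizer; the toy optimizer below is an illustrative assumption:

```python
import torch
from torch.optim import SGD
from otx.core.schedulers import SchedulerCallableSupportHPO

scheduler_callable = SchedulerCallableSupportHPO(
    scheduler_cls="torch.optim.lr_scheduler.StepLR",
    scheduler_kwargs={"step_size": 10, "gamma": 0.5},
)

optimizer = SGD(torch.nn.Linear(10, 2).parameters(), lr=0.1)
scheduler = scheduler_callable(optimizer)  # a StepLR bound to this optimizer
```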

classmethod from_callable(func: LRSchedulerCallable) → SchedulerCallableSupportHPO[source]

Create this class instance from an existing scheduler callable.
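A minimal sketch of wrapping an existing scheduler callable; the callable below is an illustrative assumption:

```python
from torch.optim.lr_scheduler import StepLR
from otx.core.schedulers import SchedulerCallableSupportHPO

def my_scheduler_callable(optimizer):
    # A plain LRSchedulerCallable: takes an optimizer, returns a scheduler.
    return StepLR(optimizer, step_size=10, gamma=0.5)

# Wrap it so HPO and adaptive batch size can pickle it and access its parameters.
hpo_callable = SchedulerCallableSupportHPO.from_callable(my_scheduler_callable)
```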