otx.algorithms.common.adapters.torch.utils#
Utils for modules using torch.
Functions

- Check whether a model comes from the timm module.
- Convert BatchNorm layers to SyncBatchNorm layers.
- Convert the SyncBatchNorm layers in a model back to regular BatchNorm layers.

Classes

- Algorithm class to find the optimal batch size.
- class otx.algorithms.common.adapters.torch.utils.BsSearchAlgo(train_func: Callable[[int], None], default_bs: int, max_bs: int)[source]#
Bases:
object
Algorithm class to find optimal batch size.
- Parameters:
  - train_func (Callable[[int], None]) – Training function that takes a batch size as its only argument.
  - default_bs (int) – Default batch size to start the search from.
  - max_bs (int) – Maximum batch size to try.
- auto_decrease_batch_size() int [source]#
Decrease the batch size if the default batch size doesn't fit on the current GPU device.
- Returns:
A proper batch size, possibly decreased from the default value because the default doesn't fit.
- Return type:
int
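The "decrease until it fits" idea behind this method can be pictured with a short sketch. This is an illustrative re-implementation only, not the actual OTX internals: `train_once` and the simulated out-of-memory error below are hypothetical stand-ins.

```python
# Illustrative sketch: halve the batch size until a short training trial
# no longer raises an (assumed) out-of-memory RuntimeError.
def auto_decrease_batch_size(train_once, default_bs: int) -> int:
    """Return the first batch size <= default_bs that fits on the device."""
    bs = default_bs
    while True:
        try:
            train_once(bs)  # run a short training trial at this batch size
            return bs
        except RuntimeError:  # assumed to signal CUDA out-of-memory
            if bs <= 2:
                raise RuntimeError("Even batch size 2 does not fit on the GPU.")
            bs //= 2


# Toy trial: pretend the GPU only fits batch sizes up to 12.
def fake_train(bs: int) -> None:
    if bs > 12:
        raise RuntimeError("CUDA out of memory (simulated)")

print(auto_decrease_batch_size(fake_train, 64))  # 64 -> 32 -> 16 -> 8
```

As documented above, the search gives up with a RuntimeError once batch size 2 itself fails.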
- find_big_enough_batch_size(drop_last: bool = False) int [source]#
Find a big enough batch size.
This function finds a big enough batch size by running training with various batch sizes. It estimates a batch size using an equation derived from the training history. The wording "big enough" is used because the algorithm looks not for the maximum batch size but for a big enough value whose GPU memory usage falls between a lower and an upper bound.
- Parameters:
drop_last (bool) – Whether to drop the last incomplete batch.
- Raises:
RuntimeError – Raised if training can't run even with batch size 2.
- Returns:
Big enough batch size.
- Return type:
int
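The estimation step described above can be sketched as follows. This is a minimal illustration only: the linear memory model, the fake trial history, and the target memory value are assumptions for the example, not OTX's actual equation.

```python
# Illustrative sketch: estimate a "big enough" batch size from trial
# history by fitting a line through (batch size, GPU memory in MB)
# points and solving for a target memory budget.
def estimate_big_enough_bs(trials, target_mb: float = 8000.0) -> int:
    """Linearly extrapolate memory use over batch size from the two
    most recent trials, then solve for the batch size hitting target_mb."""
    (bs1, m1), (bs2, m2) = trials[-2:]
    slope = (m2 - m1) / (bs2 - bs1)        # MB of memory per extra sample
    intercept = m1 - slope * bs1           # fixed memory overhead in MB
    # Solve slope * bs + intercept = target_mb for bs.
    return int((target_mb - intercept) / slope)


# Fake trial history: (batch size, measured GPU memory in MB).
history = [(16, 2600), (32, 4200)]
print(estimate_big_enough_bs(history))  # -> 70
```

In the real algorithm the estimate is then validated by another training trial, so the final batch size lands between the memory lower and upper bound rather than at the exact extrapolated point.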
- otx.algorithms.common.adapters.torch.utils.convert_sync_batchnorm(model: Module)[source]#
Convert BatchNorm layers to SyncBatchNorm layers.
- Parameters:
model (Module) – Model containing BatchNorm layers.
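In PyTorch itself this kind of conversion is provided by `torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)`. The recursive-replacement pattern behind such a conversion can be sketched without torch; the tiny `Module`/`BatchNorm` classes below are stand-ins for illustration only.

```python
# Illustrative sketch of the recursive-replacement pattern behind a
# BatchNorm -> SyncBatchNorm conversion, using toy stand-in classes.
class BatchNorm:
    def __init__(self, num_features):
        self.num_features = num_features

class SyncBatchNorm(BatchNorm):
    pass

class Module:
    def __init__(self, **children):
        self.children = children

def convert_sync_batchnorm(module):
    """Recursively replace every BatchNorm child with a SyncBatchNorm."""
    for name, child in module.children.items():
        if isinstance(child, BatchNorm) and not isinstance(child, SyncBatchNorm):
            # Carry over the configuration (in torch, also the running stats).
            module.children[name] = SyncBatchNorm(child.num_features)
        elif isinstance(child, Module):
            convert_sync_batchnorm(child)  # recurse into submodules
    return module


model = Module(bn=BatchNorm(64), block=Module(bn=BatchNorm(128)))
convert_sync_batchnorm(model)
print(type(model.children["bn"]).__name__)  # SyncBatchNorm
```

The conversion is done in place over the module tree, which is why the documented function takes the model as its only argument.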