otx.core.model.base#
Class definition for base model entity used in OTX.
Classes
- OTXModel – Base class for the models used in OTX.
- OVModel – Base class for the OpenVINO model.
- class otx.core.model.base.OTXModel(label_info: LabelInfoTypes, input_size: tuple[int, int] | None = None, optimizer: OptimizerCallable = <function _default_optimizer_callable>, scheduler: LRSchedulerCallable | LRSchedulerListCallable = <function _default_scheduler_callable>, metric: MetricCallable = <function _null_metric_callable>, torch_compile: bool = False, tile_config: TileConfig = TileConfig(enable_tiler=False, enable_adaptive_tiling=True, tile_size=(400, 400), overlap=0.2, iou_threshold=0.45, max_num_instances=1500, object_tile_ratio=0.03, sampling_ratio=1.0, with_full_img=False), train_type: Literal[OTXTrainType.SUPERVISED, OTXTrainType.SEMI_SUPERVISED] = OTXTrainType.SUPERVISED)[source]#
Bases: LightningModule, Generic[T_OTXBatchDataEntity, T_OTXBatchPredEntity]

Base class for the models used in OTX.
- Parameters:
num_classes – Number of classes this model can predict.
- explain_mode#
If True, self.predict_step() will produce an XAI output as well (see the sketch after this attribute list).
- input_size_multiplier#
Multiplier for the input size a model requires. If input_size is not a multiple of this value, an error is raised.
- Type:
int
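A minimal usage sketch for explain_mode; `model`, `trainer`, and `datamodule` are assumed to be a constructed OTXModel subclass, a Lightning Trainer, and an OTXDataModule (none of them defined here):

```python
model.explain_mode = True  # predict_step() now also returns XAI outputs
predictions = trainer.predict(model, datamodule=datamodule)
```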
- configure_optimizers() OptimizerLRScheduler [source]#
Configure an optimizer and learning-rate schedulers.
Configure an optimizer and learning-rate schedulers from the optimizer and scheduler (or scheduler list) callables given in the constructor. Generally, there are two LR schedulers: a linear warmup scheduler and the main scheduler that takes effect after the warmup period.
- Returns:
Two lists. The first contains the optimizer; the second contains LR scheduler configurations in dictionary format.
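A sketch of this return structure using placeholder parameters and standard PyTorch schedulers (not the exact OTX implementation):

```python
import torch

# Placeholder parameters/optimizer; in OTX these come from the constructor callables.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.SGD(params, lr=0.01)
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, total_iters=100)
main = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

optimizers = [optimizer]  # first returned list: the optimizer
lr_scheduler_configs = [  # second returned list: scheduler config dicts
    {"scheduler": warmup, "interval": "step"},
    {"scheduler": main, "interval": "epoch"},
]
```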
- export(output_dir: Path, base_name: str, export_format: OTXExportFormatType, precision: OTXPrecisionType = OTXPrecisionType.FP32, to_exportable_code: bool = False) Path [source]#
Export this model to the specified output directory.
- Parameters:
output_dir (Path) – directory for saving the exported model
base_name (str) – base name for the exported model file. The extension is defined by the target export format
export_format (OTXExportFormatType) – format of the output model
precision (OTXPrecisionType) – precision of the output model
to_exportable_code (bool) – flag to export model in exportable code with demo package
- Returns:
path to the exported model.
- Return type:
Path
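A hedged usage sketch; `MyOTXModel` is a hypothetical subclass, and the import paths are assumptions based on the type names above:

```python
from pathlib import Path

from otx.core.types.export import OTXExportFormatType   # assumed import path
from otx.core.types.precision import OTXPrecisionType   # assumed import path

model = MyOTXModel(label_info=3)  # hypothetical OTXModel subclass
exported_path = model.export(
    output_dir=Path("./exported"),
    base_name="model",
    export_format=OTXExportFormatType.OPENVINO,  # enum member is an assumption
    precision=OTXPrecisionType.FP32,
)
```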
- forward(inputs: T_OTXBatchDataEntity) T_OTXBatchPredEntity | OTXBatchLossEntity [source]#
Model forward function.
- forward_explain(inputs: T_OTXBatchDataEntity) T_OTXBatchPredEntity [source]#
Model forward explain function.
- forward_for_tracing(*args, **kwargs) Tensor | dict[str, Tensor] [source]#
Model forward function used for model tracing during model export.
- forward_tiles(inputs: OTXTileBatchDataEntity[T_OTXBatchDataEntity]) T_OTXBatchPredEntity | OTXBatchLossEntity [source]#
Model forward function for tile task.
- static get_ckpt_label_info_v1(ckpt: dict) LabelInfo [source]#
Generate label info from OTX v1 checkpoint.
- get_dummy_input(batch_size: int = 1) OTXBatchDataEntity[Any] [source]#
Generates a dummy input suitable for running forward().
- Parameters:
batch_size (int, optional) – number of elements in a dummy input sequence. Defaults to 1.
- Returns:
An entity containing randomly generated inference data.
- Return type:
OTXBatchDataEntity[Any]
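For example, the dummy input can be used to smoke-test the forward pass (a sketch; `model` is assumed to be a constructed OTXModel subclass):

```python
dummy_batch = model.get_dummy_input(batch_size=2)
outputs = model(dummy_batch)  # forward() accepts the generated entity directly
```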
- load_from_otx_v1_ckpt(ckpt: dict[str, Any]) dict [source]#
Load a checkpoint from a previous OTX v1.x release and convert it to the OTX 2.0 format.
- load_state_dict(ckpt: dict[str, Any], *args, **kwargs) None [source]#
Load state dictionary from checkpoint state dictionary.
It can load checkpoints produced by OTX v1.x, as well as checkpoints saved for fine-tuning or resuming training.
If the checkpoint’s label_info and OTXLitModule’s label_info differ, load_state_pre_hook for smart weight loading will be registered.
- load_state_dict_incrementally(ckpt: dict[str, Any], *args, **kwargs) None [source]#
Load state dict incrementally.
- load_state_dict_pre_hook(state_dict: dict[str, Tensor], prefix: str, *args, **kwargs) None [source]#
Modify input state_dict according to class name matching before weight loading.
- lr_scheduler_step(scheduler: LRSchedulerTypeUnion, metric: Tensor) None [source]#
Prioritizes the warmup LR scheduler over other LR schedulers during the warmup period.
Stepping of other LR schedulers is ignored while the warmup scheduler is active.
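A conceptual sketch of this prioritization rule (not the actual OTX implementation; the attribute names follow torch.optim.lr_scheduler.LinearLR):

```python
def lr_scheduler_step(scheduler, warmup_scheduler, metric=None):
    # While the warmup scheduler is still active, skip every other scheduler.
    warmup_active = warmup_scheduler.last_epoch < warmup_scheduler.total_iters
    if warmup_active and scheduler is not warmup_scheduler:
        return
    if metric is None:
        scheduler.step()
    else:
        scheduler.step(metric)
```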
- static map_class_names(src_classes: list[str], dst_classes: list[str]) list[int] [source]#
Computes the src-to-dst index mapping.
The mapping satisfies src2dst[src_idx] = dst_idx according to class-name matching, with -1 for non-matched classes, and len(src2dst) == len(src_classes).
Example: src_classes = ['person', 'car', 'tree'] and dst_classes = ['tree', 'person', 'sky', 'ball'] returns src2dst = [1, -1, 0].
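A self-contained sketch of the documented mapping rule (the real static method may differ in implementation details):

```python
def map_class_names(src_classes: list[str], dst_classes: list[str]) -> list[int]:
    # Map each source class to its index in dst_classes, -1 if unmatched.
    dst_index = {name: i for i, name in enumerate(dst_classes)}
    return [dst_index.get(name, -1) for name in src_classes]

assert map_class_names(
    ["person", "car", "tree"], ["tree", "person", "sky", "ball"]
) == [1, -1, 0]
```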
- optimize(output_dir: Path, data_module: OTXDataModule, ptq_config: dict[str, Any] | None = None) Path [source]#
Runs post-training quantization (PTQ) of the model with NNCF on the given data. Works only for OpenVINO models.
PTQ performs INT8 quantization of the input model, so the resulting model uses mixed precision (some operations, however, remain in FP32).
- Parameters:
output_dir (Path) – working directory to save the optimized model.
data_module (OTXDataModule) – dataset for calibration of quantized layers.
- Returns:
path to the resulting optimized OpenVINO model.
- Return type:
Path
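A hedged usage sketch; `ov_model` and `datamodule` are placeholder instances of OVModel and OTXDataModule, not defined here:

```python
from pathlib import Path

optimized_path = ov_model.optimize(
    output_dir=Path("./optimized"),
    data_module=datamodule,  # used to calibrate the quantized layers
)
```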
- patch_optimizer_and_scheduler_for_hpo() None [source]#
Patch optimizer and scheduler for hyperparameter optimization and adaptive batch size.
This function works in place, changing internal state (optimizer_callable and scheduler_callable). Both are made picklable, and optimizer_callable is additionally changed so that its hyperparameters can be retrieved.
- predict_step(batch: T_OTXBatchDataEntity, batch_idx: int, dataloader_idx: int = 0) T_OTXBatchPredEntity [source]#
Step function called during PyTorch Lightning Trainer’s predict.
- register_load_state_dict_pre_hook(model_classes: list[str], ckpt_classes: list[str]) None [source]#
Register load_state_dict_pre_hook.
- setup(stage: str) None [source]#
Lightning hook that is called at the beginning of fit (train + validate), validate, test, or predict.
This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.
- Parameters:
stage – Either “fit”, “validate”, “test”, or “predict”.
- test_step(batch: T_OTXBatchDataEntity, batch_idx: int) None [source]#
Perform a single test step on a batch of data from the test set.
- Parameters:
batch – A batch of data (a tuple) containing the input tensor of images and target labels.
batch_idx – The index of the current batch.
- training_step(batch: T_OTXBatchDataEntity, batch_idx: int) Tensor | None [source]#
Step for model training.
- validation_step(batch: T_OTXBatchDataEntity, batch_idx: int) None [source]#
Perform a single validation step on a batch of data from the validation set.
- Parameters:
batch – A batch of data (a tuple) containing the input tensor of images and target labels.
batch_idx – The index of the current batch.
- property metric: Metric | MetricCollection#
Metric module for this OTX model.
- property num_classes: int#
Returns the model’s number of classes. Can be overridden at the model level.
- property tile_config: TileConfig#
Get tiling configurations.
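A sketch of adjusting the tiling configuration; the field names follow the TileConfig defaults shown in the OTXModel constructor signature, while the import path and the existence of a setter are assumptions:

```python
from otx.core.config.data import TileConfig  # assumed import path

tile_config = TileConfig(
    enable_tiler=True,
    tile_size=(512, 512),
    overlap=0.2,
)
model.tile_config = tile_config  # assumes the property also provides a setter
```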
- class otx.core.model.base.OVModel(model_name: str, model_type: str, async_inference: bool = True, force_cpu: bool = True, max_num_requests: int | None = None, use_throughput_mode: bool = True, model_api_configuration: dict[str, Any] | None = None, metric: MetricCallable = <function _null_metric_callable>, **kwargs)[source]#
Bases: OTXModel, Generic[T_OTXBatchDataEntity, T_OTXBatchPredEntity]

Base class for the OpenVINO model.
This is a base class representing the interface for interacting with OpenVINO Intermediate Representation (IR) models. OVModel can create and validate an OpenVINO IR model directly from a local path or from the OpenVINO Open Model Zoo (OMZ) repository (only PyTorch models are supported). OVModel supports synchronous as well as asynchronous inference.
- Parameters:
num_classes – Number of classes this model can predict.
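A hedged construction sketch; the model path and model type string are illustrative placeholders, not values prescribed by this API:

```python
ov_model = OVModel(
    model_name="exported_model.xml",  # local IR path or an OMZ model name
    model_type="Classification",      # ModelAPI model type (placeholder)
    async_inference=True,
)
```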
- export(output_dir: Path, base_name: str, export_format: OTXExportFormatType, precision: OTXPrecisionType = OTXPrecisionType.FP32, to_exportable_code: bool = True) Path [source]#
Export this model to the specified output directory.
- Parameters:
output_dir (Path) – directory for saving the exported model
base_name (str) – base name for the exported model file. The extension is defined by the target export format
export_format (OTXExportFormatType) – format of the output model
precision (OTXPrecisionType) – precision of the output model
to_exportable_code (bool) – whether to generate exportable code with a demo package. The OpenVINO model supports only the exportable code option.
- Returns:
path to the exported model.
- Return type:
Path
- forward_explain(inputs: T_OTXBatchDataEntity) T_OTXBatchPredEntity [source]#
Model forward explain function.
- get_dummy_input(batch_size: int = 1) OTXBatchDataEntity [source]#
Returns a dummy input for the base OV model.