otx.algorithms.common.utils#

Collection of utilities for running common OTX algorithms.

Functions

embed_ir_model_data(xml_file, data_items)

Embed serialized data into an IR XML file.

get_cls_img_indices(labels, dataset)

Return image indices grouped by class.

get_old_new_img_indices(labels, new_classes, ...)

Return old and new image indices of a dataset.

load_template(path)

Load a model template from a path.

get_task_class(path)

Return the task class at the given import path.

get_arg_spec(fn[, depth])

Get the argument spec of a function.

get_image(results, cache_dir[, to_float32])

Load an image and cache it if it's a training video frame.

set_random_seed(seed[, logger, deterministic])

Set random seed.

append_dist_rank_suffix(file_name)

Append distributed training rank suffix to the file name.

read_py_config(filename)

Read a Python config file into a dict.

get_default_async_reqs_num()

Return a default number of infer requests for OpenVINO models.

is_xpu_available()

Check whether an XPU device is available.

is_hpu_available()

Check whether an HPU device is available.

cast_bf16_to_fp32(tensor)

Cast a bf16 tensor to fp32 before it is processed by numpy.

get_cfg_based_on_device(cfg_file_path)

Find a config file according to device.

Classes

TrainingProgressCallback(...)

TrainingProgressCallback class for time monitoring.

InferenceProgressCallback(num_test_steps, ...)

InferenceProgressCallback class for time monitoring.

OptimizationProgressCallback(...[, ...])

Progress callback used for optimization using NNCF.

UncopiableDefaultDict

defaultdict subclass that avoids deepcopy.

OTXOpenVinoDataLoader(dataset, inferencer[, ...])

DataLoader implementation for ClassificationOpenVINOTask.

class otx.algorithms.common.utils.InferenceProgressCallback(num_test_steps, update_progress_callback, **kwargs)[source]#

Bases: TimeMonitorCallback

InferenceProgressCallback class for time monitoring.

on_test_batch_end(batch=None, logs=None)[source]#

Callback function on testing batch ended.

class otx.algorithms.common.utils.OTXOpenVinoDataLoader(dataset: DatasetEntity, inferencer: Any, shuffle: bool = True)[source]#

Bases: object

DataLoader implementation for ClassificationOpenVINOTask.

class otx.algorithms.common.utils.OptimizationProgressCallback(update_progress_callback, loading_stage_progress_percentage: int = 5, initialization_stage_progress_percentage: int = 5, **kwargs)[source]#

Bases: TrainingProgressCallback

Progress callback used for optimization using NNCF.

There are three stages in the progress bar:
  • 0-5 %: the model is loaded

  • 5-10 %: the compressed model is initialized

  • 10-100 %: the compressed model is fine-tuned
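
The staged percentages above can be illustrated with a small helper that maps per-stage progress onto the overall progress bar. This is a hypothetical sketch for illustration only: `overall_progress` is not part of the OTX API, and the real callback tracks this internally; `loading_pct` and `init_pct` mirror the constructor's percentage arguments.

```python
def overall_progress(stage: str, stage_progress: float,
                     loading_pct: int = 5, init_pct: int = 5) -> float:
    """Map per-stage progress (0-100) to overall progress (0-100)."""
    if stage == "loading":
        # Loading fills the first loading_pct percent of the bar.
        return loading_pct * stage_progress / 100
    if stage == "initialization":
        # Initialization fills the next init_pct percent.
        return loading_pct + init_pct * stage_progress / 100
    # Fine-tuning spans the remaining range up to 100 %.
    train_start = loading_pct + init_pct
    return train_start + (100 - train_start) * stage_progress / 100
```

With the default percentages, finishing the loading stage reports 5 %, finishing initialization reports 10 %, and fine-tuning progress is rescaled into the 10-100 % range.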

on_initialization_end()[source]#

on_initialization_end callback for optimization using NNCF.

on_train_begin(logs=None)[source]#

Callback function on training begin.

on_train_end(logs=None)[source]#

Callback function on training ended.

class otx.algorithms.common.utils.TrainingProgressCallback(update_progress_callback, **kwargs)[source]#

Bases: TimeMonitorCallback

TrainingProgressCallback class for time monitoring.

on_epoch_end(epoch, logs=None)[source]#

Callback function on epoch ended.

on_train_batch_end(batch, logs=None)[source]#

Callback function on training batch ended.

class otx.algorithms.common.utils.UncopiableDefaultDict[source]#

Bases: defaultdict

defaultdict subclass that avoids deepcopy.
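
The idea can be sketched in a few lines; this is a minimal illustration of the behavior described above, and the actual OTX class may differ in details:

```python
import copy
from collections import defaultdict


class UncopiableDefaultDict(defaultdict):
    """A defaultdict that returns itself instead of being deep-copied."""

    def __deepcopy__(self, memo):
        # Share the same object rather than copying it; useful when the
        # dict holds values that are expensive or impossible to copy.
        return self
```

`copy.deepcopy` on such a dict returns the original object, so nested structures containing it can still be deep-copied without duplicating its contents.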

otx.algorithms.common.utils.append_dist_rank_suffix(file_name: str | Path) str[source]#

Append distributed training rank suffix to the file name.
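
A possible sketch of this behavior, under the assumption that the launcher exposes the process rank via the `LOCAL_RANK` environment variable (as common `torch.distributed` launchers do); the real OTX implementation may obtain the rank differently:

```python
import os
from pathlib import Path


def append_dist_rank_suffix(file_name):
    """Append "_rank{N}" before the file extension in distributed runs."""
    rank = os.environ.get("LOCAL_RANK")  # assumption: launcher sets LOCAL_RANK
    if rank is None:
        return str(file_name)  # not a distributed run: leave the name as-is
    path = Path(file_name)
    return str(path.with_name(f"{path.stem}_rank{rank}{path.suffix}"))
```

This keeps each worker writing to its own file (e.g. `ckpt_rank1.pth`) instead of all ranks clobbering the same path.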

otx.algorithms.common.utils.cast_bf16_to_fp32(tensor: Tensor) Tensor[source]#

Cast a bf16 tensor to fp32 before it is processed by numpy.

Since numpy does not support bfloat16, bfloat16 tensors must be converted to float32 first.

otx.algorithms.common.utils.embed_ir_model_data(xml_file: str, data_items: Dict[Tuple[str, str], Any]) None[source]#

Embed serialized data into an IR XML file.

Parameters:
  • xml_file – a path to IR xml file.

  • data_items – a dict of serialized objects keyed by string tuples.

otx.algorithms.common.utils.get_arg_spec(fn: Callable, depth: int | None = None) Tuple[str, ...][source]#

Get argument spec of function.
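
A minimal sketch of what such a helper can look like, built on `inspect.signature`; the interpretation of `depth` here (unwrapping decorated functions via `__wrapped__`) is an assumption, and the real OTX helper may resolve wrappers or class hierarchies differently:

```python
import inspect


def get_arg_spec(fn, depth=None):
    """Return the argument names of fn as a tuple."""
    for _ in range(depth or 0):
        # Assumption: depth unwraps decorated functions via __wrapped__.
        fn = getattr(fn, "__wrapped__", fn)
    return tuple(inspect.signature(fn).parameters)
```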

otx.algorithms.common.utils.get_cfg_based_on_device(cfg_file_path: str | Path) str[source]#

Find a config file according to device.

otx.algorithms.common.utils.get_cls_img_indices(labels, dataset)[source]#

Return image indices grouped by class.

Parameters:
  • labels (List[LabelEntity]) – List of labels

  • dataset (DatasetEntity) – dataset entity
otx.algorithms.common.utils.get_default_async_reqs_num() int[source]#

Return a default number of infer requests for OpenVINO models.
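
One common heuristic for such a default is to derive it from the host's core count. This is only a sketch of that idea, not the value OTX actually uses:

```python
import os


def get_default_async_reqs_num() -> int:
    """Heuristic default for the number of parallel infer requests."""
    # Use half the logical cores, but never fewer than one request.
    return max(1, (os.cpu_count() or 1) // 2)
```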

otx.algorithms.common.utils.get_image(results: Dict[str, Any], cache_dir: str, to_float32=False) ndarray[source]#

Load an image and cache it if it’s a training video frame.

Parameters:
  • results (Dict[str, Any]) – A dictionary that contains information about the dataset item.

  • cache_dir (str) – A directory path where the cached images will be stored.

  • to_float32 (bool, optional) – A flag indicating whether to convert the image to float32. Defaults to False.

Returns:

The loaded image.

Return type:

np.ndarray

otx.algorithms.common.utils.get_old_new_img_indices(labels, new_classes, dataset)[source]#

Return old and new image indices of a dataset.

Parameters:
  • labels (List[LabelEntity]) – List of labels

  • new_classes (List[str]) – List of new classes

  • dataset (DatasetEntity) – dataset entity
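
The partitioning can be sketched as follows, under the assumption that each dataset item exposes the labels it contains via an `item.get_labels()` method returning objects with a `name` attribute; the real `DatasetEntity` API and return structure may differ:

```python
def get_old_new_img_indices(labels, new_classes, dataset):
    """Split dataset indices into items with and without new classes."""
    ids_old, ids_new = [], []
    new_set = set(new_classes)
    for idx, item in enumerate(dataset):
        # Assumption: each item exposes its labels via item.get_labels().
        names = {label.name for label in item.get_labels()}
        (ids_new if names & new_set else ids_old).append(idx)
    return {"old": ids_old, "new": ids_new}
```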

otx.algorithms.common.utils.get_task_class(path: str)[source]#

Return the task class at the given import path.
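
A helper like this typically resolves a dotted `module.Class` path with `importlib`; a minimal sketch of that pattern, assuming the path format:

```python
import importlib


def get_task_class(path: str):
    """Import and return a class from a dotted "module.Class" path."""
    module_name, _, class_name = path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```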

otx.algorithms.common.utils.is_hpu_available() bool[source]#

Check whether an HPU device is available.

otx.algorithms.common.utils.is_xpu_available() bool[source]#

Check whether an XPU device is available.

otx.algorithms.common.utils.load_template(path)[source]#

Load a model template from a path.

otx.algorithms.common.utils.read_py_config(filename: str) Dict[source]#

Read a Python config file into a dict.

otx.algorithms.common.utils.set_random_seed(seed, logger=None, deterministic=False)[source]#

Set random seed.

Parameters:
  • seed (int) – Seed to be used.

  • logger (logging.Logger) – Logger for logging seed info.

  • deterministic (bool) – Whether to set the deterministic option for CUDNN backend, i.e., set torch.backends.cudnn.deterministic to True and torch.backends.cudnn.benchmark to False. Default: False.
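
A sketch of how these parameters fit together, treating torch as an optional dependency; the exact set of RNGs the real implementation seeds (e.g. numpy, CUDA) is an assumption here:

```python
import random


def set_random_seed(seed, logger=None, deterministic=False):
    """Seed Python's RNG and, when torch is installed, torch's as well."""
    random.seed(seed)
    try:
        import torch  # optional dependency in this sketch

        torch.manual_seed(seed)
        if deterministic:
            # Trade speed for reproducibility on the CUDNN backend.
            torch.backends.cudnn.deterministic = True
            torch.backends.cudnn.benchmark = False
    except ImportError:
        pass
    if logger is not None:
        logger.info("Random seed set to %s (deterministic=%s)", seed, deterministic)
```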