otx.cli.utils.installation

OTX installation util functions.

Functions

add_hardware_suffix_to_torch(requirement[, ...])

Add hardware suffix to the torch requirement.

get_cuda_suffix(cuda_version)

Get CUDA suffix for PyTorch or mmX versions.

get_cuda_version()

Get CUDA version installed on the system.

get_hardware_suffix([...])

Get hardware suffix for PyTorch or mmX versions.

get_mmcv_install_args(torch_requirement, ...)

Get the install arguments for MMCV.

get_module_version(module_name)

Return the version of the specified Python module.

get_requirements([module])

Get requirements of module from importlib.metadata.

get_torch_install_args(requirement)

Get the install arguments for Torch requirement.

mim_installation(requirements)

Install libraries with the mim API.

parse_requirements(requirements)

Parse requirements and return torch, mmcv, and task requirements.

patch_mmaction2()

Patch MMAction2==1.2.0 with the custom code.

update_cuda_version_with_available_torch_cuda_build(...)

Update the installed CUDA version with the highest supported CUDA version by PyTorch.

otx.cli.utils.installation.add_hardware_suffix_to_torch(requirement: Requirement, hardware_suffix: str | None = None, with_available_torch_build: bool = False) → str

Add hardware suffix to the torch requirement.

Parameters:
  • requirement (Requirement) – Requirement object comprising requirement details.

  • hardware_suffix (str | None) – Hardware suffix. If None, it will be set to the correct hardware suffix. Defaults to None.

  • with_available_torch_build (bool) – To check whether the installed CUDA version is supported by the latest available PyTorch build. Defaults to False.

Examples

>>> from pkg_resources import Requirement
>>> req = "torch>=1.13.0, <=2.0.1"
>>> requirement = Requirement.parse(req)
>>> requirement.name, requirement.specs
('torch', [('>=', '1.13.0'), ('<=', '2.0.1')])
>>> add_hardware_suffix_to_torch(requirement)
'torch>=1.13.0+cu121, <=2.0.1+cu121'

with_available_torch_build=True will use the latest available PyTorch build.

>>> req = "torch==2.0.1"
>>> requirement = Requirement.parse(req)
>>> add_hardware_suffix_to_torch(requirement, with_available_torch_build=True)
'torch==2.0.1+cu118'

It is possible to pass the hardware_suffix manually.

>>> req = "torch==2.0.1"
>>> requirement = Requirement.parse(req)
>>> add_hardware_suffix_to_torch(requirement, hardware_suffix="cu121")
'torch==2.0.1+cu121'

Raises:

ValueError – When the requirement has more than two version criteria.

Returns:

Updated torch package with the right cuda suffix.

Return type:

str

otx.cli.utils.installation.get_cuda_suffix(cuda_version: str) → str

Get CUDA suffix for PyTorch or mmX versions.

Parameters:

cuda_version (str) – CUDA version installed on the system.

Note

The CUDA version used by PyTorch is not always the same as the CUDA version installed on the system. For example, PyTorch 1.10.0 supports CUDA 11.3, while the latest CUDA build available for download may be 11.2. In that case the suffix must be based on the latest CUDA version available for PyTorch rather than the version installed on the system, so this function should be updated regularly to reflect the latest available CUDA builds.

Examples

>>> get_cuda_suffix(cuda_version="11.2")
"cu112"
>>> get_cuda_suffix(cuda_version="11.8")
"cu118"
Returns:

CUDA suffix for PyTorch or mmX version.

Return type:

str

otx.cli.utils.installation.get_cuda_version() → str | None

Get CUDA version installed on the system.

Examples

>>> # Assume that CUDA version is 11.2
>>> get_cuda_version()
"11.2"
>>> # Assume that CUDA is not installed on the system
>>> get_cuda_version()
None
Returns:

CUDA version installed on the system.

Return type:

str | None

otx.cli.utils.installation.get_hardware_suffix(with_available_torch_build: bool = False, torch_version: str | None = None) → str

Get hardware suffix for PyTorch or mmX versions.

Parameters:
  • with_available_torch_build (bool) – Whether to use the latest available PyTorch build or not. If True, the latest available PyTorch build will be used. If False, the installed PyTorch build will be used. Defaults to False.

  • torch_version (str | None) – PyTorch version. This is only used when with_available_torch_build is True.

Examples

>>> # Assume that CUDA version is 11.2
>>> get_hardware_suffix()
"cu112"
>>> # Assume that CUDA is not installed on the system
>>> get_hardware_suffix()
"cpu"

Assume that the installed CUDA version is 12.1, but the latest CUDA version available for PyTorch v2.0 is 11.8. In that case we use 11.8 instead of 12.1, because PyTorch does not support CUDA 12.1 yet. The CUDA version can be corrected by setting with_available_torch_build to True.

>>> cuda_version = get_cuda_version()
"12.1"
>>> get_hardware_suffix(with_available_torch_build=True, torch_version="2.0.1")
"cu118"
Returns:

Hardware suffix for PyTorch or mmX version.

Return type:

str

otx.cli.utils.installation.get_mmcv_install_args(torch_requirement: str | Requirement, mmcv_requirements: list[str]) → list[str]

Get the install arguments for MMCV.

Parameters:
  • torch_requirement (str | Requirement) – Torch requirement.

  • mmcv_requirements (list[str]) – MMCV requirements.

Returns:

List of mmcv install arguments.

Return type:

list[str]
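
Example

A minimal usage sketch (not taken from the library's docs): the requirement strings below are illustrative, and the exact install arguments returned depend on the PyTorch build and hardware detected on the system.

>>> from otx.cli.utils.installation import get_mmcv_install_args
>>> install_args = get_mmcv_install_args(
...     torch_requirement="torch==2.0.1",
...     mmcv_requirements=["mmengine==0.10.1", "mmcv==2.1.0"],
... )
>>> isinstance(install_args, list)
True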

otx.cli.utils.installation.get_module_version(module_name: str) → str | None

Return the version of the specified Python module.

Parameters:

module_name (str) – The name of the module to get the version of.

Returns:

The version of the module, or None if the module is not installed.

Return type:

str | None
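
Example

A small usage sketch (illustrative, not from the library's docs); the exact version string depends on the environment, so only the type of the result is checked here.

>>> from otx.cli.utils.installation import get_module_version
>>> version = get_module_version("pip")
>>> version is None or isinstance(version, str)
True
>>> get_module_version("a-package-that-is-not-installed") is None
True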

otx.cli.utils.installation.get_requirements(module: str = 'otx') → dict[str, list[Requirement]]

Get requirements of module from importlib.metadata.

This function returns the list of required packages from importlib.metadata.

Example

>>> get_requirements("otx")
{
    "api": ["attrs>=21.2.0", ...],
    "anomaly": ["anomalib==0.5.1", ...],
    ...
}
Returns:

Dictionary of required packages for each optional extra.

Return type:

dict[str, list[Requirement]]

otx.cli.utils.installation.get_torch_install_args(requirement: str | Requirement) → list[str]

Get the install arguments for Torch requirement.

This function will return the install arguments for the Torch requirement and its corresponding torchvision requirement.

Parameters:

requirement (str | Requirement) – The torch requirement.

Raises:

RuntimeError – If the OS is not supported.

Example

>>> from pkg_resources import Requirement
>>> requirement = "torch>=1.13.0"
>>> get_torch_install_args(requirement)
['--extra-index-url', 'https://download.pytorch.org/whl/cpu',
'torch==1.13.0+cpu', 'torchvision==0.14.0+cpu']
Returns:

The install arguments.

Return type:

list[str]

otx.cli.utils.installation.mim_installation(requirements: list[str]) → int

Install libraries with the mim API.

Parameters:

requirements (list[str]) – List of MMCV-related libraries.

Raises:

ModuleNotFoundError – If mim cannot be imported.
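
Example

A hedged call sketch (not from the library's docs): it assumes openmim is installed and that the listed requirement strings are resolvable; the int return value is assumed to be the exit status of the underlying install command.

>>> from otx.cli.utils.installation import mim_installation
>>> # Install MMCV-related packages through mim; 0 is assumed to mean success.
>>> status = mim_installation(["mmengine==0.10.1", "mmcv>=2.0.0"])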

otx.cli.utils.installation.parse_requirements(requirements: list[Requirement]) → tuple[str, list[str], list[str]]

Parse requirements and return torch, mmcv, and task requirements.

Parameters:

requirements (list[Requirement]) – List of requirements.

Raises:

ValueError – If torch requirement is not found.

Examples

>>> requirements = [
...     Requirement.parse("torch==1.13.0"),
...     Requirement.parse("mmcv-full==1.7.0"),
...     Requirement.parse("mmcls==0.12.0"),
...     Requirement.parse("onnx>=1.8.1"),
... ]
>>> parse_requirements(requirements=requirements)
('torch==1.13.0',
['mmcv-full==1.7.0', 'mmcls==0.12.0'],
['onnx>=1.8.1'])
Returns:

Tuple of torch, mmcv and other requirements.

Return type:

tuple[str, list[str], list[str]]

otx.cli.utils.installation.patch_mmaction2() → None

Patch MMAction2==1.2.0 with the custom code.

The patch is located at src/otx/cli/patches/mmaction2.patch. It is needed because __init__.py is missing in open-mmlab/mmaction2.
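
Example

A minimal call sketch (illustrative), assuming MMAction2 1.2.0 is already installed so the patch can be applied to it.

>>> from otx.cli.utils.installation import patch_mmaction2
>>> patch_mmaction2()  # applies src/otx/cli/patches/mmaction2.patch to the installed mmaction2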

otx.cli.utils.installation.update_cuda_version_with_available_torch_cuda_build(cuda_version: str, torch_version: str) → str

Update the installed CUDA version with the highest supported CUDA version by PyTorch.

Parameters:
  • cuda_version (str) – The installed CUDA version.

  • torch_version (str) – The PyTorch version.

Raises:

Warning – If the installed CUDA version is not supported by PyTorch.

Examples

>>> update_cuda_version_with_available_torch_cuda_build("11.1", "1.13.0")
"11.6"
>>> update_cuda_version_with_available_torch_cuda_build("11.7", "1.13.0")
"11.7"
>>> update_cuda_version_with_available_torch_cuda_build("11.8", "1.13.0")
"11.7"
>>> update_cuda_version_with_available_torch_cuda_build("12.1", "2.0.1")
"11.8"
Returns:

The updated CUDA version.

Return type:

str