otx.core.config.data#

Config data type objects for data.

Classes

SamplerConfig(class_path, init_args, ...)

Configuration class for defining the sampler used in the data loading process.

SubsetConfig(batch_size, subset_name, ...)

DTO for dataset subset configuration.

TileConfig([enable_tiler, ...])

DTO for tiler configuration.

UnlabeledDataConfig(batch_size, subset_name, ...)

DTO for unlabeled data.

VisualPromptingConfig([use_bbox, use_point])

DTO for visual prompting data module configuration.

class otx.core.config.data.SamplerConfig(class_path: str = 'torch.utils.data.RandomSampler', init_args: dict[str, ~typing.Any] = <factory>)[source]#

Bases: object

Configuration class for defining the sampler used in the data loading process.

This is passed in the form of a dataclass, which is instantiated when the dataloader is created.

[TODO]: Need to replace this with a proper Sampler class. Currently, SamplerConfig sits inside SubsetConfig as a nested dataclass (a dataclass within a dataclass), which is not easy to instantiate from the CLI. For now, the sampler is therefore described by this lightweight dataclass, which mirrors the configuration of the underlying sampler object and provides only limited functionality.
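
A minimal usage sketch (not from the source docs): class_path names any sampler reachable by its dotted import path, and init_args are presumably forwarded to that sampler's constructor when the dataloader is built (the data_source argument being supplied at that point).

```python
from otx.core.config.data import SamplerConfig

# Default: torch.utils.data.RandomSampler with no extra constructor arguments.
sampler_config = SamplerConfig()

# Illustrative only: extra keyword arguments for the chosen sampler go in init_args.
custom_sampler_config = SamplerConfig(
    class_path="torch.utils.data.RandomSampler",
    init_args={"replacement": True, "num_samples": 1024},
)
```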

class otx.core.config.data.SubsetConfig(batch_size: int, subset_name: str, transforms: list[dict[str, ~typing.Any]], transform_lib_type: ~otx.core.types.transformer_libs.TransformLibType = TransformLibType.TORCHVISION, num_workers: int = 2, sampler: ~otx.core.config.data.SamplerConfig = <factory>, to_tv_image: bool = True, input_size: ~typing.Any | None = None)[source]#

Bases: object

DTO for dataset subset configuration.

batch_size#

Batch size produced.

Type:

int

subset_name#

Datumaro Dataset’s subset name for this subset config. It can differ from how the subset is actually used (e.g., a Datumaro subset named ‘val’ can back a subset config with a different role).

Type:

str

transforms#

List of actually used transforms. For TransformLibType.TORCHVISION, it accepts a list of torchvision.transforms.v2.* Python objects or a torchvision.transforms.v2.Compose. Otherwise, it takes a list of Python dictionaries that follow the configuration style used in mmcv (TransformLibType.MMCV, TransformLibType.MMPRETRAIN, …).

Type:

list[dict[str, Any] | Transform] | Compose

transform_lib_type#

Transform library type used by this subset.

Type:

TransformLibType

num_workers#

Number of workers for the dataloader of this subset.

Type:

int

input_size#

Input size the model expects. If $(input_size) appears in transforms, it is replaced with this value.

Type:

int | tuple[int, int] | None

Example

```python
train_subset_config = SubsetConfig(
    batch_size=64,
    subset_name="train",
    transforms=v2.Compose(
        [
            v2.RandomResizedCrop(size=(224, 224), antialias=True),
            v2.RandomHorizontalFlip(p=0.5),
            v2.ToDtype(torch.float32, scale=True),
            v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ],
    ),
    transform_lib_type=TransformLibType.TORCHVISION,
    num_workers=2,
)
```
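
A hedged sketch of the $(input_size) substitution described above: with an mmcv-style transform dictionary, the literal string "$(input_size)" is assumed to be replaced by the input_size value when the transforms are built. The "Resize" dict below is illustrative, not a verified OTX recipe.

```python
from otx.core.config.data import SubsetConfig
from otx.core.types.transformer_libs import TransformLibType

# Illustrative only: the dict mimics the mmcv configuration style, and
# "$(input_size)" is presumably substituted with input_size at build time.
val_subset_config = SubsetConfig(
    batch_size=32,
    subset_name="val",
    transforms=[{"type": "Resize", "scale": "$(input_size)"}],
    transform_lib_type=TransformLibType.MMCV,
    input_size=(512, 512),
)
```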

class otx.core.config.data.TileConfig(enable_tiler: bool = False, enable_adaptive_tiling: bool = True, tile_size: tuple[int, int] = (400, 400), overlap: float = 0.2, iou_threshold: float = 0.45, max_num_instances: int = 1500, object_tile_ratio: float = 0.03, sampling_ratio: float = 1.0, with_full_img: bool = False)[source]#

Bases: object

DTO for tiler configuration.

clone() TileConfig[source]#

Return a deep copy of this instance.
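
A minimal usage sketch with illustrative field values: enable tiling, set the tile geometry, and use clone() to derive an independent copy.

```python
from otx.core.config.data import TileConfig

# Enable tiling with 512x512 tiles and 10% overlap between neighbouring tiles;
# the remaining fields (iou_threshold, max_num_instances, ...) keep their defaults.
tile_config = TileConfig(
    enable_tiler=True,
    tile_size=(512, 512),
    overlap=0.1,
)

# clone() returns a deep copy, so modifying the copy leaves the original intact.
tile_config_for_eval = tile_config.clone()
tile_config_for_eval.with_full_img = True
```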

class otx.core.config.data.UnlabeledDataConfig(batch_size: int = 0, subset_name: str = 'unlabeled', transforms: dict[str, list[dict[str, ~typing.Any]]] = <factory>, transform_lib_type: ~otx.core.types.transformer_libs.TransformLibType = TransformLibType.TORCHVISION, num_workers: int = 2, sampler: ~otx.core.config.data.SamplerConfig = <factory>, to_tv_image: bool = True, input_size: ~typing.Any | None = None, data_root: str | None = None, data_format: str = 'image_dir')[source]#

Bases: SubsetConfig

DTO for unlabeled data.
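
A hedged sketch: the data_root path is a placeholder, and the idea that a positive batch_size together with a data_root activates the unlabeled stream is inferred from the defaults (batch_size=0, data_root=None), not stated in the source.

```python
from otx.core.config.data import UnlabeledDataConfig

# Placeholder values; batch_size defaults to 0 and data_root to None.
unlabeled_config = UnlabeledDataConfig(
    batch_size=16,
    data_root="path/to/unlabeled/images",
    data_format="image_dir",
)
```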

class otx.core.config.data.VisualPromptingConfig(use_bbox: bool = False, use_point: bool = False)[source]#

Bases: object

DTO for visual prompting data module configuration.
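
For completeness, a one-line sketch of the two prompt flags, both of which default to False.

```python
from otx.core.config.data import VisualPromptingConfig

# Provide bounding-box prompts only; point prompts stay disabled.
vp_config = VisualPromptingConfig(use_bbox=True, use_point=False)
```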