otx.api.utils.tiler#
Tiling Module.
Classes
Tiler – Tile Image into (non)overlapping Patches.
- class otx.api.utils.tiler.Tiler(tile_size: int, overlap: float, max_number: int, detector: Model, classifier: ImageModel | None = None, segm: bool = False, mode: str = 'async', num_classes: int = 0)[source]#
Bases:
object
Tile Image into (non)overlapping Patches. Images are tiled so that large images can be processed efficiently.
- Parameters:
tile_size – Tile dimension for each patch
overlap – Overlap between adjacent tiles
max_number – maximum number of predictions per image
detector – OpenVINO adaptor model
classifier – Tile classifier OpenVINO adaptor model
segm – enable instance segmentation mask output
mode – inference mode, 'async' or 'sync'
num_classes – number of classes
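The constructor's `tile_size` and `overlap` together determine how many patches an image is split into. A minimal sketch of how overlapping tile coordinates can be enumerated (this is an illustration of the scheme, not the library's internal implementation; `compute_tile_coords` is a hypothetical helper):

```python
def compute_tile_coords(height: int, width: int, tile_size: int, overlap: float):
    """Enumerate [x1, y1, x2, y2] tile coordinates covering the image.

    Adjacent tiles overlap by roughly ``overlap * tile_size`` pixels; an
    extra row/column is appended so the right and bottom borders stay covered.
    """
    stride = max(1, int(tile_size * (1 - overlap)))
    ys = list(range(0, max(height - tile_size, 0) + 1, stride))
    xs = list(range(0, max(width - tile_size, 0) + 1, stride))
    if ys[-1] + tile_size < height:  # cover the bottom border
        ys.append(height - tile_size)
    if xs[-1] + tile_size < width:   # cover the right border
        xs.append(width - tile_size)
    return [[x, y, min(x + tile_size, width), min(y + tile_size, height)]
            for y in ys for x in xs]
```

With `overlap=0.5` the stride is half the tile size, so a 100x100 image and 60-pixel tiles produce a 3x3 grid of overlapping patches.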
- crop_tile(image: ndarray, coord: List[int]) ndarray [source]#
Crop tile from full image.
- Parameters:
image (np.ndarray) – full-res image
coord (List[int]) – tile coordinates
- Returns:
cropped tile
- Return type:
np.ndarray
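Cropping a tile amounts to slicing the full-resolution array. A minimal sketch, assuming the coordinates are ordered `[x1, y1, x2, y2]` (the exact order used by the library may differ):

```python
import numpy as np

def crop_tile(image: np.ndarray, coord):
    """Slice one tile out of the full-resolution image.

    NumPy indexes rows (y) first, then columns (x), so the assumed
    [x1, y1, x2, y2] coordinates are applied as image[y1:y2, x1:x2].
    The result is a view, not a copy.
    """
    x1, y1, x2, y2 = coord
    return image[y1:y2, x1:x2]
```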
- static detection2tuple(detections: ndarray)[source]#
Convert detection to tuple.
- Parameters:
detections (np.ndarray) – prediction results in numpy array
- Returns:
scores (np.ndarray): scores between 0 and 1
labels (np.ndarray): label indices
boxes (np.ndarray): boxes
- Return type:
Tuple[np.ndarray, np.ndarray, np.ndarray]
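Splitting a packed detection array into per-field arrays can be sketched as below. The assumed row layout `[label, score, x1, y1, x2, y2]` is an illustration; the actual column order in the library may differ:

```python
import numpy as np

def detection2tuple(detections: np.ndarray):
    """Split an (N, 6) detection array into (scores, labels, boxes).

    Assumed row layout (hypothetical): [label, score, x1, y1, x2, y2].
    """
    labels = detections[:, 0].astype(int)  # label indices
    scores = detections[:, 1]              # confidence in [0, 1]
    boxes = detections[:, 2:]              # [x1, y1, x2, y2] per row
    return scores, labels, boxes
```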
- filter_tiles_by_objectness(image: ndarray, tile_coords: List[List[int]], confidence_threshold: float = 0.35)[source]#
Filter tiles by objectness score by running tile classifier.
- Parameters:
image (np.ndarray) – full size image
tile_coords (List[List[int]]) – tile coordinates
confidence_threshold (float) – objectness threshold below which tiles are dropped
- Returns:
tile coordinates to keep
- Return type:
keep_coords (List[List[int]])
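The filtering step can be sketched with a stand-in scoring function in place of the OpenVINO tile classifier (`objectness_fn` here is a hypothetical placeholder, not part of the library's API):

```python
import numpy as np

def filter_tiles(image, tile_coords, objectness_fn, confidence_threshold=0.35):
    """Keep only tiles whose objectness score clears the threshold.

    objectness_fn stands in for the tile-classifier model: it maps a
    tile crop to a scalar score in [0, 1].
    """
    keep_coords = []
    for x1, y1, x2, y2 in tile_coords:
        if objectness_fn(image[y1:y2, x1:x2]) >= confidence_threshold:
            keep_coords.append([x1, y1, x2, y2])
    return keep_coords
```

Skipping empty tiles this way avoids running the (more expensive) detector on background-only regions.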
- get_tiling_saliency_map_from_segm_masks(detections: Tuple | ndarray) List [source]#
Post-process function for the saliency map of the OTX MaskRCNN model with tiling.
- merge_features(features: List, predictions: Tuple | ndarray) Tuple[None, None] | List[ndarray] [source]#
Merge tile-level feature vectors to image-level features.
- Parameters:
features – tile-level features.
predictions – predictions with masks for whole image.
- Returns:
image_vector (np.ndarray): merged feature vector for the entire image
image_saliency_map (List): merged saliency map for the entire image
- Return type:
Tuple[None, None] | List[np.ndarray]
- merge_maps(features: List) ndarray [source]#
Merge tile-level saliency maps to image-level saliency map.
- Parameters:
features – tile-level features ((vector, map: np.array), tile_meta). Each saliency map is a list of maps for each detected class, or None if the class wasn't detected.
- Returns:
Merged saliency maps for entire image.
- Return type:
merged_maps (np.ndarray)
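Merging tile-level saliency maps means pasting each tile's map back at its offset in a full-image canvas. A minimal sketch, assuming overlaps are resolved with an elementwise maximum (the library's actual blending rule may differ):

```python
import numpy as np

def merge_maps(tile_maps, image_shape):
    """Paste per-tile saliency maps into a full-image canvas.

    tile_maps: list of (saliency_map, (x1, y1, x2, y2)) pairs; a map may
    be None for tiles where the class was not detected.
    Overlapping regions keep the elementwise maximum.
    """
    merged = np.zeros(image_shape, dtype=np.float32)
    for smap, (x1, y1, x2, y2) in tile_maps:
        if smap is None:
            continue  # class not detected in this tile
        merged[y1:y2, x1:x2] = np.maximum(merged[y1:y2, x1:x2], smap)
    return merged
```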
- merge_results(results: List[Dict], shape: List[int])[source]#
Merge results from tiles.
- Parameters:
results (List[Dict]) – list of tile results
shape (List[int]) – original full-res image shape
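The core of merging tile results is translating each tile's boxes from tile-local to full-image coordinates. A simplified sketch (the real method additionally deduplicates detections that straddle overlapping tiles and clips to the full-res shape; the `offset` field here is a hypothetical input format):

```python
def merge_boxes(tile_results):
    """Shift each tile's boxes by the tile's top-left offset so every
    box lives in full-image coordinates, then concatenate.

    tile_results: list of dicts with 'boxes' ([[x1, y1, x2, y2], ...],
    tile-local) and 'offset' ((ox, oy), the tile's position in the image).
    """
    merged = []
    for result in tile_results:
        ox, oy = result["offset"]
        for x1, y1, x2, y2 in result["boxes"]:
            merged.append([x1 + ox, y1 + oy, x2 + ox, y2 + oy])
    return merged
```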
- merge_vectors(features: List) ndarray [source]#
Merge tile-level feature vectors to image-level feature vector.
- Parameters:
features – tile-level features.
- Returns:
Merged vectors for entire image.
- Return type:
merged_vectors (np.ndarray)
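One plausible reduction from many tile-level vectors to a single image-level vector is an elementwise mean; the sketch below illustrates that idea and is not necessarily the reduction the library uses:

```python
import numpy as np

def merge_vectors(features):
    """Average tile-level feature vectors into one image-level vector.

    Tiles without a feature vector (None) are skipped.
    """
    vectors = [np.asarray(v) for v in features if v is not None]
    if not vectors:
        return None
    return np.mean(vectors, axis=0)
```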
- postprocess_tile(predictions: DetectionResult, offset_x: int, offset_y: int) Dict[str, List] [source]#
Postprocess single tile prediction.
- predict(image: ndarray, mode: str = 'async')[source]#
Predict by cropping full image to tiles.
- Parameters:
image (np.ndarray) – full size image
mode (str) – 'async' or 'sync' inference mode
- Returns:
detection: prediction results
features: saliency map and feature vector
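End to end, tiled prediction ties the pieces together: tile the image, run the detector on each crop, and translate the boxes back into image coordinates. A self-contained sketch with a stand-in `detector` callable in place of the OpenVINO model (hypothetical interface, for illustration only):

```python
import numpy as np

def predict_tiled(image, detector, tile_size=64, stride=None):
    """Sketch of tiled prediction: tile, detect per crop, re-offset boxes.

    detector stands in for the model: it maps a crop to a list of
    [score, x1, y1, x2, y2] in tile-local coordinates.
    """
    h, w = image.shape[:2]
    stride = stride or tile_size  # no overlap by default
    detections = []
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            crop = image[y:y + tile_size, x:x + tile_size]
            for score, x1, y1, x2, y2 in detector(crop):
                # shift tile-local boxes back to full-image coordinates
                detections.append([score, x1 + x, y1 + y, x2 + x, y2 + y])
    return detections
```

In the real class, the per-tile results would additionally be deduplicated across overlapping tiles and capped at `max_number` predictions.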
- predict_async(image: ndarray, tile_coords: List[List[int]])[source]#
Predict by cropping full image to tiles asynchronously.
- Parameters:
image (np.ndarray) – full size image
tile_coords (List[List[int]]) – tile coordinates
- Returns:
detection: prediction results
features: saliency map and feature vector
- predict_sync(image: ndarray, tile_coords: List[List[int]])[source]#
Predict by cropping full image to tiles synchronously.
- Parameters:
image (np.ndarray) – full size image
tile_coords (List[List[int]]) – tile coordinates
- Returns:
detection: prediction results
features: saliency map and feature vector