Model configuration#
Model’s static method create_model() has two overloads. One constructs the model from a string (a path or a model name), and the other takes an already constructed InferenceAdapter. The first overload configures the created model with values taken from the configuration dict argument and from the model’s intermediate representation (IR) stored in the .xml file, in the model_info section of rt_info. Values provided in configuration take priority over values in the IR rt_info. If a value is specified neither in configuration nor in rt_info, the default value for the model wrapper is used. In Python, configuration values are accessible as model wrapper member fields, as the sketch below shows.
List of values#
The list includes only those model wrappers that introduce new configuration values in their hierarchy.
model_type
: str - name of a model wrapper to be created
layout
: str - layout of input data in the format: “input0:NCHW,input1:NC”
ImageModel and its subclasses#
mean_values
: List - normalization values, which will be subtracted from the image channels of the image-input layer during preprocessing
scale_values
: List - normalization values by which the image channels of the image-input layer will be divided
reverse_input_channels
: bool - reverse the input channel order
resize_type
: str - crop, standard, fit_to_window or fit_to_window_letterbox
embedded_processing
: bool - flag indicating whether pre/postprocessing is embedded into the model
pad_value
: int - pad value for resize_image_letterbox embedded into a model
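A sketch of how these preprocessing values are typically supplied is shown below; the ClassificationModel wrapper, the file name and the numeric values are placeholder assumptions.

```python
# Sketch: preprocessing-related configuration for an ImageModel subclass.
# The wrapper, file name and numbers below are placeholders.
from model_api.models import ClassificationModel

model = ClassificationModel.create_model(
    "classifier.xml",
    configuration={
        "mean_values": [123.675, 116.28, 103.53],  # subtracted from each image channel
        "scale_values": [58.395, 57.12, 57.375],   # each image channel is divided by these
        "reverse_input_channels": True,            # swap the channel order (e.g. BGR <-> RGB)
        "resize_type": "fit_to_window_letterbox",  # one of the listed resize modes
        "pad_value": 114,                          # padding used by the letterbox resize
    },
)
```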
AnomalyDetection#
image_shape
: List - Input shape of the model
image_threshold
: float - Image threshold that is used for classifying an image as anomalous
pixel_threshold
: float - Pixel level threshold used to segment anomalous regions in the image
normalization_scale
: float - Scale by which the outputs are divided. Used to apply min-max normalization
task
: str - Outputs segmentation masks, bounding boxes, or anomaly score based on the task type
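A configuration sketch for illustration; the threshold values and the task string are assumptions, not values prescribed by the wrapper.

```python
# Sketch: AnomalyDetection configuration values (all numbers and the task
# string are illustrative assumptions).
anomaly_configuration = {
    "image_shape": [256, 256],    # input shape of the model
    "image_threshold": 0.5,       # image-level anomaly decision threshold
    "pixel_threshold": 0.5,       # pixel-level threshold for anomalous regions
    "normalization_scale": 1.0,   # outputs are divided by this (min-max normalization)
    "task": "segmentation",       # output masks, boxes or a score depending on the task
}
```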
ClassificationModel#
topk
: int - number of most likely labels
labels
: List - list of class labels
path_to_labels
: str - path to file with labels. Overrides the labels, if they are set via the ‘labels’ parameter
multilabel
: bool - predict a set of labels per image
hierarchical
: bool - predict a hierarchy of labels per image; requires hierarchical_config
confidence_threshold
: float - probability threshold value for filtering multilabel or hierarchical predictions
hierarchical_config
: str - a serialized configuration for decoding hierarchical predictions
output_raw_scores
: bool - output all scores for multiclass classification
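For illustration, a configuration dict for a multiclass classifier might look like the sketch below; the file name and values are assumptions.

```python
# Sketch: ClassificationModel-specific configuration values (illustrative).
classification_configuration = {
    "topk": 5,                       # report the 5 most likely labels
    "path_to_labels": "labels.txt",  # overrides "labels" if both are provided
    "output_raw_scores": True,       # also return all per-class scores
    # For multilabel or hierarchical models, set "multilabel" or "hierarchical"
    # and tune "confidence_threshold" to filter the predictions.
}
```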
DetectionModel and its subclasses#
confidence_threshold
: float - probability threshold value for bounding box filtering
labels
: List - List of class labels
path_to_labels
: str - path to file with labels. Overrides the labels, if they are set via the labels parameter
CTPN#
iou_threshold
: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering
input_size
: List - image resolution which is going to be processed. Reshapes network to match a given size
FaceBoxes#
iou_threshold
: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering
NanoDet#
iou_threshold
: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering
num_classes
: int - number of classes
UltraLightweightFaceDetection#
iou_threshold
: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering
YOLO and its subclasses#
iou_threshold
: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering
YoloV4#
anchors
: List - list of custom anchor values
masks
: List - list of masks, applied to anchors for each output layer
YOLOv5, YOLOv8#
agnostic_nms
: bool - if True, the model is agnostic to the number of classes, and all classes are considered as one
iou_threshold
: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering
YOLOX#
iou_threshold
: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering
HpeAssociativeEmbedding#
target_size
: int - image resolution which is going to be processed. Reshapes network to match a given size
aspect_ratio
: float - image aspect ratio which is going to be processed. Reshapes network to match a given size
confidence_threshold
: float - pose confidence threshold
delta
: float
size_divisor
: int - width and height of the reshaped model will be a multiple of this value
padding_mode
: str - center or right_bottom
OpenPose#
target_size
: int - image resolution which is going to be processed. Reshapes network to match a given size
aspect_ratio
: float - image aspect ratio which is going to be processed. Reshapes network to match a given size
confidence_threshold
: float - pose confidence threshold
upsample_ratio
: int - upsample ratio of a model backbone
size_divisor
: int - width and height of the reshaped model will be a multiple of this value
MaskRCNNModel#
confidence_threshold
: float - probability threshold value for bounding box filtering
labels
: List - list of class labels
path_to_labels
: str - path to file with labels. Overrides the labels, if they are set via the labels parameter
postprocess_semantic_masks
: bool - resize and apply 0.5 threshold to instance segmentation masks
SegmentationModel and its subclasses#
labels
: List - list of class labels
path_to_labels
: str - path to file with labels. Overrides the labels, if they are set via the ‘labels’ parameter
blur_strength
: int - blurring kernel size. -1 value means no blurring and no soft_threshold
soft_threshold
: float - probability threshold value for filtering. inf value means no blurring and no soft_threshold
return_soft_prediction
: bool - return raw resized model prediction in addition to processed one
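A sketch under the same assumptions as above; the wrapper name, file name and values are placeholders.

```python
# Sketch: SegmentationModel configuration (wrapper, file name and values are
# placeholder assumptions).
from model_api.models import SegmentationModel

model = SegmentationModel.create_model(
    "segmenter.xml",
    configuration={
        "blur_strength": 5,              # blurring kernel size; -1 disables blurring
        "soft_threshold": 0.5,           # probability threshold; float("inf") disables it
        "return_soft_prediction": True,  # also return the raw resized prediction
    },
)
```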
ActionClassificationModel#
labels
: List - list of class labels
path_to_labels
: str - path to file with labels. Overrides the labels, if they are set via the ‘labels’ parameter
mean_values
: List - normalization values, which will be subtracted from the image channels of the image-input layer during preprocessing
pad_value
: int - pad value for resize_image_letterbox embedded into a model
resize_type
: str - crop, standard, fit_to_window or fit_to_window_letterbox
reverse_input_channels
: bool - reverse the input channel order
scale_values
: List - normalization values by which the image channels of the image-input layer will be divided
NOTE: ActionClassificationModel is not a subclass of ImageModel.
Bert and its subclasses#
vocab
: Dict - mapping from string token to int
input_names
: str - comma-separated names of input layers
enable_padding
: bool - whether the input sequence should be padded to the maximum sequence length
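As a sketch of the expected value formats (the token ids and layer names below are made-up placeholders):

```python
# Sketch: Bert-wrapper configuration values. Token ids and layer names are
# made-up placeholders for illustration.
bert_configuration = {
    "vocab": {"[PAD]": 0, "[UNK]": 100, "[CLS]": 101, "[SEP]": 102},  # token -> id mapping
    "input_names": "input_ids,attention_mask,token_type_ids",         # comma-separated input layers
    "enable_padding": True,  # pad input sequences to the maximum sequence length
}
```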
BertQuestionAnswering#
output_names
: str - comma-separated names of output layers
max_answer_token_num
: int
squad_ver
: str - SQuAD dataset version used for training. Affects postprocessing
NOTE: OTX AnomalyBase model wrapper adds image_threshold, pixel_threshold, min, max, threshold.