Model#

exception model_api.models.model.WrapperError(wrapper_name, message)#

Bases: Exception

The class for errors that occur in Model API wrappers

class model_api.models.model.Model(inference_adapter, configuration={}, preload=False)#

Bases: object

An abstract model wrapper

The abstract model wrapper is free from any executor dependencies. It stores the InferenceAdapter instance that holds the provided model and defines the model inputs/outputs.

Next, it loads the provided configuration variables and sets them as wrapper attributes. The keys of the configuration dictionary must be present in the parameters method.

It also wraps the following parts of the adapter interface:
  • Loading the model to the device

  • Reshaping the model

  • Synchronous model inference

  • Asynchronous model inference

The preprocess and postprocess methods must be implemented in a specific inherited wrapper.

logger#

instance of the Logger

Type:

Logger

inference_adapter#

allows working with the specified executor

Type:

InferenceAdapter

inputs#

holds the model input names and the Metadata structure for each one

Type:

dict

outputs#

holds the model output names and the Metadata structure for each one

Type:

dict

model_loaded#

a flag indicating whether the model is loaded to the device

Type:

bool

Model constructor

Parameters:
  • inference_adapter (InferenceAdapter) – allows working with the specified executor

  • configuration (dict, optional) – values for parameters accepted by the specific wrapper (confidence_threshold, labels, etc.), which are set as data attributes

  • preload (bool, optional) – a flag indicating whether the model is loaded to the device during initialization. If preload=False, the model must be loaded via the load method before inference

Raises:

WrapperError – if the wrapper configuration is incorrect
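A minimal sketch of constructing a wrapper directly from an adapter (an alternative to create_model below). The OpenvinoAdapter/create_core import path, the SSD wrapper, and the model path are assumptions; adjust them to the wrappers available in your installation:

    from model_api.adapters import OpenvinoAdapter, create_core
    from model_api.models import SSD

    # Build an adapter around a hypothetical IR model and pass it to a concrete wrapper.
    adapter = OpenvinoAdapter(create_core(), "model.xml", device="CPU")
    # The configuration keys must appear in the wrapper's parameters() method.
    model = SSD(adapter, configuration={"confidence_threshold": 0.6}, preload=True)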

__call__(inputs)#

Applies the preprocessing, synchronous inference, and postprocessing routines in a single call.

Parameters:

inputs (ndarray) – raw input data; the data type is defined by the wrapper

Returns:

  • postprocessed data in the format defined by the wrapper
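A sketch of a single synchronous call, assuming model is an image wrapper created as shown above and "input.jpg" is a hypothetical path:

    import cv2

    image = cv2.imread("input.jpg")      # raw input in the format the wrapper expects
    results = model(image)               # preprocess -> infer_sync -> postprocess in one call
    print(results)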

classmethod available_wrappers()#
await_all()#
await_any()#
classmethod create_model(model, model_type=None, configuration={}, preload=True, core=None, weights_path=None, adaptor_parameters={}, device='AUTO', nstreams='1', nthreads=None, max_num_requests=0, precision='FP16', download_dir=None, cache_dir=None)#

Create an instance of the Model API model

Parameters:
  • model (str | InferenceAdapter) – model name from OpenVINO Model Zoo, path to model, OVMS URL, or an adapter

  • configuration (dict, optional) – dictionary of model config with model properties, for example confidence_threshold, labels

  • model_type (str, optional) – name of model wrapper to create (e.g. “ssd”)

  • preload (bool, optional) – whether to call load_model(). Can be set to False to reshape the model before loading.

  • core (optional) – openvino.Core instance, passed to OpenvinoAdapter

  • weights_path (str, optional) – path to .bin file with model weights

  • adaptor_parameters (dict, optional) – parameters of ModelAdaptor

  • device (str, optional) – name of OpenVINO device (e.g. “CPU, GPU”)

  • nstreams (int, optional) – number of inference streams

  • nthreads (int, optional) – number of threads to use for inference on CPU

  • max_num_requests (int, optional) – number of infer requests for asynchronous inference

  • precision (str, optional) – inference precision (e.g. “FP16”)

  • download_dir (str, optional) – directory where to store downloaded models

  • cache_dir (str, optional) – directory where compiled models are stored to reduce load time before inference.

Return type:

Any

Returns:

Model object
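A minimal sketch of creating a model through create_model; the model path and configuration keys are hypothetical and depend on the wrapper chosen via model_type:

    from model_api.models import Model

    model = Model.create_model(
        "model.xml",                                   # path to the model (or a model name / OVMS URL)
        model_type="ssd",                              # name of the wrapper to instantiate
        configuration={"confidence_threshold": 0.5},   # wrapper properties
        device="CPU",
    )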

get_model()#
classmethod get_model_class(name)#
Return type:

Type

classmethod get_subclasses()#
Return type:

list[Any]

infer_async(input_data, user_data)#
infer_async_raw(dict_data, callback_data)#
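A sketch of the asynchronous workflow built from set_callback, infer_async, and await_all. The callback signature (the result plus the user_data passed to infer_async) is an assumption; verify it against your version:

    results = {}

    def on_result(result, user_data):
        # Store the result keyed by the user data passed to infer_async.
        results[user_data] = result

    model.set_callback(on_result)
    for i, frame in enumerate(frames):   # `frames` is a hypothetical iterable of images
        model.infer_async(frame, i)      # schedule an asynchronous request for this frame
    model.await_all()                    # block until every request has completed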
infer_batch(inputs)#

Applies the preprocessing, asynchronous inference, and postprocessing routines to a collection of inputs.

Parameters:

inputs (list) – a list of inputs for inference

Returns:

a list of inference results

Return type:

list
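A short usage sketch, where frames is a hypothetical list of images prepared by the caller:

    batch_results = model.infer_batch(frames)   # one postprocessed result per input
    for result in batch_results:
        print(result)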

infer_sync(dict_data)#
Return type:

dict[str, ndarray]

is_ready()#
load(force=False)#
Return type:

None
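A sketch of deferred loading: create the model with preload=False, reshape it, then load. The shape dictionary passed to reshape() is an assumption; check the structure expected by your adapter:

    model = Model.create_model("model.xml", model_type="ssd", preload=False)
    model.reshape({"image": (1, 3, 736, 992)})   # hypothetical input name and shape
    model.load()                                 # compile and load the reshaped model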

log_layers_info()#

Prints the shape, precision and layout for all model inputs/outputs.

classmethod parameters()#

Defines the description and type of configurable data parameters for the wrapper.

See types.py to find the available types of data parameters. For each parameter, the type, default value, and description must be provided.

An example of a possible data parameter:

    'confidence_threshold': NumericalValue(
        default_value=0.5,
        description="Threshold value for detection box confidence",
    )

The method must be implemented in each specific inherited wrapper.

Return type:

dict[str, Any]

Returns:

  • the dictionary with defined wrapper data parameters
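A sketch of the usual override pattern in an inherited wrapper: extend the dictionary returned by super().parameters() with the wrapper's own entries. The NumericalValue import path and the __model__ attribute are assumptions:

    from model_api.models import Model
    from model_api.models.types import NumericalValue   # types.py mentioned above; path assumed

    class MyWrapper(Model):
        __model__ = "MyWrapper"   # hypothetical name used for wrapper lookup

        @classmethod
        def parameters(cls):
            parameters = super().parameters()
            parameters.update({
                "confidence_threshold": NumericalValue(
                    default_value=0.5,
                    description="Threshold value for detection box confidence",
                ),
            })
            return parameters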

postprocess(outputs, meta)#

Interface for postprocess method.

Parameters:
  • outputs (dict) – model raw output in the following format:

        {
            'output_layer_name_1': raw_result_1,
            'output_layer_name_2': raw_result_2,
            ...
        }

  • meta (dict) – the input metadata obtained from the preprocess method

Returns:

  • postprocessed data in the format defined by the wrapper
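A sketch of the low-level pipeline that __call__ performs, showing where the outputs and meta arguments of postprocess come from; image is a hypothetical raw input:

    dict_data, meta = model.preprocess(image)      # input dict + metadata
    raw_outputs = model.infer_sync(dict_data)      # {'output_layer_name': ndarray, ...}
    result = model.postprocess(raw_outputs, meta)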

preprocess(inputs)#

Interface for preprocess method.

Parameters:

inputs – raw input data; the data type is defined by the wrapper

Returns:

  • the preprocessed data which is submitted to the model for inference and has the following format:

        {
            'input_layer_name_1': data_1,
            'input_layer_name_2': data_2,
            ...
        }

  • the input metadata, which might be used in the postprocess method
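A sketch of a preprocess implementation in a hypothetical wrapper that produces the dictionary/metadata pair described above; the input layer name "image" and the resize target attributes (self.w, self.h) are assumptions:

    import cv2
    import numpy as np

    from model_api.models import Model

    class MyWrapper(Model):
        def preprocess(self, inputs):
            resized = cv2.resize(inputs, (self.w, self.h))           # fit the model input size
            tensor = np.expand_dims(resized.transpose(2, 0, 1), 0)   # HWC -> NCHW
            dict_data = {"image": tensor.astype(np.float32)}
            meta = {"original_shape": inputs.shape}                  # reused in postprocess
            return dict_data, meta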

static process_callback(request, callback_data)#
classmethod raise_error(message)#

Raises the WrapperError.

Parameters:

message (str) – error message to be shown in the following format: “WrapperName: message”

Return type:

NoReturn
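A short sketch of using raise_error inside a wrapper's validation code; the condition and message are illustrative:

    if len(self.inputs) != 1:
        self.raise_error("the wrapper supports only models with a single input")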

reshape(new_shape)#
save(path, weights_path='', version='UNSPECIFIED')#
set_callback(callback_fn)#