ONNX Adapter#

class model_api.adapters.onnx_adapter.ONNXRuntimeAdapter(model, ort_options={})#

Bases: InferenceAdapter

This inference adapter runs ONNX models via ONNX Runtime. Its functionality is limited: it supports only image models generated by OpenVINO Training Extensions (OTX: openvinotoolkit/training_extensions). Each .onnx file produced by OTX contains ModelAPI-style metadata, which is used to configure the particular model wrapper acting on top of the adapter. The supported model scope is limited to the SSD, MaskRCNNModel, SegmentationModel, and ClassificationModel wrappers. The adapter also does not provide asynchronous inference or model reshaping.

Parameters:

  • model (str) – filename of an ONNX model, or a serialized ONNX model as a byte string.

  • ort_options (dict) – parameters forwarded to onnxruntime.InferenceSession.
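For example, a minimal sketch of constructing the adapter from an OTX-exported .onnx file; the file name and the provider list below are illustrative assumptions, and the ort_options keys are expected to match onnxruntime.InferenceSession keyword arguments:

    from model_api.adapters.onnx_adapter import ONNXRuntimeAdapter

    # Hypothetical file name; any OTX-exported .onnx model applies.
    adapter = ONNXRuntimeAdapter(
        "otx_classification.onnx",
        ort_options={"providers": ["CPUExecutionProvider"]},  # forwarded to onnxruntime.InferenceSession
    )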

await_all()#

In case of asynchronous execution, waits for the completion of all busy infer requests.

await_any()#

In case of asynchronous execution, waits for the completion of any busy infer request so that it becomes available for data submission.

embed_preprocessing(layout, resize_mode, interpolation_mode, target_shape, pad_value, dtype=<class 'int'>, brg2rgb=False, mean=None, scale=None, input_idx=0)#

Adds external preprocessing steps that are applied before ONNX model execution.
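A hedged sketch of a typical call; only the parameter names come from the signature above, while the concrete layout, resize mode, interpolation identifier, target shape, and normalization statistics are assumptions for illustration:

    # All argument values below are illustrative assumptions.
    adapter.embed_preprocessing(
        layout="NCHW",
        resize_mode="standard",
        interpolation_mode="LINEAR",
        target_shape=(224, 224),
        pad_value=0,
        brg2rgb=True,
        mean=[123.675, 116.28, 103.53],
        scale=[58.395, 57.12, 57.375],
    )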

get_input_layers()#

Gets the names of the model inputs and, for each one, creates a Metadata structure containing information about the input shape, layout, precision in OpenVINO format, and optional meta.

Returns:

  • a dict containing Metadata for all inputs

get_model()#

Return a reference to the ONNXRuntime session.

get_output_layers()#

Gets the names of the model outputs and, for each one, creates a Metadata structure containing information about the output shape, layout, precision in OpenVINO format, and optional meta.

Returns:

  • a dict containing Metadata for all outputs
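For instance, the dictionaries returned by get_input_layers() and get_output_layers() can be inspected as follows (a sketch that relies only on the default representation of the Metadata objects):

    inputs = adapter.get_input_layers()
    outputs = adapter.get_output_layers()

    # Each value is a Metadata structure describing shape, layout and precision.
    for name, metadata in inputs.items():
        print("input:", name, metadata)
    for name, metadata in outputs.items():
        print("output:", name, metadata)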

get_raw_result(infer_result)#

Gets raw results from the internal inference framework representation as a dict.

Parameters:

infer_result – framework-specific result of inference from the model

Returns:

model raw output as a dict in the following format:
{'output_layer_name_1': raw_result_1, 'output_layer_name_2': raw_result_2, …}

Return type:

dict

get_rt_info(path)#

Returns an attribute stored in model info.

Parameters:

path (list[str]) – a sequence of tag names leading to the attribute.

Returns:

the value stored under the corresponding tag sequence.

Return type:

Any
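For example, reading a value from the ModelAPI metadata embedded by OTX; the tag path ["model_info", "model_type"] below is an assumption about how the metadata is organized:

    # The tag sequence is an illustrative assumption.
    model_type = adapter.get_rt_info(["model_info", "model_type"])
    print(model_type)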

infer_async(dict_data, callback_data)#

Performs asynchronous model inference and sets the callback for inference completion. The adapter also defines a get_raw_result() function, which handles the result of inference from the model.

Parameters:

  • dict_data – data submitted to the model for inference, in the following format:
    {'input_layer_name_1': data_1, 'input_layer_name_2': data_2, …}

  • callback_data – data for the callback, retrieved after model inference has finished

infer_sync(dict_data)#

Performs synchronous model inference. This is a blocking method.

Parameters:

dict_data – data submitted to the model for inference, in the following format:
{'input_layer_name_1': data_1, 'input_layer_name_2': data_2, …}

Returns:

model raw output as a dict in the following format:
{'output_layer_name_1': raw_result_1, 'output_layer_name_2': raw_result_2, …}

Return type:

dict
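A minimal sketch of a blocking inference call; the input layer name "image" and the array shape are assumptions, and in practice both come from get_input_layers():

    import numpy as np

    # Hypothetical preprocessed input tensor in NCHW layout.
    data = np.zeros((1, 3, 224, 224), dtype=np.float32)
    raw_outputs = adapter.infer_sync({"image": data})

    for layer_name, tensor in raw_outputs.items():
        print(layer_name, tensor.shape)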

is_ready()#

In case of asynchronous execution, checks whether input data can be submitted to the model for inference or all infer requests are busy.

Returns:

  • a boolean flag indicating whether input data can be submitted to the model for inference

load_model()#

Loads the model on the device.

reshape_model(new_shape)#

Not supported by the ONNX adapter.

save_model(path, weights_path=None, version=None)#

Serializes model to the filesystem.

Parameters:
  • path (str) – path to save the .onnx file.

  • weights_path (str | None) – not used by ONNX adapter.

  • version (str | None) – not used by ONNX adapter.
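For example (the output file name is illustrative):

    # weights_path and version are ignored by this adapter, as noted above.
    adapter.save_model("exported_model.onnx")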

set_callback(callback_fn)#

Sets the callback that collects the results of asynchronous inference.

Parameters:

callback_fn (Callable) – Callback function.

update_model_info(model_info)#

Updates the model with the provided model info. The model info dict can also contain nested dicts.

Parameters:

model_info (dict[str, Any]) – model info dict to write to the model.
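A hedged sketch; the keys and nesting below are illustrative assumptions rather than a documented schema:

    # Hypothetical metadata entries written into the model info.
    adapter.update_model_info({
        "model_info": {
            "confidence_threshold": 0.5,
            "labels": "cat dog",
        }
    })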

model_api.adapters.onnx_adapter.change_layout(image, layout)#

Changes the input image layout to fit the layout of the model input layer.

Parameters:

  • image (ndarray) – a single image as a 3D array in HWC layout

  • layout – the target layout of the model input layer

Returns:

  • the image with its layout aligned to the model layout
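An illustrative use of the helper, assuming the model layout can be passed as a string such as "NCHW" (the exact layout representation is an assumption):

    import numpy as np
    from model_api.adapters.onnx_adapter import change_layout

    hwc_image = np.zeros((480, 640, 3), dtype=np.uint8)  # single image in HWC layout
    converted = change_layout(hwc_image, "NCHW")  # assumed layout identifier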

model_api.adapters.onnx_adapter.get_shape_from_onnx(onnx_shape)#