OVMS Adapter#
- class model_api.adapters.ovms_adapter.OVMSAdapter(target_model)#
Bases:
InferenceAdapter
Class that allows working with models served by the OpenVINO Model Server.
The target_model string is expected in the format: <address>:<port>/models/<model_name>[:<model_version>]
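A minimal construction sketch, assuming a model named ssd_mobilenet_v1 is already served by an OVMS instance reachable at localhost:9000 (the address, port, and model name are placeholders):

```python
from model_api.adapters.ovms_adapter import OVMSAdapter

# Connect to a model served by OpenVINO Model Server over gRPC.
# "localhost:9000" and "ssd_mobilenet_v1" are illustrative values.
adapter = OVMSAdapter("localhost:9000/models/ssd_mobilenet_v1")

# A specific model version can be pinned with ":<model_version>", e.g.:
# adapter = OVMSAdapter("localhost:9000/models/ssd_mobilenet_v1:1")
```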
- await_all()#
In case of asynchronous execution, waits for the completion of all busy infer requests.
- await_any()#
In case of asynchronous execution, waits until any busy infer request completes and becomes available for data submission.
- embed_preprocessing(layout, resize_mode, interpolation_mode, target_shape, pad_value, dtype=<class 'type'>, brg2rgb=False, mean=None, scale=None, input_idx=0)#
Embeds preprocessing into the model using the OpenVINO preprocessing API.
- get_input_layers()#
- Gets the names of the model inputs and creates a Metadata structure for each one, containing the input shape, layout, precision in OpenVINO format, and optional meta information.
- Returns:
a dict containing the Metadata for all inputs
- get_model()#
Returns the reference to the GrpcClient.
- get_output_layers()#
- Gets the names of the model outputs and creates a Metadata structure for each one, containing the output shape, layout, precision in OpenVINO format, and optional meta information.
- Returns:
a dict containing the Metadata for all outputs
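A short usage sketch, assuming an adapter constructed as above; the attribute read from Metadata (shape) follows the description above and may differ by version:

```python
# Inspect the served model's inputs and outputs.
inputs = adapter.get_input_layers()    # dict: input name -> Metadata
outputs = adapter.get_output_layers()  # dict: output name -> Metadata

for name, meta in inputs.items():
    # The shape attribute is assumed from the Metadata description above.
    print(f"input  {name}: shape={meta.shape}")

for name in outputs:
    print(f"output {name}")
```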
- get_raw_result(infer_result)#
Gets raw results from the internal inference framework representation as a dict.
- Parameters:
infer_result – framework-specific result of inference from the model
- Returns:
model raw output in the following format: {'output_layer_name_1': raw_result_1, 'output_layer_name_2': raw_result_2, ...}
- Return type:
dict
- get_rt_info(path)#
Forwards to openvino.Model.get_rt_info(path).
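A brief sketch of reading runtime information; the ["model_info", "model_type"] path is only an example and depends on how the served model was exported:

```python
# Query a value from the model's rt_info; these keys may be absent for some models.
model_type = adapter.get_rt_info(["model_info", "model_type"])
print(model_type)
```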
- infer_async(dict_data, callback_data)#
Performs asynchronous model inference and sets the callback for inference completion. The adapter is also expected to define get_raw_result(), which handles the result of inference from the model.
- Parameters:
dict_data – data submitted to the model for inference, in the following format: {'input_layer_name_1': data_1, 'input_layer_name_2': data_2, ...}
callback_data – data for the callback, consumed after the model inference has finished
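A minimal asynchronous-inference sketch, assuming an adapter built as above and a single input named "input"; the callback signature, input name, and tensor shape are assumptions, not part of this API description:

```python
import numpy as np

results = {}

def on_done(request, userdata):
    # Invoked when an infer request completes; userdata is the callback_data
    # passed to infer_async. The (request, userdata) signature is an assumption
    # based on typical Model API usage.
    results[userdata] = adapter.get_raw_result(request)

adapter.set_callback(on_done)

for frame_id in range(4):
    # Placeholder input tensor; replace with real preprocessed data.
    frame = np.zeros((1, 224, 224, 3), dtype=np.float32)
    adapter.infer_async({"input": frame}, callback_data=frame_id)

# Block until every submitted request has finished.
adapter.await_all()
```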
- infer_sync(dict_data)#
Performs synchronous model inference. This is a blocking method.
- Parameters:
dict_data – data submitted to the model for inference, in the following format: {'input_layer_name_1': data_1, 'input_layer_name_2': data_2, ...}
- Returns:
model raw output in the following format: {'output_layer_name_1': raw_result_1, 'output_layer_name_2': raw_result_2, ...}
- Return type:
dict
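A synchronous-inference sketch under the same assumptions (single input named "input", placeholder tensor shape):

```python
import numpy as np

# Placeholder input; the name and shape must match the served model's inputs.
frame = np.zeros((1, 224, 224, 3), dtype=np.float32)

raw = adapter.infer_sync({"input": frame})

# raw maps output layer names to raw results.
for name, value in raw.items():
    print(name, getattr(value, "shape", type(value)))
```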
- is_ready()#
In case of asynchronous execution, checks whether input data can be submitted to the model for inference or whether all infer requests are busy.
- Returns:
a boolean flag indicating whether input data can be submitted to the model for inference
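A small sketch of throttling submissions with is_ready() and await_any(), reusing the asynchronous setup shown earlier:

```python
# Avoid oversubscribing: wait for a free infer request before submitting.
if not adapter.is_ready():
    adapter.await_any()

adapter.infer_async({"input": frame}, callback_data=frame_id)
```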
- load_model()#
Loads the model on the device.
- reshape_model(new_shape)#
Reshapes the model inputs to fit the new input shape.
- Parameters:
new_shape – a dictionary with input names as keys and new shape lists as values, in the following format: {'input_layer_name_1': [1, 128, 128, 3], 'input_layer_name_2': [1, 128, 128, 3], ...}
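A sketch of the new_shape argument format; the input name and dimensions are illustrative, and a remotely served model may not support reshaping:

```python
# Request a new static input shape; keys must match the served model's inputs.
adapter.reshape_model({"input": [1, 128, 128, 3]})
```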
- save_model(path, weights_path='', version='UNSPECIFIED')#
Serializes the model to the filesystem.
- set_callback(callback_fn)#
- update_model_info(model_info)#
Updates the model with the provided model info.