OpenVINO™ Inference Interpreter#
This section describes the interpreter samples used to parse OpenVINO™ inference outputs. It covers the code under datumaro/plugins/openvino_plugin.
Models supported from interpreter samples#
There are detection and image classification examples:
Detection (SSD-based)
  - Intel Pre-trained Models > Object Detection
  - Public Pre-Trained Models (OMZ) > Object Detection
Image Classification
  - Public Pre-Trained Models (OMZ) > Classification
You can find more OpenVINO™ Trained Models here. To run inference with OpenVINO™, the model must be in the Intermediate Representation (IR) format. To convert Caffe, TensorFlow, MXNet, Kaldi, or ONNX models to IR, please see the Model Conversion Instruction.
You need to implement your own interpreter sample to support other OpenVINO™ Trained Models; a minimal sketch of such a script is shown below.
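An interpreter is a Python script that the OpenVINO™ launcher loads and calls: process_outputs(inputs, outputs) converts the raw network outputs into Datumaro annotations, and get_categories() declares the label map, mirroring the bundled samples. The following is a minimal, hypothetical classification interpreter, not one of the shipped samples: my_labels is a placeholder for your model's class names, and the import path follows recent Datumaro releases (older versions exposed these classes from datumaro.components.extractor).

# my_model_interp.py - a minimal sketch of a custom interpreter script
from datumaro.components.annotation import AnnotationType, Label, LabelCategories

# Placeholder class names - replace with your model's labels
my_labels = ["cat", "dog"]

def process_outputs(inputs, outputs):
    # inputs: batch of input images; outputs: raw network outputs.
    # Return one list of annotations per input item.
    results = []
    for _input, output in zip(inputs, outputs):
        confs = output.flatten()
        label = int(confs.argmax())
        results.append([Label(label=label, attributes={"score": float(confs[label])})])
    return results

def get_categories():
    # Declare the label map used by the annotations above.
    label_categories = LabelCategories()
    for name in my_labels:
        label_categories.add(name)
    return {AnnotationType.label: label_categories}

Pass such a script with the -i option when adding the model (as in the examples below), together with the IR -d/-w files.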
Model download#
Prerequisites:
OpenVINO™ (To install OpenVINO™, please see the OpenVINO™ Installation Instruction)
OpenVINO™ models (To download OpenVINO™ models, please see the Model Downloader Instruction)
PASCAL VOC 2012 dataset (To download the VOC 2012 dataset, please go to VOC2012 download)
Open Model Zoo models can be downloaded with the Model Downloader tool from the OpenVINO™ distribution:
cd <openvino_dir>/deployment_tools/open_model_zoo/tools/downloader
./downloader.py --name <model_name>
Example: download the “face-detection-0200” model
cd /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader
./downloader.py --name face-detection-0200
Model inference#
Prerequisites:
OpenVINO™ (To install OpenVINO™, please see the OpenVINO™ Installation Instruction)
Datumaro (To install Datumaro, please see the user manual)
OpenVINO™ models (To download OpenVINO™ models, please see the Model Downloader Instruction)
PASCAL VOC 2012 dataset (To download the VOC 2012 dataset, please go to VOC2012 download)
Examples#
To run inference with OpenVINO™ models and the interpreter samples, follow the instructions below:
source <openvino_dir>/bin/setupvars.sh
datum project create -o <proj_dir>
datum model add -l <launcher> -p <proj_dir> --copy -- \
-d <path/to/xml> -w <path/to/bin> -i <path/to/interpreter/script>
datum project import -p <proj_dir> -f <format> <path_to_dataset>
datum model run -p <proj_dir> -m model-0
Detection: ssd_mobilenet_v2_coco#
source /opt/intel/openvino/bin/setupvars.sh
cd datumaro/plugins/openvino_plugin
datum project create -o proj
datum model add -l openvino -p proj --copy -- \
--output-layers=do_ExpandDims_conf/sigmoid \
-d model/ssd_mobilenet_v2_coco.xml \
-w model/ssd_mobilenet_v2_coco.bin \
-i samples/ssd_mobilenet_coco_detection_interp.py
datum project import -p proj -f voc VOCdevkit/
datum model run -p proj -m model-0
Classification: mobilenet-v2-pytorch#
source /opt/intel/openvino/bin/setupvars.sh
cd datumaro/plugins/openvino_plugin
datum project create -o proj
datum model add -l openvino -p proj --copy -- \
-d model/mobilenet-v2-pytorch.xml \
-w model/mobilenet-v2-pytorch.bin \
-i samples/mobilenet_v2_pytorch_interp.py
datum project import -p proj -f voc VOCdevkit/
datum model run -p proj -m model-0