geti_sdk.benchmarking
Introduction
The benchmarking package contains the Benchmarker class, which provides methods for benchmarking models that are trained and deployed with Intel® Geti™. For example, benchmarking local inference rates can help in selecting the model architecture to use for your project, or in assessing the performance of the hardware available for inference.
Module contents
- class geti_sdk.benchmarking.benchmarker.Benchmarker(geti: Geti, project: Project, precision_levels: Sequence[str] | None = None, models: Sequence[Model] | None = None, algorithms: Sequence[str] | None = None, benchmark_images: Sequence[Image | ndarray | PathLike] | None = None, benchmark_video: Video | PathLike | None = None)
Bases: object
Initialize and manage benchmarking experiments to measure model throughput on different hardware.
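A minimal usage sketch is shown below. The server address, token, project name, algorithm names and video path are placeholders for illustration only; the project-retrieval call assumes the broader SDK client and is not part of this class.

    from geti_sdk import Geti
    from geti_sdk.benchmarking import Benchmarker

    # Placeholder connection details for an Intel Geti server (substitute your own)
    geti = Geti(host="https://your-geti-server.example.com", token="<your-personal-access-token>")

    # Assumes a project with this name exists on the server
    project = geti.get_project(project_name="demo-project")

    # Benchmark two candidate algorithms at two precision levels, using a local video
    # (algorithm names and precision levels are illustrative)
    benchmarker = Benchmarker(
        geti=geti,
        project=project,
        precision_levels=["FP16", "INT8"],
        algorithms=["MobileNetV2-ATSS", "SSD"],
        benchmark_video="data/benchmark_video.mp4",
    )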
- set_task_chain_models(models_task_1: Sequence[Model], models_task_2: Sequence[Model])
Set the models to be used in the benchmark for a task-chain project. The benchmarking will run for all possible combinations of models for task 1 and task 2.
- Parameters:
models_task_1 – Models to use for task #1
models_task_2 – Models to use for task #2
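A hedged sketch of the call, assuming benchmarker was created for a task-chain project and that detection_models and classification_models are sequences of Model objects retrieved elsewhere (these variable names are illustrative):

    # Both arguments are sequences of geti_sdk.data_models.Model, obtained elsewhere
    benchmarker.set_task_chain_models(
        models_task_1=detection_models,
        models_task_2=classification_models,
    )
    # Every (task 1 model, task 2 model) combination will be benchmarked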
- set_task_chain_algorithms(algorithms_task_1: Sequence[str], algorithms_task_2: Sequence[str])
Set the algorithms to be used in the benchmark for a task-chain project. The benchmarking will run for all possible combinations of algorithms for task 1 and task 2.
Note that upon benchmark initialization, the Benchmarker will check whether trained models are available for all specified algorithms.
- Parameters:
algorithms_task_1 – Algorithms to use for task #1
algorithms_task_2 – Algorithms to use for task #2
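For example (the algorithm names below are illustrative; use names of algorithms supported for the tasks in your project):

    benchmarker.set_task_chain_algorithms(
        algorithms_task_1=["MobileNetV2-ATSS", "SSD"],
        algorithms_task_2=["EfficientNet-B0"],
    )
    # Availability of trained models for each algorithm is verified when the benchmark is prepared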
- property algorithms: List[str] | List[List[str]]
Return the algorithm names to be used in the benchmark
- property optimized_models: List[OptimizedModel] | List[List[OptimizedModel]]
Return the optimized models to be used in deployments for the benchmark
- prepare_benchmark(working_directory: PathLike = '.')
Prepare the benchmarking experiments. This involves:
- Ensuring that all required models are available, i.e. that all specified algorithms have a trained model in the Geti project. If not, training jobs will be started and awaited.
- Ensuring that for each model, optimized models with the required quantization level are available. If not, optimization jobs will be started and awaited.
- Creating and downloading deployments for all models to benchmark.
- Parameters:
working_directory – Output directory to which the deployments for the benchmark will be saved.
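A minimal sketch, assuming the Benchmarker was configured as in the examples above; the directory name is arbitrary:

    # Trains and optimizes models where needed, then downloads all deployments
    benchmarker.prepare_benchmark(working_directory="benchmark_deployments")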
- initialize_from_folder(target_folder: PathLike = '.')
Initialize the Benchmarker from a folder containing deployments. This method checks each directory inside the target_folder for a valid deployment for the project assigned to this Benchmarker.
- Parameters:
target_folder – Directory containing the model deployments that should be used in the Benchmarking.
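For example, to reuse deployments downloaded in an earlier session instead of preparing them again (the directory name is illustrative):

    benchmarker.initialize_from_folder(target_folder="benchmark_deployments")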
- run_throughput_benchmark(working_directory: PathLike = '.', results_filename: str = 'results', target_device: str = 'CPU', frames: int = 200, repeats: int = 3) → List[Dict[str, str]]
Run the benchmark experiment.
- Parameters:
working_directory – Directory in which the deployments that should be benchmarked are stored. All output will be saved to this directory.
results_filename – Name of the file to which the results will be saved. The file extension should not be included; the results are always saved as a .csv file. Defaults to results, so the results will be written to results.csv within the working_directory.
target_device – Device to run the inference models on, for example “CPU” or “GPU”. Defaults to “CPU”.
frames – Number of frames/images to infer in order to calculate the FPS.
repeats – Number of times to repeat the benchmark runs. FPS will be averaged over the runs.
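A sketch of a typical run, reusing the working directory from prepare_benchmark (the values shown are the documented defaults):

    results = benchmarker.run_throughput_benchmark(
        working_directory="benchmark_deployments",
        results_filename="results",   # written as results.csv in the working directory
        target_device="CPU",
        frames=200,
        repeats=3,
    )
    # `results` is a list of dictionaries, one per benchmarked deployment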
- compare_predictions(working_directory: PathLike = '.', saved_image_name: str = 'comparison', target_device: str = 'CPU', image: ndarray | str | PathLike | None = None, include_online_prediction_for_active_model: bool = True, throughput_benchmark_results: List[Dict[str, str]] | PathLike | None = None) → ndarray
Perform visual comparison of predictions from different deployments.
- Parameters:
working_directory – Directory in which the deployments that should be benchmarked are stored. All output will be saved to this directory.
saved_image_name – Name of the file to which the comparison image will be saved. The file extension should not be included; the image is always saved as a .jpg file. Defaults to comparison, so the image will be written to comparison.jpg within the working_directory.
target_device – Device to run the inference models on, for example “CPU” or “GPU”. Defaults to “CPU”.
image – Image to use for the comparison. If no image is passed, the first of the benchmark images will be used.
include_online_prediction_for_active_model – Flag to include prediction from the active model on the platform side.
throughput_benchmark_results – Results from a throughput benchmark run. If this is passed, the captions for the images will contain the benchmark results.
- Returns:
Image containing the visual comparison, in the form of a NumPy array.
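A hedged sketch that feeds the results from run_throughput_benchmark into the comparison so that benchmark figures appear in the captions; inspecting the returned array's shape is just one way to use the output:

    comparison = benchmarker.compare_predictions(
        working_directory="benchmark_deployments",
        saved_image_name="comparison",          # written as comparison.jpg in the working directory
        target_device="CPU",
        throughput_benchmark_results=results,   # optional: adds benchmark figures to the captions
    )
    # The comparison is also returned as a NumPy array for further processing
    print(f"Comparison image shape: {comparison.shape}")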