geti_sdk.rest_clients

Introduction

The rest_clients package contains clients for interacting with the various entities (such as Project, Image and Model) on the Intel® Geti™ server.

The rest clients are initialized with a GetiSession and a workspace id. The ProjectClient can be initialized with just that, while all other clients are initialized per project and thus take an additional project argument.

For example, to initialize the ImageClient for a specific project and get a list of all images in the project, the following code snippet can be used:

from geti_sdk import Geti
from geti_sdk.rest_clients import ProjectClient, ImageClient

geti = Geti(
    host="https://0.0.0.0", username="dummy_user", password="dummy_password"
)

project_client = ProjectClient(
    session=geti.session, workspace_id=geti.workspace_id
)
project = project_client.get_project_by_name(project_name='dummy_project')

image_client = ImageClient(
    session=geti.session, workspace_id=geti.workspace_id, project=project
)
image_client.get_all_images()

Module contents

class geti_sdk.rest_clients.project_client.project_client.ProjectClient(session: GetiSession, workspace_id: str)

Class to manipulate projects on the Intel® Geti™ server, within a certain workspace.

get_all_projects(request_page_size: int = 50, get_project_details: bool = True) List[Project]

Return a list of projects found on the Intel® Geti™ server

Parameters:
  • request_page_size – Max number of projects to fetch in a single HTTP request. Higher values may reduce the response time of this method when there are many projects, but increase the chance of timeout.

  • get_project_details – True to get all details of the projects on the Intel® Geti™ server, False to fetch only a summary of each project. Set this to False if minimizing latency is a concern. Defaults to True

Returns:

List of Project objects, containing the project information for each project on the Intel® Geti™ server

get_project_by_name(project_name: str) Project | None

Get a project from the Intel® Geti™ server by project_name.

If multiple projects with the same name exist on the server, this method will raise a ValueError. In that case, please use the ProjectClient.get_project() method and provide a project_id to uniquely identify the project.

Parameters:

project_name – Name of the project to get

Raises:

ValueError in case multiple projects with the specified name exist on the server, and no project_id is provided in order to allow unique identification of the project.

Returns:

Project object containing the data of the project, if the project is found on the server. Returns None if the project doesn’t exist.

get_or_create_project(project_name: str, project_type: str, labels: List[List[str] | List[Dict[str, Any]]]) Project

Create a new project with name project_name on the cluster, or retrieve the data for an existing project with project_name if it exists.

Parameters:
  • project_name – Name of the project

  • project_type – Type of the project

  • labels – Nested list of labels

Returns:

Project object representing the project, either newly created or retrieved from the server

create_project(project_name: str, project_type: str, labels: List[List[str] | List[Dict[str, Any]]]) Project

Create a new project with name project_name on the cluster, containing tasks according to the project_type specified. Labels for each task are specified in the labels parameter, which should be a nested list (each entry in the outermost list corresponds to the labels for one of the tasks in the project pipeline)

Parameters:
  • project_name – Name of the project

  • project_type – Type of the project

  • labels – Nested list of labels

Raises:

ValueError – If a project with name project_name already exists in the workspace

Returns:

Project object, as created on the cluster

download_project_info(project: Project, path_to_folder: str) None

Download the project data that can be used for project creation on the Intel® Geti™ server. The data is retrieved from the cluster and saved in the target folder path_to_folder; from this data, the method ProjectClient.get_or_create_project can re-create the project on the Intel® Geti™ server.

Parameters:
  • project – Project to download the data for

  • path_to_folder – Target folder to save the project data to. Data will be saved as a .json file named “project.json”

Raises:

ValueError – If the project with project_name is not found on the cluster

create_project_from_folder(path_to_folder: str, project_name: str | None = None) Project

Look for a project.json file in the folder at path_to_folder, and create a project using the parameters provided in this file.

Parameters:
  • path_to_folder – Folder holding the project data

  • project_name – Optional name of the project. If not specified, the project name found in the project configuration in the upload folder will be used.

Returns:

Project as created on the cluster

list_projects() List[Project]

Print an overview of all projects that currently exist on the cluster, in the workspace managed by the ProjectClient

NOTE: While this method also returns a list of all the projects, it is primarily meant to be used in an interactive environment, such as a Jupyter Notebook.

Returns:

List of all Projects on the cluster. The returned list is the same as the list returned by the get_all_projects method

delete_project(project: Project, requires_confirmation: bool = True) None

Delete a project.

By default, this method will ask for user confirmation before deleting the project. This can be overridden by passing requires_confirmation = False.

Parameters:
  • project – Project to delete

  • requires_confirmation – True to ask for user confirmation before deleting the project, False to delete without confirmation. Defaults to True

add_labels(labels: List[str] | List[Dict[str, Any]], project: Project, task: Task | None = None, revisit_affected_annotations: bool = False) Project

Add the labels to the project labels. For a project with multiple tasks, the task parameter can be used to specify the task for which the labels should be added.

Parameters:
  • labels – List of labels to add. Can either be a list of strings representing label names, or a list of dictionaries representing label properties

  • project – Project to which the labels should be added

  • task – Optional Task to add the labels for. Can be left as None for a single task project, but is required for a task chain project

  • revisit_affected_annotations – True to make sure that the server will assign a to_revisit status to all annotations linked to the label(s) that are added. False to not revisit any potentially linked annotations.

Returns:

Updated Project instance with the new labels added to it

get_project_by_id(project_id: str) Project | None

Get a project from the Intel® Geti™ server by project_id.

Parameters:

project_id – ID of the project to get

Returns:

Project object containing the data of the project, if the project is found on the server. Returns None if the project doesn’t exist

get_project(project_name: str | None = None, project_id: str | None = None, project: Project | None = None) Project | None

Get a project from the Intel® Geti™ server by project_name or project_id, or update a provided Project object with the latest data from the server.

Parameters:
  • project_name – Name of the project to get

  • project_id – ID of the project to get

  • project – Project object to update with the latest data from the server

Returns:

Project object containing the data of the project, if the project is found on the server. Returns None if the project doesn’t exist

class geti_sdk.rest_clients.dataset_client.DatasetClient(workspace_id: str, project: Project, session: GetiSession)

Class to manage datasets for a certain Intel® Geti™ project.

create_dataset(name: str) Dataset

Create a new dataset named name inside the project.

Parameters:

name – Name of the dataset to create

Returns:

The created dataset

delete_dataset(dataset: Dataset) None

Delete provided dataset inside the project.

Parameters:

dataset – Dataset to delete

get_all_datasets() List[Dataset]

Query the Intel® Geti™ server to retrieve an up to date list of datasets in the project.

Returns:

List of current datasets in the project

get_dataset_statistics(dataset: Dataset) dict

Retrieve the media and annotation statistics for a particular dataset

get_dataset_by_name(dataset_name: str) Dataset

Retrieve a dataset by name

Parameters:

dataset_name – Name of the dataset to retrieve

Returns:

Dataset object

has_dataset_subfolders(path_to_folder: str) bool

Check if a project folder has its media folders organized according to the datasets in the project

Parameters:

path_to_folder – Path to the project folder to check

Returns:

True if the media folders in the project folder tree are organized according to the datasets in the project.

get_training_dataset_summary(model: Model) TrainingDatasetStatistics

Return information concerning the training dataset for the model. This includes the number of images and video frames, and the statistics for the subset splitting (i.e. the number of training, test and validation images/video frames)

Parameters:

model – Model to get the training dataset for

Returns:

A TrainingDatasetStatistics object, containing the training dataset statistics for the model

get_media_in_training_dataset(model: Model, subset: str = 'training') Subset

Return the media in the training dataset for the model, for the specified subset. Subset can be training, validation or testing.

Parameters:
  • model – Model for which to get the media in the training dataset

  • subset – The subset for which to return the media items. Can be either training (the default), validation or testing

Returns:

A Subset object, containing lists of images and video_frames in the requested subset

Raises:

ValueError if the DatasetClient is unable to fetch the required dataset information from the model

class geti_sdk.rest_clients.media_client.image_client.ImageClient(session: GetiSession, workspace_id: str, project: Project)

Class to manage image uploads and downloads for a certain project.

get_all_images(dataset: Dataset | None = None) MediaList[Image]

Get the IDs and filenames of all images in the project, from a specific dataset. If no dataset is passed, images from the training dataset will be returned

Parameters:

dataset – Dataset for which to retrieve the images. If no dataset is passed, images from the training dataset are returned.

Returns:

MediaList containing all Image entities in the dataset

upload_image(image: ndarray | str | PathLike, dataset: Dataset | None = None) Image

Upload an image file to the server.

Parameters:
  • image – full path to the image on disk, or numpy array representing the image

  • dataset – Dataset to which to upload the image. If no dataset is passed, the image is uploaded to the training dataset

Returns:

Image object representing the uploaded image on the server

upload_folder(path_to_folder: str, n_images: int = -1, skip_if_filename_exists: bool = False, dataset: Dataset | None = None, max_threads: int = 5) MediaList[Image]

Upload all images in a folder to the project. Returns a MediaList containing all images in the project after upload.

Parameters:
  • path_to_folder – Folder with images to upload

  • n_images – Number of images to upload from folder

  • skip_if_filename_exists – Set to True to skip uploading of an image if an image with the same filename already exists in the project. Defaults to False

  • dataset – Dataset to which to upload the images. If no dataset is passed, the images are uploaded to the training dataset

  • max_threads – Maximum number of threads to use for uploading. Defaults to 5. Set to -1 to use all available threads.

Returns:

MediaList containing all images in the project

download_all(path_to_folder: str, append_image_uid: bool = False, max_threads: int = 10, dataset: Dataset | None = None) None

Download all images in a project or a dataset to a folder on the local disk.

Parameters:
  • path_to_folder – path to the folder in which the images should be saved

  • append_image_uid – True to append the UID of an image to the filename (separated from the original filename by an underscore, i.e. ‘{filename}_{image_id}’). If there are images in the project/dataset with duplicate filenames, this must be set to True to ensure all images are downloaded. Otherwise, images with the same name will be skipped.

  • max_threads – Maximum number of threads to use for downloading. Defaults to 10. Set to -1 to use all available threads.

  • dataset – Dataset from which to download the images. If no dataset is provided, images from all datasets are downloaded.

upload_from_list(path_to_folder: str, image_names: List[str], extension_included: bool = False, n_images: int = -1, skip_if_filename_exists: bool = False, image_names_as_full_paths: bool = False, dataset: Dataset | None = None, max_threads: int = 5) MediaList[Image]

From a folder containing images path_to_folder, this method uploads only those images that have their filenames included in the image_names list.

Parameters:
  • path_to_folder – Folder containing the images

  • image_names – List of names of the images that should be uploaded

  • extension_included – Set to True if the extension of the image is included in the name, for each image in the image_names list. Defaults to False

  • n_images – Number of images to upload from the list

  • skip_if_filename_exists – Set to True to skip uploading of an image if an image with the same filename already exists in the project. Defaults to False

  • image_names_as_full_paths – Set to True if the list of image_names contains full paths to the images, rather than just the filenames

  • dataset – Dataset to which to upload the images. If no dataset is passed, the images are uploaded to the training dataset

  • max_threads – Maximum number of threads to use for uploading images. Defaults to 5. Set to -1 to use all available threads.

Returns:

List of images that were uploaded

delete_images(images: Sequence[Image]) bool

Delete all Image entities in images from the project.

Parameters:

images – List of Image entities to delete

Returns:

True if all images on the list were deleted successfully, False otherwise

class geti_sdk.rest_clients.media_client.video_client.VideoClient(session: GetiSession, workspace_id: str, project: Project)

Class to manage video uploads and downloads for a certain project

get_all_videos(dataset: Dataset | None = None) MediaList[Video]

Get the IDs and filenames of all videos in the project, from a specific dataset. If no dataset is passed, videos from the training dataset will be returned

Parameters:

dataset – Dataset for which to retrieve the videos. If no dataset is passed, videos from the training dataset are returned.

Returns:

MediaList containing all Video entities in the dataset

upload_video(video: ndarray | str | PathLike, dataset: Dataset | None = None) Video

Upload a video file to the server. Accepts either a path to a video file, or a numpy array containing pixel data for video frames.

In case a numpy array is passed, this method expects the array to be 4-dimensional, with its dimensions shaped as [frames, height, width, channels]. The framerate of the created video will be set to 1 fps.

Parameters:
  • video – full path to the video on disk, or numpy array holding the video pixel data

  • dataset – Dataset to which to upload the video. If no dataset is passed, the video is uploaded to the training dataset

Returns:

Video object representing the uploaded video on the server

upload_folder(path_to_folder: str, n_videos: int = -1, skip_if_filename_exists: bool = False, dataset: Dataset | None = None, max_threads: int = 5) MediaList[Video]

Upload all videos in a folder to the project. Returns a MediaList containing all videos in the project after upload.

Parameters:
  • path_to_folder – Folder with videos to upload

  • n_videos – Number of videos to upload from folder

  • skip_if_filename_exists – Set to True to skip uploading of a video if a video with the same filename already exists in the project. Defaults to False

  • dataset – Dataset to which to upload the video. If no dataset is passed, the video is uploaded to the training dataset

  • max_threads – Maximum number of threads to use for uploading. Defaults to 5. Set to -1 to use all available threads.

Returns:

MediaList containing all videos in the project

download_all(path_to_folder: str, append_video_uid: bool = False, max_threads: int = 10, dataset: Dataset | None = None) None

Download all videos in a project to a folder on the local disk.

Parameters:
  • path_to_folder – path to the folder in which the videos should be saved

  • append_video_uid – True to append the UID of a video to the filename (separated from the original filename by an underscore, i.e. ‘{filename}_{video_id}’). If there are videos in the project with duplicate filenames, this must be set to True to ensure all videos are downloaded. Otherwise, videos with the same name will be skipped.

  • max_threads – Maximum number of threads to use for downloading. Defaults to 10. Set to -1 to use all available threads.

  • dataset – Dataset from which to download the videos. If no dataset is passed, videos from all datasets are downloaded.

delete_videos(videos: Sequence[Video]) bool

Delete all Video entities in videos from the project.

Parameters:

videos – List of Video entities to delete

Returns:

True if all videos on the list were deleted successfully, False otherwise

class geti_sdk.rest_clients.annotation_clients.annotation_client.AnnotationClient(session: GetiSession, workspace_id: str, project: Project, annotation_reader: AnnotationReaderType | None = None)

Class to upload or download annotations for images or videos in an existing project.

get_latest_annotations_for_video(video: Video) List[AnnotationScene]

Retrieve all latest annotations for a video from the cluster.

If the video does not have any annotations yet, this method returns an empty list

Parameters:

video – Video to get the annotations for

Returns:

List of AnnotationScene objects, where each entry corresponds to an AnnotationScene for a single frame in the video

upload_annotations_for_video(video: Video, append_annotations: bool = False, max_threads: int = 5)

Upload annotations for a video. If append_annotations is set to True, annotations will be appended to the existing annotations for the video in the project. If set to False, existing annotations will be overwritten.

Parameters:
  • video – Video to upload annotations for

  • append_annotations – True to append annotations from the local disk to the existing annotations on the server, False to overwrite the server annotations by those on the local disk.

  • max_threads – Maximum number of threads to use for uploading. Defaults to 5. Set to -1 to use all available threads.

upload_annotations_for_videos(videos: Sequence[Video], append_annotations: bool = False, max_threads: int = 5)

Upload annotations for a list of videos. If append_annotations is set to True, annotations will be appended to the existing annotations for the video in the project. If set to False, existing annotations will be overwritten.

Parameters:
  • videos – List of videos to upload annotations for

  • append_annotations – True to append annotations from the local disk to the existing annotations on the server, False to overwrite the server annotations by those on the local disk.

  • max_threads – Maximum number of threads to use for uploading. Defaults to 5. Set to -1 to use all available threads.

upload_annotations_for_images(images: Sequence[Image], append_annotations: bool = False, max_threads: int = 5)

Upload annotations for a list of images. If append_annotations is set to True, annotations will be appended to the existing annotations for the image in the project. If set to False, existing annotations will be overwritten.

Parameters:
  • images – List of images to upload annotations for

  • append_annotations – True to append annotations from the local disk to the existing annotations on the server, False to overwrite the server annotations by those on the local disk.

  • max_threads – Maximum number of threads to use for uploading. Defaults to 5. Set to -1 to use all available threads.

download_annotations_for_video(video: Video, path_to_folder: str, append_video_uid: bool = False, max_threads: int = 10) float

Download video annotations from the server to a target folder on disk.

Parameters:
  • video – Video for which to download the annotations

  • path_to_folder – Folder to save the annotations to

  • append_video_uid – True to append the UID of the video to the annotation filename (separated from the original filename by an underscore, i.e. ‘{filename}_{media_id}’). This can be useful if the project contains videos with duplicate filenames. If left as False, the video filename and frame index for the annotation are used as filename for the downloaded annotation.

  • max_threads – Maximum number of threads to use for downloading. Defaults to 10. Set to -1 to use all available threads.

Returns:

Time elapsed to download the annotations, in seconds

download_annotations_for_images(images: MediaList[Image], path_to_folder: str, append_image_uid: bool = False, max_threads: int = 10) float

Download image annotations from the server to a target folder on disk.

Parameters:
  • images – List of images for which to download the annotations

  • path_to_folder – Folder to save the annotations to

  • append_image_uid – True to append the UID of the image to the annotation filename (separated from the original filename by an underscore, i.e. ‘{filename}_{media_id}’). This can be useful if the project contains images with duplicate filenames. If left as False, the image filename is used as filename for the downloaded annotation as well.

  • max_threads – Maximum number of threads to use for downloading. Defaults to 10. Set to -1 to use all available threads.

Returns:

Time elapsed to download the annotations, in seconds

download_annotations_for_videos(videos: MediaList[Video], path_to_folder: str, append_video_uid: bool = False, max_threads: int = 10) float

Download annotations for a list of videos from the server to a target folder on disk.

Parameters:
  • videos – List of videos for which to download the annotations

  • path_to_folder – Folder to save the annotations to

  • append_video_uid – True to append the UID of the video to the annotation filename (separated from the original filename by an underscore, i.e. ‘{filename}_{media_id}’). This can be useful if the project contains videos with duplicate filenames. If left as False, the video filename and frame index for the annotation are used as filename for the downloaded annotation.

  • max_threads – Maximum number of threads to use for downloading. Defaults to 10. Set to -1 to use all available threads.

Returns:

Time elapsed to download the annotations, in seconds

download_all_annotations(path_to_folder: str, max_threads: int = 10) None

Download all annotations for the project to a target folder on disk.

Parameters:
  • path_to_folder – Folder to save the annotations to

  • max_threads – Maximum number of threads to use for downloading. Defaults to 10. Set to -1 to use all available threads.

upload_annotations_for_all_media(append_annotations: bool = False, max_threads: int = 5)

Upload annotations for all media in the project. If append_annotations is set to True, annotations will be appended to the existing annotations for the media on the server. If set to False, existing annotations will be overwritten.

Parameters:
  • append_annotations – True to append annotations from the local disk to the existing annotations on the server, False to overwrite the server annotations by those on the local disk. Defaults to False.

  • max_threads – Maximum number of threads to use for uploading. Defaults to 5. Set to -1 to use all available threads.

upload_annotation(media_item: Image | VideoFrame, annotation_scene: AnnotationScene) AnnotationScene

Upload an annotation for an image or video frame to the Intel® Geti™ server.

Parameters:
  • media_item – Image or VideoFrame to apply and upload the annotation to

  • annotation_scene – AnnotationScene to upload

Returns:

The uploaded annotation

get_annotation(media_item: Image | VideoFrame) AnnotationScene | None

Retrieve the latest annotations for an image or video frame from the Intel® Geti™ platform. If no annotation is available, this method returns None.

Parameters:

media_item – Image or VideoFrame to retrieve the annotations for

Returns:

AnnotationScene instance containing the latest annotation data

class geti_sdk.rest_clients.configuration_client.ConfigurationClient(workspace_id: str, project: Project, session: GetiSession)

Class to manage configuration for a certain project.

get_task_configuration(task_id: str, algorithm_name: str | None = None) TaskConfiguration

Get the configuration for the task with id task_id.

Parameters:
  • task_id – ID of the task to get configurations for

  • algorithm_name – Optional name of the algorithm to get configuration for. If an algorithm name is passed, the returned TaskConfiguration will contain only the hyper parameters for that algorithm, and won’t hold any component parameters

Returns:

TaskConfiguration holding all component parameters and hyper parameters for the task

get_global_configuration() GlobalConfiguration

Get the project-wide configurable parameters.

Returns:

GlobalConfiguration instance holding the configurable parameters for all project-wide components

set_project_auto_train(auto_train: bool = False) None

Set the auto_train parameter for all tasks in the project.

Parameters:

auto_train – True to enable auto_training, False to disable

set_project_num_iterations(value: int = 50)

Set the number of iterations to train for each task in the project.

Parameters:

value – Number of iterations to set

set_project_parameter(parameter_name: str, value: bool | str | float | int, parameter_group_name: str | None = None)

Set the value for a parameter with parameter_name that lives in the group parameter_group_name. The parameter is set for all tasks in the project

The parameter_group_name can be left as None, in that case this method will attempt to determine the appropriate parameter group automatically.

Parameters:
  • parameter_name – Name of the parameter

  • parameter_group_name – Optional name of the parameter group name to which the parameter belongs. If left as None (the default), this method will attempt to determine the correct parameter group automatically, if needed.

  • value – Value to set for the parameter

get_full_configuration() FullConfiguration

Return the full configuration for a project (for both global and task_chain).

Returns:

FullConfiguration object holding the global and task chain configuration

get_for_task_and_algorithm(task: Task, algorithm: Algorithm)

Get the hyper parameters for a specific task and algorithm.

Parameters:
  • task – Task to get hyper parameters for

  • algorithm – Algorithm to get hyper parameters for

Returns:

TaskConfiguration holding only the model hyper parameters for the specified algorithm

download_configuration(path_to_folder: str) FullConfiguration

Retrieve the full configuration for a project from the cluster and save it to a file configuration.json in the folder specified at path_to_folder.

Parameters:

path_to_folder – Folder to save the configuration to

Returns:

FullConfiguration object holding the configuration that was retrieved and saved

apply_from_object(configuration: FullConfiguration) FullConfiguration | None

Attempt to apply the configuration values passed in as configuration to the project managed by this instance of the ConfigurationClient.

Parameters:

configuration – FullConfiguration to be applied

Returns:

FullConfiguration as applied to the project if successful, None otherwise

apply_from_file(path_to_folder: str, filename: str | None = None) FullConfiguration | None

Attempt to apply a configuration from a file on disk. The parameter path_to_folder is mandatory and should point to the folder in which the configuration file to upload lives. The parameter filename is optional, when left as None this method will look for a file configuration.json in the specified folder.

Parameters:
  • path_to_folder – Path to the folder in which the configuration file to apply lives

  • filename – Optional filename for the configuration file to apply

Returns:

FullConfiguration as applied to the project if successful, None otherwise

set_configuration(configuration: FullConfiguration | GlobalConfiguration | TaskConfiguration)

Set the configuration for the project. This method accepts either a FullConfiguration, TaskConfiguration or GlobalConfiguration object

Parameters:

configuration – Configuration to set

get_for_model(task_id: str, model_id: str) TaskConfiguration

Get the hyper parameters for the model with id model_id. Note that the model has to be trained within the task with id task_id in order for the parameters to be retrieved successfully.

Parameters:
  • task_id – ID of the task to get configurations for

  • model_id – ID of the model to get the hyper parameters for

Returns:

TaskConfiguration holding all hyper parameters for the model

class geti_sdk.rest_clients.prediction_client.PredictionClient(session: GetiSession, project: Project, workspace_id: str)

Class to download predictions from an existing Intel® Geti™ project.

property ready_to_predict

Return True if the project is ready to yield predictions, False otherwise.

property mode: PredictionMode

Return the current mode used to retrieve predictions. There are three options:
  • auto

  • latest

  • online

The auto mode will fetch the prediction from the database if it is up to date, and otherwise send an inference request. The online mode will always send an inference request, while latest will never send an inference request but instead grab the latest result from the database.

By default, the mode is set to auto.

Returns:

Current PredictionMode used to retrieve predictions

get_image_prediction(image: Image) Prediction

Get a prediction for an image from the Intel® Geti™ server, if available.

NOTE: This method is only available for images that already exist on the server! For getting predictions on a ‘new’ image, please see the PredictionClient.predict_image method

Parameters:

image – Image to get the prediction for. The image has to be present in the project on the cluster already.

Returns:

Prediction for the image

get_video_frame_prediction(video_frame: VideoFrame) Prediction

Get a prediction for a video frame from the Intel® Geti™ server, if available.

Parameters:

video_frame – VideoFrame to get the prediction for. The frame has to be present in the project on the cluster already.

Returns:

Prediction for the video frame

get_video_predictions(video: Video) List[Prediction]

Get a list of predictions for a video from the Intel® Geti™ server, if available.

Parameters:

video – Video to get the predictions for. The video has to be present in the project on the cluster already.

Returns:

List of Predictions for the video

download_predictions_for_images(images: MediaList[Image], path_to_folder: str, include_result_media: bool = True) float

Download image predictions from the server to a target folder on disk.

Parameters:
  • images – List of images for which to download the predictions

  • path_to_folder – Folder to save the predictions to

  • include_result_media – True to also download the result media belonging to the predictions, if any. False to skip downloading result media

Returns:

Time elapsed to download the predictions, in seconds

download_predictions_for_videos(videos: MediaList[Video], path_to_folder: str, include_result_media: bool = True, inferred_frames_only: bool = True, frame_stride: int | None = None) float

Download predictions for a list of videos from the server to a target folder on disk.

Parameters:
  • videos – List of videos for which to download the predictions

  • path_to_folder – Folder to save the predictions to

  • include_result_media – True to also download the result media belonging to the predictions, if any. False to skip downloading result media

  • inferred_frames_only – True to only download frames that already have a prediction, False to run inference on the full video for all videos in the list. WARNING: Setting this to False may cause the download to take a long time!

  • frame_stride – Optional frame stride to use when generating predictions. This is only used when inferred_frames_only = False. If left unspecified, the frame_stride is deduced from the video

Returns:

Time elapsed to download the predictions, in seconds
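
A sketch of downloading predictions for all videos in a project (host, credentials, project name and target folder are placeholders), keeping inferred_frames_only at its default to avoid triggering inference on full videos:

```python
from geti_sdk import Geti
from geti_sdk.rest_clients import PredictionClient, ProjectClient, VideoClient

geti = Geti(
    host="https://0.0.0.0", username="dummy_user", password="dummy_password"
)
project = ProjectClient(
    session=geti.session, workspace_id=geti.workspace_id
).get_project_by_name(project_name="dummy_project")

video_client = VideoClient(
    session=geti.session, workspace_id=geti.workspace_id, project=project
)
prediction_client = PredictionClient(
    session=geti.session, workspace_id=geti.workspace_id, project=project
)

videos = video_client.get_all_videos()
# Only download predictions for frames that were already inferred; setting
# inferred_frames_only=False would run inference on the full videos instead
elapsed = prediction_client.download_predictions_for_videos(
    videos=videos,
    path_to_folder="dummy_folder",
    inferred_frames_only=True,
)
```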

download_predictions_for_video(video: Video, path_to_folder: str, include_result_media: bool = True, inferred_frames_only: bool = True, frame_stride: int | None = None) float

Download video predictions from the server to a target folder on disk.

Parameters:
  • video – Video for which to download the predictions

  • path_to_folder – Folder to save the predictions to

  • include_result_media – True to also download the result media belonging to the predictions, if any. False to skip downloading result media

  • inferred_frames_only – True to only download frames that already have a prediction, False to run inference on the full video. WARNING: Setting this to False may cause the download to take a long time!

  • frame_stride – Optional frame stride to use when generating predictions. This is only used when inferred_frames_only = False. If left unspecified, the frame_stride is deduced from the video

Returns:

Time elapsed to download the predictions, in seconds

predict_image(image: Image | ndarray | PathLike | str) Prediction

Push an image to the Intel® Geti™ project and receive a prediction for it.

Note that this method will not save the image to the project.

Parameters:

image – Image object, filepath to an image or numpy array containing an image to get the prediction for

Returns:

Prediction for the image
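
For an image that is not yet on the server, predict_image accepts a filepath, an Image object or a numpy array directly. A minimal sketch (host, credentials, project name and image path are placeholders):

```python
from geti_sdk import Geti
from geti_sdk.rest_clients import PredictionClient, ProjectClient

geti = Geti(
    host="https://0.0.0.0", username="dummy_user", password="dummy_password"
)
project = ProjectClient(
    session=geti.session, workspace_id=geti.workspace_id
).get_project_by_name(project_name="dummy_project")
prediction_client = PredictionClient(
    session=geti.session, workspace_id=geti.workspace_id, project=project
)

# The image is pushed for inference only; it is NOT saved to the project
prediction = prediction_client.predict_image("dummy_image.jpg")
```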

class geti_sdk.rest_clients.model_client.ModelClient(workspace_id: str, project: Project, session: GetiSession)

Class to manage the models and model groups for a certain project

get_all_model_groups() List[ModelGroup]

Return a list of all model groups in the project.

Returns:

List of model groups in the project

get_latest_model_for_all_model_groups() List[Model]

Return the latest trained models for each model group in the project.

Returns:

List of models, one for each trained algorithm in the project.

get_model_group_by_algo_name(algorithm_name: str) ModelGroup | None

Return the model group for the algorithm named algorithm_name, if any. If no model group for this algorithm is found in the project, this method returns None

Parameters:

algorithm_name – Name of the algorithm

Returns:

ModelGroup instance corresponding to this algorithm

get_latest_model_by_algo_name(algorithm_name: str) Model | None

Return the latest model for a specific algorithm. If no model has been trained for the algorithm, this method returns None.

Parameters:

algorithm_name – Name of the algorithm for which to return the model

Returns:

Model object representing the model.

get_latest_optimized_model(algorithm_name: str, optimization_type: str = 'MO', precision: str = 'FP16', require_xai: bool = False) OptimizedModel

Return the optimized model for the latest trained model for a specified algorithm. Additional parameters allow filtering on the optimization type (e.g. ‘nncf’, ‘pot’, ‘mo’, ‘onnx’), precision (‘int8’, ‘fp16’, ‘fp32’) and whether or not the model includes an XAI head for saliency map generation.

If no optimized model for the specified criteria can be found, this method raises an error

Parameters:
  • algorithm_name – Name of the algorithm to retrieve the model for

  • optimization_type – Optimization type to select. Options are ‘mo’, ‘nncf’, ‘pot’, ‘onnx’. Case insensitive. Defaults to ‘MO’

  • precision – Model precision to select. Options are ‘INT8’, ‘FP16’, ‘FP32’. Defaults to ‘FP16’

  • require_xai – If True, only select models that include an XAI head. Defaults to False

Returns:

OptimizedModel object matching the specified criteria

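A sketch of retrieving an INT8, POT-optimized model (host, credentials, project name and algorithm name are placeholders; the method raises an error if no model matches the criteria):

```python
from geti_sdk import Geti
from geti_sdk.rest_clients import ModelClient, ProjectClient

geti = Geti(
    host="https://0.0.0.0", username="dummy_user", password="dummy_password"
)
project = ProjectClient(
    session=geti.session, workspace_id=geti.workspace_id
).get_project_by_name(project_name="dummy_project")
model_client = ModelClient(
    session=geti.session, workspace_id=geti.workspace_id, project=project
)

# 'SSD' is a placeholder algorithm name; optimization_type and precision
# are matched case-insensitively
optimized_model = model_client.get_latest_optimized_model(
    algorithm_name="SSD", optimization_type="pot", precision="INT8"
)
```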
get_model_by_algorithm_task_and_version(algorithm: Algorithm, version: int | None = None, task: Task | None = None) Model | None

Retrieve a Model from the Intel® Geti™ server, corresponding to a specific algorithm and model version. If no version is passed, this method will retrieve the latest model for the algorithm.

If no model for the algorithm is available in the project, this method returns None

Parameters:
  • algorithm – Algorithm for which to get the model

  • version – Version of the model to retrieve. If left as None, returns the latest version

  • task – Task for which to get the model. If left as None, this method searches for models for algorithm in all tasks in the project

Returns:

Model object corresponding to algorithm and version, for a specific task, if any. If no model is found by those parameters, this method returns None

update_model_detail(model: Model | ModelSummary) Model

Update the model such that its details are up to date. This includes updating the list of available optimized models for the model.

Parameters:

model – Model or ModelSummary object, representing the model to update

Returns:

Model object containing the up to date details of the model

set_active_model(model: Model | ModelSummary | None = None, algorithm: Algorithm | str | None = None) None

Set the model as the active model.

Parameters:
  • model – Model or ModelSummary object representing the model to set as active

  • algorithm – Algorithm or algorithm name for which to set the model as active

Raises:

ValueError – If neither model nor algorithm is specified, if the algorithm is not supported in the project, or if the active model could not be set

get_active_model_for_task(task: Task) Model | None

Return the Model details for the currently active model for a task, if any. If the task does not have any trained models, this method returns None

Parameters:

task – Task object containing details of the task to get the model for

Returns:

Model object representing the currently active model in the Intel® Geti™ project, if any

download_active_model_for_task(path_to_folder: str, task: Task) Model | None

Download the currently active model for the task. If the task does not have an active model yet, this method returns None

This method will create a directory ‘models’ in the path specified in path_to_folder

Parameters:
  • path_to_folder – Path to the target folder in which to save the active model, and all optimized models derived from it.

  • task – Task object containing details of the task to download the model for

Returns:

Model instance holding the details of the active model

get_all_active_models() List[Model | None]

Return the Model details for the active model for all tasks in the project, if the tasks have any.

This method returns a list of Models, where the index of the Model in the list corresponds to the index of the task in list of trainable tasks for the project.

If any of the tasks do not have a trained model, the entry corresponding to the index of that task will be None

Returns:

List of Model objects representing the currently active models for the tasks in the Intel® Geti™ project, if any

download_all_active_models(path_to_folder: str) List[Model | None]

Download the active models for all tasks in the project.

This method will create a directory ‘models’ in the path specified in path_to_folder

Parameters:

path_to_folder – Path to the target folder in which to save the active models, and all optimized models derived from them.

Returns:

List of Model objects representing the currently active models (if any) for all tasks in the Intel® Geti™ project. The index of the Model in the list corresponds to the index of the task in the list of trainable tasks for the project.
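
A sketch of downloading the active models for all tasks (host, credentials, project name and target folder are placeholders; the loop assumes the project exposes its trainable tasks via get_trainable_tasks and that tasks have a title attribute):

```python
from geti_sdk import Geti
from geti_sdk.rest_clients import ModelClient, ProjectClient

geti = Geti(
    host="https://0.0.0.0", username="dummy_user", password="dummy_password"
)
project = ProjectClient(
    session=geti.session, workspace_id=geti.workspace_id
).get_project_by_name(project_name="dummy_project")
model_client = ModelClient(
    session=geti.session, workspace_id=geti.workspace_id, project=project
)

# Saves the models into a 'models' directory inside 'dummy_folder'
models = model_client.download_all_active_models(path_to_folder="dummy_folder")

# Entries are None for tasks that do not have a trained model yet
for task, model in zip(project.get_trainable_tasks(), models):
    if model is not None:
        print(f"{task.title}: {model.name}")
```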

get_model_for_job(job: Job, check_status: bool = True) Model

Return the model that was created by the job from the Intel® Geti™ server.

Parameters:
  • job – Job to retrieve the model for

  • check_status – True to first update the status of the job, to make sure it is finished. Setting this to False will not update the job status.

Returns:

Model produced by the job

get_task_for_model(model: Model | OptimizedModel) Task

Return the task to which a certain model belongs, if possible. This method only works when the model identifiers are still in place; if they have been stripped, it will raise a ValueError.

If the model does not match any task in the project, this method will raise an error.

Parameters:

model – Model or OptimizedModel to find the task for

Returns:

Task for which the model was trained

optimize_model(model: Model, optimization_type: str = 'pot') Job

Start an optimization job for the specified model.

Parameters:
  • model – Model to optimize

  • optimization_type – Type of optimization to run. Currently supported values: [“pot”, “nncf”]. Case insensitive. Defaults to “pot”

Returns:

Job object referring to the optimization job running on the Intel® Geti™ server.
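
A sketch of starting an optimization job for the latest model of a (placeholder) algorithm and waiting for it to complete; host, credentials and project name are placeholders as well:

```python
from geti_sdk import Geti
from geti_sdk.rest_clients import ModelClient, ProjectClient

geti = Geti(
    host="https://0.0.0.0", username="dummy_user", password="dummy_password"
)
project = ProjectClient(
    session=geti.session, workspace_id=geti.workspace_id
).get_project_by_name(project_name="dummy_project")
model_client = ModelClient(
    session=geti.session, workspace_id=geti.workspace_id, project=project
)

# 'SSD' is a placeholder; returns None if no model was trained for it
model = model_client.get_latest_model_by_algo_name(algorithm_name="SSD")
if model is not None:
    job = model_client.optimize_model(model=model, optimization_type="pot")
    # Block until the optimization job finishes, fails or is cancelled
    job = model_client.monitor_job(job, timeout=3600)
```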

purge_model(model: Model | ModelSummary) None

Purge the model from the Intel® Geti™ server.

This will permanently delete all files related to the model, including the base model weights, optimized model weights and exportable code, from the Intel® Geti™ server.

Parameters:

model – Model to purge. Only base models are accepted, not optimized models. Note: the model must not be the latest in its model group or be the active model.

Raises:

ValueError – If the model does not have a base_url, meaning it cannot be purged from the remote server.

monitor_job(job: Job, timeout: int = 10000, interval: int = 15) Job

Monitor and print the progress of a job. Program execution is halted until the job has finished, failed or been cancelled.

Progress is reported at the polling interval set by the interval parameter (every 15 seconds by default)

Parameters:
  • job – job to monitor

  • timeout – Timeout (in seconds) after which to stop the monitoring

  • interval – Time interval (in seconds) at which the ModelClient polls the server to update the status of the jobs. Defaults to 15 seconds

Returns:

Job with its status updated

class geti_sdk.rest_clients.training_client.TrainingClient(workspace_id: str, project: Project, session: GetiSession)

Class to manage training jobs for a certain Intel® Geti™ project.

get_status() ProjectStatus

Get the current status of the project from the Intel® Geti™ server.

Returns:

ProjectStatus object reflecting the current project status

is_training() bool

Request the project status and return True if the project is training.

get_jobs(project_only: bool = True, running_only: bool = False) List[Job]

Return a list of all jobs on the Intel® Geti™ server.

If project_only = True (the default), only those jobs related to the project managed by this TrainingClient will be returned. If set to False, all jobs in the workspace are returned.

Parameters:
  • project_only – True to return only those jobs pertaining to the project for which the TrainingClient is active. False to return all jobs in the Intel® Geti™ workspace.

  • running_only – If set to True, only return those jobs that are still running. Completed or Scheduled jobs will not be included in that case

Returns:

List of Jobs

get_algorithms_for_task(task: Task | int) AlgorithmList

Return a list of supported algorithms for a specific task.

The task parameter accepts both a Task object and an integer. If an int is passed, this will be considered the index of the task in the list of trainable tasks for the project which is managed by the TrainingClient.

Parameters:

task – Task to get the supported algorithms for. If an integer is passed, this is considered the index of the task in the trainable task list of the project. So passing task=0 will return the algorithms for the first trainable task, etc.

Returns:

List of supported algorithms for the task
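
A sketch of listing the supported algorithms for the first trainable task, using the index-based form of the task parameter (host, credentials and project name are placeholders):

```python
from geti_sdk import Geti
from geti_sdk.rest_clients import ProjectClient, TrainingClient

geti = Geti(
    host="https://0.0.0.0", username="dummy_user", password="dummy_password"
)
project = ProjectClient(
    session=geti.session, workspace_id=geti.workspace_id
).get_project_by_name(project_name="dummy_project")
training_client = TrainingClient(
    session=geti.session, workspace_id=geti.workspace_id, project=project
)

# task=0 refers to the first trainable task in the project
algorithms = training_client.get_algorithms_for_task(task=0)
for algorithm in algorithms:
    print(algorithm)
```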

train_task(task: Task | int, dataset: Dataset | None = None, algorithm: Algorithm | None = None, train_from_scratch: bool = False, hyper_parameters: TaskConfiguration | None = None, hpo_parameters: Dict[str, Any] | None = None, await_running_jobs: bool = True, timeout: int = 3600) Job

Start training of a specific task in the project.

The task parameter accepts both a Task object and an integer. If an int is passed, this will be considered the index of the task in the list of trainable tasks for the project which is managed by the TrainingClient.

Parameters:
  • task – Task or index of Task to train

  • dataset – Optional Dataset to train on

  • algorithm – Optional Algorithm to use in training. If left as None (the default), the default algorithm for the task will be used.

  • train_from_scratch – True to train the model from scratch, False to continue training from an existing checkpoint (if any)

  • hyper_parameters – Optional hyper parameters to use for training

  • hpo_parameters – Optional set of parameters to use for automatic hyper parameter optimization. Only supported for version 1.1 and up

  • await_running_jobs – True to wait for currently running jobs to complete. This will guarantee that the training request can be submitted successfully. Setting this to False will cause an error to be raised when a training request is submitted for a task for which a training job is already in progress.

  • timeout – Timeout (in seconds) to wait for the Job to be created. If a training request is submitted successfully, a training job should be instantiated on the Geti server. If the Job does not appear on the server job list within the timeout, an error will be raised. This parameter only takes effect when await_running_jobs is set to True.

Returns:

The training job that has been created
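
A sketch of starting training for the first trainable task with its default algorithm and blocking until the job completes (host, credentials and project name are placeholders):

```python
from geti_sdk import Geti
from geti_sdk.rest_clients import ProjectClient, TrainingClient

geti = Geti(
    host="https://0.0.0.0", username="dummy_user", password="dummy_password"
)
project = ProjectClient(
    session=geti.session, workspace_id=geti.workspace_id
).get_project_by_name(project_name="dummy_project")
training_client = TrainingClient(
    session=geti.session, workspace_id=geti.workspace_id, project=project
)

# task=0 trains the first trainable task; the default algorithm is used
# because no algorithm is passed
job = training_client.train_task(task=0)
# Halt execution until the training job has finished, failed or been cancelled
job = training_client.monitor_job(job, timeout=10000)
```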

monitor_jobs(jobs: List[Job], timeout: int = 10000, interval: int = 15) List[Job]

Monitor and print the progress of all jobs in the list jobs. Execution is halted until all jobs have finished, failed or been cancelled.

Progress is reported at the polling interval set by the interval parameter (every 15 seconds by default)

Parameters:
  • jobs – List of jobs to monitor

  • timeout – Timeout (in seconds) after which to stop the monitoring

  • interval – Time interval (in seconds) at which the TrainingClient polls the server to update the status of the jobs. Defaults to 15 seconds

Returns:

List of finished (or failed) jobs with their status updated

monitor_job(job: Job, timeout: int = 10000, interval: int = 15) Job

Monitor and print the progress of a job. Program execution is halted until the job has finished, failed or been cancelled.

Progress is reported at the polling interval set by the interval parameter (every 15 seconds by default)

Parameters:
  • job – job to monitor

  • timeout – Timeout (in seconds) after which to stop the monitoring

  • interval – Time interval (in seconds) at which the TrainingClient polls the server to update the status of the jobs. Defaults to 15 seconds

Returns:

Job with its status updated

get_jobs_for_task(task: Task, running_only: bool = True) List[Job]

Return a list of current jobs for the task, if any

Parameters:
  • task – Task to retrieve the jobs for

  • running_only – True to return only jobs that are currently running, False to return all jobs (including cancelled, finished or errored jobs)

Returns:

List of Jobs running on the server for this particular task

class geti_sdk.rest_clients.deployment_client.DeploymentClient(workspace_id: str, project: Project, session: GetiSession)

Class to manage model deployment for a certain Intel® Geti™ project.

property code_deployment_url: str

Return the base URL for the code deployment group of endpoints

Returns:

URL for the code deployment endpoints for the Intel® Geti™ project

property ready_to_deploy: bool

Return True when the project is ready for deployment, False otherwise.

A project is ready for deployment when it contains at least one trained model for each task.

Returns:

True when the project is ready for deployment, False otherwise

deploy_project(output_folder: str | PathLike | None = None, models: Sequence[Model | OptimizedModel] | None = None, enable_explainable_ai: bool = False, prepare_for_ovms: bool = False) Deployment

Deploy a project by creating a Deployment instance. The Deployment contains the optimized active models for each task in the project, and can be loaded with OpenVINO to run inference locally.

The models parameter can be used in two ways:

  • If models is left as None this method will create a deployment containing the current active model for each task in the project.

  • If a list of models is passed, it should contain at most as many Models as there are tasks in the project. For instance, for a task-chain project with two tasks, a list of at most two models can be passed (one for each task). If only one model is passed in this case, it will be used for its corresponding task and the active model will be used for the other task in the project.

Optionally, configuration files for deploying models to OpenVINO Model Server (OVMS) can be generated. If you want this, make sure to pass prepare_for_ovms=True. In this case, you MUST specify an output_folder to save the deployment and the OVMS configuration files. Note that this does not change the deployment itself; it can still be used locally as well.

Parameters:
  • output_folder – Path to a folder on local disk to which the Deployment should be downloaded. If no path is specified, the deployment will not be saved to disk directly. Note that it is always possible to save the deployment once it has been created, using the deployment.save method.

  • models – Optional list of models to use in the deployment. If no list is passed, this method will create a deployment using the currently active model for each task in the project.

  • enable_explainable_ai – True to include an Explainable AI head in the deployment. This will add an Explainable AI head to the model for each task in the project, allowing for the generation of saliency maps.

  • prepare_for_ovms – True to prepare the deployment to be hosted on an OpenVINO Model Server (OVMS). Passing True will create OVMS configuration files for the model(s) in the project, along with instructions and sample code showing how to launch an OVMS container that includes the deployed models.

Returns:

Deployment for the project
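
A sketch of creating a deployment with the active models and saving it to disk (host, credentials, project name and output folder are placeholders):

```python
from geti_sdk import Geti
from geti_sdk.rest_clients import DeploymentClient, ProjectClient

geti = Geti(
    host="https://0.0.0.0", username="dummy_user", password="dummy_password"
)
project = ProjectClient(
    session=geti.session, workspace_id=geti.workspace_id
).get_project_by_name(project_name="dummy_project")
deployment_client = DeploymentClient(
    session=geti.session, workspace_id=geti.workspace_id, project=project
)

# Only deploy once every task in the project has a trained model
if deployment_client.ready_to_deploy:
    # models=None (the default) uses the active model for each task;
    # passing output_folder saves the deployment to disk directly
    deployment = deployment_client.deploy_project(output_folder="dummy_folder")
```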

class geti_sdk.rest_clients.credit_system_client.CreditSystemClient(session: GetiSession, workspace_id: str | None = None)

Class to work with credits in Intel® Geti™.

is_supported() bool

Check whether the Intel® Geti™ platform supports the Credit System.

Returns:

True if the Credit System is supported, False otherwise.

get_balance(*args, **kwargs)
get_job_cost(job: Job | str) JobCost | None

Get the cost of a job.

This method allows you to find out the cost of a training or an optimization job.

Parameters:

job – A Job object or a Job ID.

Returns:

A JobCost object representing the total cost and the consumed credits.
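
A sketch of checking Credit System support and looking up a job's cost (host, credentials and the job ID are placeholders; get_job_cost also accepts a Job object, e.g. one returned by TrainingClient.train_task):

```python
from geti_sdk import Geti
from geti_sdk.rest_clients import CreditSystemClient

geti = Geti(
    host="https://0.0.0.0", username="dummy_user", password="dummy_password"
)
credit_client = CreditSystemClient(
    session=geti.session, workspace_id=geti.workspace_id
)

# get_job_cost returns None if no cost is available for the job
if credit_client.is_supported():
    cost = credit_client.get_job_cost("dummy_job_id")
```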

get_subscriptions(*args, **kwargs)
get_credit_accounts(*args, **kwargs)