otx.api.entities.resultset#
This module implements the ResultSet entity.
Classes

- ResultSetEntity – Aggregates ground truth and prediction datasets with the model that produced them.
- ResultsetPurpose – This defines the purpose of the resultset.
- class otx.api.entities.resultset.ResultSetEntity(model: ModelEntity, ground_truth_dataset: DatasetEntity, prediction_dataset: DatasetEntity, purpose: ResultsetPurpose = ResultsetPurpose.EVALUATION, performance: Performance | None = None, creation_date: datetime | None = None, id: ID | None = None)[source]#
Bases:
object
ResultsetEntity.
- It aggregates:
the dataset containing ground truth (based on user annotations)
the dataset containing predictions for the above ground truth dataset
In addition, it links to the model which computed the predictions, as well as the performance of this model on the ground truth dataset.
- Parameters:
model – the model that was used to generate the prediction_dataset
ground_truth_dataset – the dataset containing the ground truth annotations
prediction_dataset – the dataset containing the predictions
purpose – see ResultsetPurpose
performance – the performance of the model on the ground truth dataset
creation_date – the date and time at which the resultset was created. Set to None to default to datetime.datetime.now(datetime.timezone.utc)
id – the id of the resultset. If the argument is None, it will be set to ID() so that a new unique ID will be assigned upon saving
- has_score_metric() bool [source]#
Returns True if the resultset contains a non-null performance with a score value.
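To illustrate how these pieces fit together, the sketch below re-creates a minimal stand-in for ResultSetEntity with plain dataclasses. The Performance, ResultsetPurpose, and dataset/model placeholders here are simplified assumptions for demonstration, not the actual OTX implementations; in real code you would import these entities from otx.api.entities.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class ResultsetPurpose(Enum):
    # Assumed member values, mirroring the enum documented below.
    EVALUATION = 0
    TEST = 1
    PREEVALUATION = 2


@dataclass
class Performance:
    # Simplified stand-in: the real entity wraps a ScoreMetric.
    score: Optional[float] = None


@dataclass
class ResultSetEntity:
    model: object
    ground_truth_dataset: object
    prediction_dataset: object
    purpose: ResultsetPurpose = ResultsetPurpose.EVALUATION
    performance: Optional[Performance] = None
    # None in the real API defaults to datetime.now(timezone.utc),
    # which the default_factory reproduces here.
    creation_date: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def has_score_metric(self) -> bool:
        # True only when a performance with a non-null score is attached.
        return self.performance is not None and self.performance.score is not None


result_set = ResultSetEntity(
    model="my_model",
    ground_truth_dataset="gt_dataset",
    prediction_dataset="pred_dataset",
)
print(result_set.has_score_metric())  # no performance attached yet
result_set.performance = Performance(score=0.92)
print(result_set.has_score_metric())
```

Note how the resultset itself stays agnostic about where the datasets and model come from; it only links them together and records the evaluation outcome.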
- property ground_truth_dataset: DatasetEntity#
Returns the ground truth dataset that is used in the ResultSet.
- property model: ModelEntity#
Returns the model that is used for the ResultSet.
- property performance: Performance#
Returns the performance of the model on the ground truth dataset.
- property prediction_dataset: DatasetEntity#
Returns the prediction dataset that is used in the ResultSet.
- property purpose: ResultsetPurpose#
Returns the purpose of the ResultSet, for example ResultsetPurpose.EVALUATION.
- class otx.api.entities.resultset.ResultsetPurpose(value)[source]#
Bases:
Enum
This defines the purpose of the resultset.
EVALUATION denotes resultsets generated at the Evaluation stage on the validation subset.
TEST denotes resultsets generated at the Evaluation stage on the test subset.
PREEVALUATION denotes resultsets generated at the Pre-evaluation stage (e.g., train from scratch) on the validation subset.
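The mapping from (stage, subset) to purpose described above can be sketched as a small lookup. The enum members and their values here are assumptions mirroring the documented names, and the stage/subset keys are illustrative labels, not OTX API identifiers.

```python
from enum import Enum


class ResultsetPurpose(Enum):
    # Assumed member values; only the names are documented.
    EVALUATION = 0
    TEST = 1
    PREEVALUATION = 2


# Which purpose a resultset gets, depending on the pipeline stage
# that produced it and the dataset subset it was computed on.
STAGE_SUBSET_TO_PURPOSE = {
    ("evaluation", "validation"): ResultsetPurpose.EVALUATION,
    ("evaluation", "test"): ResultsetPurpose.TEST,
    ("pre-evaluation", "validation"): ResultsetPurpose.PREEVALUATION,
}

print(STAGE_SUBSET_TO_PURPOSE[("evaluation", "test")].name)  # TEST
```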