Sweep¶
Utilities for benchmarking and sweeps.
- anomalib.utils.sweep.flatten_sweep_params(params_dict: DictConfig) DictConfig [source]¶
Flatten the nested parameters section of the config object.
We need to flatten the params so that all the nested keys are concatenated into a single string. This is useful when:
- we need to do a cartesian product of all the combinations of the configuration for grid search,
- saving keys as headers for csv, and
- adding the config to a wandb sweep.
- Parameters:
params_dict (DictConfig) – The dictionary containing the hpo parameters in their original, nested structure.
- Returns:
Flattened version of the parameter dictionary.
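The flattening behavior can be sketched with plain dictionaries (a hedged illustration: the real function operates on omegaconf DictConfig objects, and flatten_params here is a hypothetical stand-in):

```python
def flatten_params(params: dict, prefix: str = "") -> dict:
    """Flatten nested dict keys into dot-separated strings, e.g. 'model.lr'."""
    flat = {}
    for key, value in params.items():
        full_key = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            # Recurse into nested sections, carrying the concatenated key.
            flat.update(flatten_params(value, full_key))
        else:
            flat[full_key] = value
    return flat

nested = {"model": {"backbone": "resnet18", "lr": [0.01, 0.001]}, "dataset": "mvtec"}
flat = flatten_params(nested)
```

The flattened keys double as csv headers and as the dotted parameter names a wandb sweep expects.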
- anomalib.utils.sweep.get_openvino_throughput(model_path: str | pathlib.Path, test_dataset: Dataset) float [source]¶
Runs the generated OpenVINO model on a dummy dataset to get throughput.
- Parameters:
model_path (str, Path) – Path to the folder containing the OpenVINO models. The function searches for model.xml inside this folder.
test_dataset (Dataset) – The test dataset used as a reference for the mock dataset.
- Returns:
Inference throughput
- Return type:
float
- anomalib.utils.sweep.get_run_config(params_dict: DictConfig) Generator[DictConfig, None, None] [source]¶
Yields configuration for a single run.
- Parameters:
params_dict (DictConfig) – Configuration for grid search.
Example
>>> dummy_config = DictConfig({
...     "parent1": {
...         "child1": ['a', 'b', 'c'],
...         "child2": [1, 2, 3]
...     },
...     "parent2": ['model1', 'model2'],
...     "parent3": 'replacement_value'
... })
>>> for run_config in get_run_config(dummy_config):
...     print(run_config)
{'parent1.child1': 'a', 'parent1.child2': 1, 'parent2': 'model1', 'parent3': 'replacement_value'}
{'parent1.child1': 'a', 'parent1.child2': 1, 'parent2': 'model2', 'parent3': 'replacement_value'}
{'parent1.child1': 'a', 'parent1.child2': 2, 'parent2': 'model1', 'parent3': 'replacement_value'}
...
- Yields:
Generator[DictConfig] – Dictionary containing flattened keys and values for current run.
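The grid-search expansion underlying this generator amounts to a cartesian product over the list-valued entries of a flattened parameter dictionary. A minimal sketch with plain dicts (run_configs is a hypothetical stand-in for the real generator):

```python
import itertools

def run_configs(flat_params: dict):
    """Yield one flat config per combination of list-valued parameters.

    Scalar values are treated as fixed and repeated in every combination.
    """
    keys = list(flat_params)
    value_lists = [v if isinstance(v, list) else [v] for v in flat_params.values()]
    for combo in itertools.product(*value_lists):
        yield dict(zip(keys, combo))

flat = {
    "parent1.child1": ["a", "b", "c"],
    "parent1.child2": [1, 2, 3],
    "parent2": ["model1", "model2"],
    "parent3": "replacement_value",
}
configs = list(run_configs(flat))  # 3 * 3 * 2 = 18 run configurations
```

Because later keys vary fastest in itertools.product, the yield order matches the example output above.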
- anomalib.utils.sweep.get_sweep_callbacks(config: omegaconf.dictconfig.DictConfig | omegaconf.listconfig.ListConfig) list[pytorch_lightning.callbacks.callback.Callback] [source]¶
Gets callbacks relevant to sweep.
- Parameters:
config (DictConfig | ListConfig) – Model config loaded from anomalib
- Returns:
List of callbacks
- Return type:
list[Callback]
- anomalib.utils.sweep.get_torch_throughput(model_path: str | pathlib.Path, test_dataset: Dataset, device: str) float [source]¶
Tests the model on dummy data. Images are passed sequentially to make the comparison with the OpenVINO model fair.
- Parameters:
model_path (str, Path) – Path to folder containing the Torch models.
test_dataset (Dataset) – The test dataset used as a reference for the mock dataset.
device (str) – Device to use for inference. Options are auto, cpu, gpu, cuda.
- Returns:
Inference throughput
- Return type:
float
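Both throughput helpers boil down to timing sequential, one-image-at-a-time inference and dividing the image count by the elapsed time. A hedged sketch of that measurement (measure_throughput and the dummy inference function are illustrative, not part of anomalib):

```python
import time

def measure_throughput(infer_fn, inputs) -> float:
    """Return images per second for sequential single-sample inference.

    Passing samples one at a time keeps the Torch vs. OpenVINO
    comparison fair, since neither backend gets to batch.
    """
    start = time.perf_counter()
    for sample in inputs:
        infer_fn(sample)
    elapsed = time.perf_counter() - start
    return len(inputs) / elapsed

# Dummy "model" and dataset standing in for a real inferencer and Dataset.
dummy_infer = lambda x: x * 2
throughput = measure_throughput(dummy_infer, list(range(100)))
```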
- anomalib.utils.sweep.set_in_nested_config(config: DictConfig, keymap: list, value: Any) None [source]¶
Set an item in a nested config object using a list of keys.
- Parameters:
config (DictConfig) – Nested DictConfig object.
keymap (list[str]) – List of keys corresponding to the item that should be set.
value (Any) – Value that should be assigned to the dictionary item at the specified location.
Example
>>> dummy_config = DictConfig({
...     "parent1": {
...         "child1": ['a', 'b', 'c'],
...         "child2": [1, 2, 3]
...     },
...     "parent2": ['model1', 'model2']
... })
>>> model_config = DictConfig({
...     "parent1": {
...         "child1": 'e',
...         "child2": 4,
...     },
...     "parent3": False
... })
>>> for run_config in get_run_config(dummy_config):
...     print("Original model config", model_config)
...     print("Suggested config", run_config)
...     for param in run_config.keys():
...         set_in_nested_config(model_config, param.split('.'), run_config[param])
...     print("Replaced model config", model_config)
...     break
Original model config {'parent1': {'child1': 'e', 'child2': 4}, 'parent3': False}
Suggested config {'parent1.child1': 'a', 'parent1.child2': 1, 'parent2': 'model1'}
Replaced model config {'parent1': {'child1': 'a', 'child2': 1}, 'parent3': False, 'parent2': 'model1'}
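The nested assignment itself reduces to walking all but the last key and assigning at the leaf, creating intermediate sections as needed. A plain-dict sketch (set_in_nested is a hypothetical stand-in for the DictConfig-based original):

```python
def set_in_nested(config: dict, keymap: list, value) -> None:
    """Descend through keymap, creating intermediate dicts, and set the leaf."""
    for key in keymap[:-1]:
        # setdefault creates the nested section if it does not exist yet.
        config = config.setdefault(key, {})
    config[keymap[-1]] = value

model_config = {"parent1": {"child1": "e", "child2": 4}, "parent3": False}
set_in_nested(model_config, "parent1.child1".split("."), "a")
set_in_nested(model_config, ["parent2"], "model1")
```

Splitting a flattened key on '.' recovers the keymap, which is exactly how the run configs produced by get_run_config are written back into a model config.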