Semantic Segmentation#

Semantic segmentation is a computer vision task in which an algorithm assigns a label or class to each pixel in an image. For example, semantic segmentation can be used to identify the boundaries of different objects in an image, such as cars, buildings, and trees. The output of semantic segmentation is typically an image where each pixel is colored with a different color or label depending on its class.
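As a minimal illustration of this output format, the sketch below builds a per-pixel label map and colors it with a class palette; the class ids and colors are hypothetical, chosen only for the example:

```python
# Hypothetical class ids: 0 = background, 1 = car, 2 = tree.
label_map = [
    [0, 0, 2, 2],
    [1, 1, 2, 2],
    [1, 1, 0, 0],
]

# Map each class id to an RGB color to visualize the segmentation.
palette = {0: (0, 0, 0), 1: (255, 0, 0), 2: (0, 128, 0)}
colored = [[palette[cls] for cls in row] for row in label_map]

# Every pixel now carries the color of its predicted class.
```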

Figure: example of semantic segmentation output (image from this `source <https://arxiv.org/abs/1912.03183>`_).

We solve this task with an FCN head (using the implementation from MMSegmentation) on top of the multi-level image features produced by the feature extractor backbone (Lite-HRNet). For supervised training we use the following algorithm components:

  • Augmentations: Besides basic augmentations such as random flip, random rotate, and random crop, we use an image-mixing technique combined with different photometric distortions.

  • Optimizer: We use the Adam optimizer with weight decay set to zero and gradient clipping with a maximum L2 norm of 40.

  • Learning rate schedule: To schedule the training process we use ReduceLROnPlateau with a linear learning-rate warmup over the first 100 iterations. This scheduler monitors a target metric (in our case, a metric on the validation set) and reduces the learning rate if no improvement is seen for a given patience number of epochs.

  • Loss function: We use standard Cross Entropy Loss to train a model.

  • Additional training techniques
    • Early stopping: stops training when the validation metric no longer improves, which adds adaptability to the training pipeline and prevents overfitting. It can be enabled with the command below:

      $ otx train {TEMPLATE} ... \
                  params \
                  --learning_parameters.enable_early_stopping=True
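The ReduceLROnPlateau behavior described above (linear warmup, then reduce the learning rate when the validation metric stalls) can be sketched in plain Python; the class name, default values, and exact patience logic here are illustrative, not the OTX implementation:

```python
class PlateauScheduler:
    """Sketch of ReduceLROnPlateau with linear warmup (illustrative only)."""

    def __init__(self, base_lr=1e-3, warmup_iters=100, patience=3, factor=0.1):
        self.base_lr = base_lr          # target LR after warmup
        self.warmup_iters = warmup_iters
        self.patience = patience        # epochs without improvement before reducing
        self.factor = factor            # multiplicative LR reduction
        self.lr = base_lr
        self.best = float("-inf")
        self.bad_epochs = 0

    def warmup_lr(self, it):
        # Linear warmup: ramp the LR over the first `warmup_iters` iterations.
        if it < self.warmup_iters:
            return self.base_lr * (it + 1) / self.warmup_iters
        return self.lr

    def step(self, metric):
        # Called once per validation epoch with the monitored metric
        # (higher is better); reduce LR after `patience` stalled epochs.
        if metric > self.best:
            self.best = metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr
```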
      

Dataset Format#

For the dataset handling inside OpenVINO™ Training Extensions, we use Dataset Management Framework (Datumaro).

Currently, we support the Common Semantic Segmentation data format. If your dataset is organized in this format, starting training is simple: just pass the path to the root folder and the desired model template:

$ otx train --template <model_template> --train-data-roots <path_to_data_root> \
                                        --val-data-roots <path_to_data_root>
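For reference, a dataset in the Common Semantic Segmentation format (as described in the Datumaro documentation) is typically organized as below; the file names are illustrative:

```
dataset/
├── dataset_meta.json    # class names and color palette
├── images/
│   ├── 0001.png
│   └── ...
└── masks/
    ├── 0001.png         # color-coded ground-truth mask for images/0001.png
    └── ...
```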

Note

Please refer to our dedicated tutorial for more details on how to train, validate, and optimize a semantic segmentation model.

Models#

We support the following ready-to-use model templates:

Template ID                                         | Name               | Complexity (GFLOPs) | Model size (MB)
----------------------------------------------------|--------------------|---------------------|----------------
Custom_Semantic_Segmentation_Lite-HRNet-s-mod2_OCR  | Lite-HRNet-s-mod2  | 1.44                | 3.2
Custom_Semantic_Segmentation_Lite-HRNet-18-mod2_OCR | Lite-HRNet-18-mod2 | 2.82                | 4.3
Custom_Semantic_Segmentation_Lite-HRNet-x-mod3_OCR  | Lite-HRNet-x-mod3  | 9.20                | 5.7

All of these models belong to the same Lite-HRNet backbone family and differ in the trade-off between accuracy and inference/training speed. Lite-HRNet-x-mod3 is the heaviest architecture; it gives the most accurate predictions but requires long training. Lite-HRNet-s-mod2, in contrast, is a lightweight architecture for fast inference and training, and is the best choice when the amount of data is limited. Lite-HRNet-18-mod2 is a middle-sized architecture that balances accuracy against inference and training time.

Semi-supervised Learning#

To address the semi-supervised learning problem for semantic segmentation, we use the Mean Teacher algorithm.

The basic idea of this approach is to use two models during training: a “student” model, which is the main model being trained, and a “teacher” model, which acts as a guide for the student model. The student model is updated based on the ground truth annotations (for the labeled data) and pseudo-labels (for the unlabeled data) which are the predictions of the teacher model. The teacher model is updated based on the moving average of the student model’s parameters. So, we don’t use backward loss propagation for the teacher model’s parameters. After training, only the student model is used for prediction.

We utilize the same core algorithm parameters as in the supervised pipeline. The main difference is the use of different augmentation pipelines for the labeled and unlabeled data: only basic augmentations (random flip, random rotate, random crop) for the labeled data, and stronger ones (color distortion) for the unlabeled data. This improves generalization and prevents unnecessary overfitting on the pseudo-labels generated by the teacher model.
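The teacher update described above is an exponential moving average (EMA) of the student's parameters; a minimal sketch follows, where the momentum value is an assumption rather than the OTX default:

```python
def ema_update(teacher_params, student_params, momentum=0.99):
    """Return updated teacher parameters as an EMA of the student's.

    No gradients flow through this update: the teacher simply tracks a
    smoothed copy of the student's weights.
    """
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

# Toy example: one scalar "parameter" per model.
teacher = [1.0]
student = [0.0]
teacher = ema_update(teacher, student, momentum=0.9)  # teacher moves slightly toward student
```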

Self-supervised Learning#

Self-supervised learning can be a solution when the user has a small dataset but no label information yet. In academia, self-supervised learning is commonly used to obtain well-pretrained weights from a source dataset without label information. In real-world industry settings, however, it is difficult to apply because of small datasets, limited resources, or the need to train within minutes.

For these cases, OpenVINO™ Training Extensions provides improved self-supervised learning recipes that can be applied to the above harsh environments. We adapted DetCon as our self-supervised method. It takes some time to use these self-supervised learning recipes, but you can expect improved performance, especially in small-data regimes.

The table below shows how much self-supervised pretraining improves performance (mDice) compared with the baseline on subsets of Pascal VOC 2012 with three classes (person, car, bicycle). To obtain these numbers, we followed two steps:

  • Train the models for a few epochs, without label information, using only images that contain at least one of the three classes, to obtain pretrained weights.

  • Fine-tune the models from the pretrained weights on the subset datasets and measure performance.

We additionally obtained baseline performance from supervised learning using subset datasets for comparison. Each subset dataset has 8, 16, and 24 images, respectively.

Model name         | SL (#8) | Self-SL (#8) | SL (#16) | Self-SL (#16) | SL (#24) | Self-SL (#24)
-------------------|---------|--------------|----------|---------------|----------|--------------
Lite-HRNet-s-mod2  | 48.30   | 53.55        | 57.08    | 58.96         | 62.40    | 63.46
Lite-HRNet-18-mod2 | 53.47   | 49.20        | 56.69    | 58.72         | 62.81    | 63.63
Lite-HRNet-x-mod3  | 50.23   | 50.93        | 60.09    | 61.61         | 62.66    | 64.87

Unlike other tasks, two things should be considered when using self-supervised learning:

  • --train-data-roots must be set to a directory containing only images, without ground truths. DetCon trains on pseudo masks stored in the detcon_mask directory; if they do not exist yet, they are created first.

  • --val-data-roots is not needed.

To enable self-supervised training, the command below can be executed:

$ otx train otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/template.yaml \
            --train-data-roots=tests/assets/common_semantic_segmentation_dataset/train/images \
            params \
            --algo_backend.train_type=Selfsupervised

After self-supervised training, the pretrained weights can be used for supervised (incremental) learning, as in the command below:

$ otx train otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/template.yaml \
            --train-data-roots=tests/assets/common_semantic_segmentation_dataset/train \
            --val-data-roots=tests/assets/common_semantic_segmentation_dataset/val \
            --load-weights={PATH/PRETRAINED/WEIGHTS}

Note

SL stands for Supervised Learning.

Supervised Contrastive Learning#

To enhance the performance of the algorithm when only a small amount of data is available, Supervised Contrastive Learning (SupCon) can be used.

More specifically, we train a model with two heads: a segmentation head with Cross Entropy Loss and a contrastive head with DetCon loss. The table below shows how much SupCon improves performance (mDice) compared with the baseline on subsets of Pascal VOC 2012 with three classes (person, car, bicycle). The subsets contain 8, 16, and 24 images, respectively.

Model name         | SL (#8) | SupCon (#8) | SL (#16) | SupCon (#16) | SL (#24) | SupCon (#24)
-------------------|---------|-------------|----------|--------------|----------|-------------
Lite-HRNet-s-mod2  | 48.30   | 51.83       | 57.08    | 59.26        | 62.40    | 63.39
Lite-HRNet-18-mod2 | 53.47   | 54.90       | 56.69    | 60.32        | 62.81    | 64.56
Lite-HRNet-x-mod3  | 53.71   | 54.83       | 58.43    | 62.03        | 64.72    | 64.57
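The two-head objective described above can be sketched as a weighted sum of the two losses. The cross_entropy helper and the detcon_weight default below are illustrative assumptions, and the real DetCon loss is considerably more involved than a scalar term:

```python
import math

def cross_entropy(pixel_probs, pixel_targets):
    # Mean per-pixel cross-entropy over a flattened mask (sketch):
    # each entry of pixel_probs is a class-probability vector for one pixel.
    return -sum(math.log(probs[t])
                for probs, t in zip(pixel_probs, pixel_targets)) / len(pixel_targets)

def supcon_objective(ce_loss, detcon_loss, detcon_weight=1.0):
    # Total SupCon objective: segmentation-head CE plus the contrastive
    # DetCon term; the weighting is an assumption, not the OTX default.
    return ce_loss + detcon_weight * detcon_loss
```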

SupCon training can be launched by adding an additional option to the template parameters, as shown below. It can be used only with the supervised (incremental) training type.

$ otx train otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/template.yaml \
            --train-data-roots=tests/assets/common_semantic_segmentation_dataset/train \
            --val-data-roots=tests/assets/common_semantic_segmentation_dataset/val \
            params \
            --learning_parameters.enable_supcon=True
