Polygon Annotations for Object Detection

In this blog post, we will explore how to improve your object detection model's performance by converting your bounding box annotations to polygon annotations. We will also discuss using augmentations with polygon annotations and initializing from pretrained weights to further boost performance. By the end of this post, you'll have powerful tools at your disposal to improve the accuracy of your object detection models, often by a significant margin.

You also have access to the accompanying code to reproduce the comparisons and explore the results further. The code allows for a hands-on experience, enabling you to replicate the experiments and analyze how annotation type, pretrained-weight initialization, and augmentations affect a model's performance.


You can find the code here to explore the results further!

Why Compare Bounding Boxes to Polygons?

The accuracy and performance of object detection models depend on the quality of the annotations used during training. Bounding box annotations have long been favored for their simplicity and ease of application. However, that convenience comes with a tradeoff: bounding boxes capture extra space around objects, which can result in less precise localization and hinder the model's performance, especially when objects have irregular or complex shapes.
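To make the "extra space" argument concrete, here is a minimal sketch (with hypothetical coordinates) comparing a polygon's true area, computed with the shoelace formula, against the area of its tight axis-aligned bounding box:

```python
def polygon_area(points):
    """Area of a simple polygon via the shoelace formula."""
    n = len(points)
    return abs(sum(points[i][0] * points[(i + 1) % n][1]
                   - points[(i + 1) % n][0] * points[i][1]
                   for i in range(n))) / 2.0

def bbox_area(points):
    """Area of the tight axis-aligned bounding box around the polygon."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

# A triangle-shaped object: the box around it is half background.
tri = [(0, 0), (10, 0), (5, 8)]
background = 1 - polygon_area(tri) / bbox_area(tri)
print(f"{background:.0%} of the box is background")  # -> 50% of the box is background
```

For elongated, diagonal, or irregular objects, the background fraction inside the box can be far higher, which is exactly where polygon labels pay off.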

To overcome this limitation, alternative annotation techniques, such as polygons, have emerged. Tools such as the Segment Anything Model (SAM) developed by Meta AI allow for more accurate and detailed object segmentations, enabling better performance, particularly in scenarios involving objects with irregular shapes. Although labeling data with polygons may require additional time and effort, it captures objects more precisely and can lead to improved results in object detection tasks. Tools like Roboflow's Smart Polygon feature (powered by SAM) drastically accelerate the process of annotating data with polygons.

The Experiment: Polygon vs Bounding Box Annotations

Throughout all our experiments, we maintained consistency in the chosen model, parameters, and dataset. We focused on a dataset specifically curated for fire hydrants, which you can download using the link below.

The dataset comprised 408 original images plus 570 augmented images, with both bounding box and polygon annotations. It is worth noting that results may vary depending on the characteristics of custom datasets, such as their size, quality, class distribution, and domain-specific nuances.

The quality and quantity of the dataset have a significant impact on the performance of object detection models. High-quality annotations, with accurate object boundaries and precise labeling, play a crucial role in training the model effectively. Conversely, inconsistent or incomplete annotations can hinder the model's learning and its ability to generalize.

If you have a dataset annotated with bounding boxes and want to convert it to instance segmentation labels, use our SAM tutorial and notebook to convert the dataset.
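After conversion, each object is stored in the YOLO segmentation label format: a class index followed by normalized x,y pairs. A minimal sketch of producing one such label line (the polygon coordinates and image size here are hypothetical):

```python
def to_yolo_seg_line(class_id, polygon, img_w, img_h):
    """Format one polygon as a YOLO segmentation label line:
    '<class> x1 y1 x2 y2 ...' with coordinates normalized to [0, 1]."""
    coords = []
    for x, y in polygon:
        coords += [x / img_w, y / img_h]
    return " ".join([str(class_id)] + [f"{c:.6f}" for c in coords])

line = to_yolo_seg_line(0, [(100, 50), (300, 50), (200, 250)], 640, 640)
print(line)  # -> 0 0.156250 0.078125 0.468750 0.078125 0.312500 0.390625
```

One such line per object goes into the image's `.txt` label file, mirroring the layout of YOLO's box labels.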

For our experiments, we used a popular and effective object detection architecture: Ultralytics YOLOv8. We used Roboflow to download the datasets with both bounding box and polygon annotations and trained the models from scratch, using the provided configuration file, yolov8n.yaml, and the respective dataset for each annotation type.

To train the models, we employed the YOLOv8 architecture for both the bounding box and polygon datasets. We initiated training from scratch, and the models were trained for a total of 80 epochs using this code, ensuring sufficient learning.

Model Evaluation and Performance Metrics

We evaluated the performance of the models using three primary metrics: mAP50, mAP, and the normalized confusion matrix. mAP50 is the mean average precision at an Intersection over Union (IoU) threshold of 0.50. Additionally, we analyzed precision, recall, true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) to gain a deeper understanding of the model's performance.
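To make those metrics concrete, here is a small sketch (with illustrative numbers, not our experiment's results) of the IoU test that sits behind mAP50, plus the precision and recall formulas:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(tp, fp, fn):
    """precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# For mAP50, a prediction only counts as a true positive
# if its IoU with a ground-truth box is at least 0.50.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))    # ~0.333, below the 0.50 cut
print(precision_recall(tp=80, fp=10, fn=20))  # precision ~0.889, recall 0.8
```

mAP50 then averages precision over recall levels (and over classes) using this 0.50 IoU matching rule.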


We present the results of our experiments in the table below:

Table 1: Performance Comparison of Bounding Box vs Polygon Annotations using YOLOv8 CLI

From the table, we can draw several conclusions. Let's dive into what this means for your next project.

Polygon Annotations Improve Performance Compared to Bounding Box Annotations

In the table, we see that for the same model setup and parameters, polygon annotations consistently achieve higher mAP50 values than bounding box annotations.

Pretrained Weights Initialization Generally Improves Performance

Using pretrained weights for initialization generally improves performance: the mAP50 values for both annotation types are higher with pretrained weights than with from-scratch initialization.

Augmentations Improve Model Performance

We observe that applying augmentations (rotation, saturation, cutout, bounding box shear) improves performance. The mAP50 values for both annotation types increase when augmentations are applied.

Polygons benefit more from augmentations than bounding boxes because polygons accurately represent object shape, allowing precise adaptation to transformations like rotation and scaling. Polygons maintain localization accuracy and handle complex shapes, letting the model learn from diverse examples and improving its ability to cope with variations in object appearance, position, and orientation.
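A quick sketch of why rotation in particular hurts box labels more (hypothetical rectangle, pure geometry): rotating a polygon's vertices preserves its true area exactly, while the axis-aligned box needed to re-enclose the rotated shape inflates, pulling background into the label:

```python
import math

def rotate(points, deg):
    """Rotate polygon vertices about the origin by `deg` degrees."""
    t = math.radians(deg)
    return [(x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) + y * math.cos(t)) for x, y in points]

def bbox(points):
    """Area of the axis-aligned bounding box around the points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

# A 10x4 object rotated 45 degrees: the polygon keeps its area (40),
# but the axis-aligned box around it balloons to ~98.
rect = [(0, 0), (10, 0), (10, 4), (0, 4)]
print(bbox(rect), round(bbox(rotate(rect, 45)), 1))  # -> 40 98.0
```

A rotation-augmented polygon label stays tight around the object; the corresponding box label more than doubles its background, which dilutes the training signal.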

It is important to note that these conclusions are based on the information in the tables and may not cover all possible scenarios. Further analysis and experimentation are needed to validate the pattern across different types of models, datasets, and parameters.


In this blog post, we explored the impact of polygon annotations on the performance of object detection models. Our experiments demonstrated that polygon annotations can lead to improved accuracy compared to models trained with traditional bounding box annotations, particularly in scenarios where objects have irregular shapes.

Additionally, we leveraged augmentations to boost the performance of models trained with polygon annotations. By introducing additional variation and challenge into the training data, the models became more robust and achieved even higher accuracy. Augmentations such as rotation, saturation, cutout, and bounding box shear further improved the models' ability to generalize to real-world scenarios.

By adopting polygon annotations and employing augmentations, you can leverage precise object representation and diverse training data to boost the performance of your object detection models. These techniques open up new avenues for improving the accuracy and reliability of computer vision systems, enabling a wide range of applications in fields such as autonomous driving, robotics, and surveillance.

So, upgrade your object detection models, and happy engineering!

Arty Ariuntuya. "Polygon Annotations for Object Detection." Roboflow Blog, Jul 19, 2023. https://blog.roboflow.com/polygon-vs-bounding-box-computer-vision-annotation/


