Since the proposed method is based on Faster RCNN, this section also selects Faster RCNN as the baseline of the PN-based two-stage detectors. For a fair comparison, we chose two PU-based detectors that adopt Faster RCNN as the base model. These two methods are detailed in Section
3.4. The first one is Pi-GS (Grid Search) [49], which estimates the
class prior probability by conducting a grid search on the validation
set with a search interval of 0.1 and repeating training 10 times. The
second method is Pi-FT (Fixed Threshold) [34], which selects the positive anchors by comparing their confidence scores against a fixed threshold. In summary, we use M-YOLO v3, D-YOLO v3, M-YOLO v4, D-YOLO v4, D-YOLO v5, Pi-GS, and Pi-FT as the comparison methods.
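For concreteness, the two PU baselines can be sketched as follows. This is a minimal illustration under our own assumptions, not the original authors' implementation; `train_pu_detector` and `evaluate_map` are hypothetical placeholders for the Faster RCNN training and validation-mAP routines, and the exact grid and threshold values are assumptions.

```python
import numpy as np

def pi_gs_class_prior(train_data, val_data):
    """Pi-GS sketch: grid-search the class prior at a 0.1 interval and keep
    the value with the best validation mAP (one training run per candidate;
    the exact grid is an assumption)."""
    best_prior, best_map = None, -1.0
    for prior in np.arange(0.1, 1.0, 0.1):  # candidate priors 0.1, 0.2, ..., 0.9
        detector = train_pu_detector(train_data, class_prior=prior)  # hypothetical helper
        val_map = evaluate_map(detector, val_data)                   # hypothetical helper
        if val_map > best_map:
            best_prior, best_map = prior, val_map
    return best_prior

def pi_ft_select_positives(anchor_scores, threshold=0.5):
    """Pi-FT sketch: treat anchors whose confidence exceeds a fixed threshold
    as positives (the value 0.5 is an assumption)."""
    return [i for i, score in enumerate(anchor_scores) if score >= threshold]
```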
4.3 Evaluation metrics
In this paper, we adopt the COCO evaluation metrics, a popular set of metrics for object detection introduced by the COCO challenge [50]. The COCO metrics are built around the principal metric, mAP@T, which denotes the mAP computed with an IoU threshold of T. For example, AP@0.5 and AP@0.75 are typical mAP metrics provided by the COCO suite. AP in the COCO metrics represents the average of mAP with the IoU threshold varying from 0.5 to 0.95 (interval 0.05). Finally, AP-category is the AP applied to one particular category, such as AP-String, AP-Good, AP-Broken, and AP-Flashover-Damage (shortened as AP-FlashoverD) in our experiments.
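As a reference for how these numbers are obtained, the sketch below shows a typical evaluation with the pycocotools library; the annotation and detection file names are placeholders, and the per-category restriction at the end illustrates one way to compute AP-category.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("idid_val_annotations.json")      # ground-truth annotations (placeholder path)
coco_dt = coco_gt.loadRes("detections.json")     # detector outputs in COCO format (placeholder path)

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP (IoU=0.50:0.95), AP@0.5, AP@0.75, etc.

# Per-category AP (e.g., AP-String): restrict evaluation to one category id,
# assuming the category name used in the annotation file is "string".
evaluator.params.catIds = coco_gt.getCatIds(catNms=["string"])
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()
```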
4.4 Detection results
This
section presents the detection results of different methods under 1.0,
0.7, 0.5, and 0.3 Annotation PerCent (APC), as shown in Tables 2-5.
Meanwhile, Figures 8-9 visualize the detection results of different
methods, providing a more intuitive display of the results. In detail,
Figure 8 (I) and (II) depict the detection results under 1.0 and 0.7
APCs, respectively. Figures 9 (I)
and (II) individually present the detection results under 0.5 and 0.3
APCs.
4.4.1 Detection results with IDID’s
annotations
In practice, the IDID dataset is treated as a fully annotated dataset. However, some samples have missing annotations due to the dense arrangement of insulators and oversights by the annotators. Examples of missing annotations are shown in Section 4.1.1. Therefore, even when all of IDID's annotations are used, the setting still constitutes incomplete annotation. We conducted the first experiment using all annotations provided by the IDID dataset. The detection results are summarized in Table 2, and the visualization of the prediction results is shown in Figure 8 (I).