Cyclic Self-Training with Proposal Weight Modulation for Cross-Supervised Object Detection
Published in: | IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - PP(2023), 29 March |
---|---|
Author: | |
Additional authors: | , , |
Format: | Online article |
Language: | English |
Published: | 2023 |
Parent work: | IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |
Subject terms: | Journal Article |
Abstract: | Weakly-supervised object detection (WSOD), which requires only image-level annotations for training detectors, has gained enormous attention. Despite recent rapid advances in WSOD, a large performance gap remains compared with fully-supervised object detection. To narrow this gap, we study cross-supervised object detection (CSOD), where existing classes (base classes) have instance-level annotations while newly added classes (novel classes) need only image-level annotations. To improve localization accuracy, we propose a Cyclic Self-Training (CST) method that introduces instance-level supervision into a commonly used WSOD method, online instance classifier refinement (OICR). CST consists of forward pseudo labeling and backward pseudo labeling. Specifically, OICR exploits the forward pseudo labeling to generate pseudo ground-truth bounding boxes for all classes, enabling instance classifier (IC) training. The backward pseudo labeling then generates higher-quality pseudo ground-truth bounding boxes for novel classes by fusing the predictions of the instance classifiers. As a result, both novel and base classes have bounding-box annotations for training, alleviating the supervision inconsistency between base and novel classes. In the forward pseudo labeling, the generated pseudo ground-truths may be misaligned with objects and thus introduce poor-quality examples for training the ICs. To reduce the impact of these poor-quality training examples, we propose a Proposal Weight Modulation (PWM) module learned in a class-agnostic and contrastive manner by exploiting the bounding-box annotations of base classes. Experiments on the PASCAL VOC and MS COCO datasets demonstrate the superiority of the proposed method. (An illustrative sketch of the backward pseudo-labeling step follows this record.) |
Description: | Date Revised: 04.04.2023; published: Print-Electronic; Citation Status: Publisher |
ISSN: | 1941-0042 |
DOI: | 10.1109/TIP.2023.3261752 |
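
The abstract only outlines the backward pseudo-labeling step, so the following is a minimal, illustrative sketch rather than the authors' implementation. It assumes the per-stage instance-classifier scores are simply averaged and that, for each novel class, the top-scoring proposal above a hypothetical threshold is kept as a pseudo ground-truth box; the function name, fusion rule, and threshold are all assumptions not specified in the abstract.

```python
import numpy as np

def backward_pseudo_label(proposals, stage_scores, novel_class_ids, score_thresh=0.5):
    """Sketch of backward pseudo labeling (assumed fusion rule: averaging).

    proposals:       (N, 4) array of candidate boxes (x1, y1, x2, y2)
    stage_scores:    list of (N, C) score arrays, one per OICR refinement stage
    novel_class_ids: class indices that only have image-level labels
    """
    # Fuse the instance-classifier predictions by averaging over stages.
    fused = np.mean(np.stack(stage_scores, axis=0), axis=0)

    pseudo_boxes, pseudo_labels = [], []
    for c in novel_class_ids:
        # Keep the top-scoring proposal for this novel class as its pseudo box,
        # provided the fused score clears a (hypothetical) confidence threshold.
        best = int(np.argmax(fused[:, c]))
        if fused[best, c] >= score_thresh:
            pseudo_boxes.append(proposals[best])
            pseudo_labels.append(c)
    return np.array(pseudo_boxes), np.array(pseudo_labels)

# Toy usage: 3 proposals, 2 refinement stages, 4 classes; class 2 is "novel".
props = np.array([[0, 0, 50, 50], [10, 10, 80, 80], [5, 5, 30, 30]], dtype=float)
scores = [np.random.rand(3, 4), np.random.rand(3, 4)]
boxes, labels = backward_pseudo_label(props, scores, novel_class_ids=[2], score_thresh=0.3)
```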