Learning Saliency From Single Noisy Labelling: A Robust Model Fitting Perspective

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 8 (22 Aug. 2021), pp. 2866-2873
Lead author: Zhang, Jing (Author)
Other authors: Dai, Yuchao; Zhang, Tong; Harandi, Mehrtash; Barnes, Nick; Hartley, Richard
Format: Online article
Language: English
Published: 2021
Collection: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Journal Article; Research Support, Non-U.S. Gov't
Description
Abstract: The advances made in predicting visual saliency using deep neural networks come at the expense of collecting large-scale annotated data. However, pixel-wise annotation is labor-intensive and overwhelming. In this paper, we propose to learn saliency prediction from a single noisy labelling, which is easy to obtain (e.g., from imperfect human annotation or from unsupervised saliency prediction methods). With this goal, we address a natural question: can we learn saliency prediction while identifying clean labels in a unified framework? To answer this question, we call on the theory of robust model fitting, formulate deep saliency prediction from a single noisy labelling as robust network learning, and exploit model consistency across iterations to identify inliers and outliers (i.e., noisy labels). Extensive experiments on different benchmark datasets demonstrate the superiority of the proposed framework, which learns saliency prediction comparable to state-of-the-art fully supervised saliency methods. Furthermore, we show that simply by treating ground-truth annotations as noisy labelling, our framework achieves tangible improvements over state-of-the-art methods.
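The abstract's core idea is an alternation between fitting the saliency network on labels currently treated as clean and flagging suspected noisy labels by checking how consistent the network's predictions stay across training iterations. The following is only a minimal illustrative sketch of that loop in PyTorch, based solely on the abstract; TinySaliencyNet, robust_fit, consistency_tau and every other name here are hypothetical stand-ins, not the authors' implementation.

# Minimal sketch (not the paper's code): fit a saliency network from a single
# noisy label map per image, keeping pixels whose predictions stay consistent
# across iterations as inliers and down-weighting the rest as suspected noise.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinySaliencyNet(nn.Module):
    """Toy stand-in for a real saliency backbone (assumption, not the paper's)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.conv(x))


def robust_fit(images, noisy_labels, epochs=20, consistency_tau=0.1, lr=1e-3):
    """Alternate between (i) fitting on pixels currently believed clean and
    (ii) flagging pixels whose prediction drifts between iterations as outliers."""
    net = TinySaliencyNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    inlier_mask = torch.ones_like(noisy_labels)        # start by trusting every pixel
    prev_pred = None

    for epoch in range(epochs):
        pred = net(images)
        # Supervise only on pixels currently marked as inliers.
        loss = (F.binary_cross_entropy(pred, noisy_labels, reduction="none")
                * inlier_mask).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

        with torch.no_grad():
            if prev_pred is not None:
                # Pixels with stable predictions across iterations are kept as
                # inliers; unstable ones are treated as suspected noisy labels.
                drift = (pred - prev_pred).abs()
                inlier_mask = (drift < consistency_tau).float()
            prev_pred = pred.detach()

    return net, inlier_mask


if __name__ == "__main__":
    imgs = torch.rand(4, 3, 64, 64)                    # toy image batch
    labels = (torch.rand(4, 1, 64, 64) > 0.5).float()  # one noisy label map per image
    model, clean_mask = robust_fit(imgs, labels)
    print("estimated clean-pixel ratio:", clean_mask.mean().item())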
Description: Date Completed: 29.09.2021
Date Revised: 29.09.2021
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1939-3539
DOI: 10.1109/TPAMI.2020.3046486