Turning a CLIP Model Into a Scene Text Spotter

We exploit the potential of the large-scale Contrastive Language-Image Pretraining (CLIP) model to enhance scene text detection and spotting tasks, transforming it into a robust backbone, FastTCM-CR50. This backbone utilizes visual prompt learning and cross-attention in CLIP to extract image and tex...
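As a rough, illustrative sketch of the cross-attention idea mentioned in the abstract (not the authors' FastTCM-CR50 implementation), the following PyTorch snippet shows image features attending to text-prompt embeddings; the module name, dimensions, and the residual/normalization choices are assumptions for illustration only.

    # Illustrative sketch only: image patch features attend to text-prompt
    # embeddings via standard multi-head cross-attention. Names and dimensions
    # are hypothetical and do not reproduce FastTCM-CR50.
    import torch
    import torch.nn as nn

    class ImageTextCrossAttention(nn.Module):
        def __init__(self, dim: int = 512, num_heads: int = 8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, image_feats: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
            # image_feats: (batch, num_patches, dim) features from a CLIP-style image encoder
            # text_embeds: (batch, num_prompts, dim) embeddings of predefined/learnable prompts
            attended, _ = self.attn(query=image_feats, key=text_embeds, value=text_embeds)
            # Residual connection keeps the original visual features in the output
            return self.norm(image_feats + attended)

    if __name__ == "__main__":
        module = ImageTextCrossAttention()
        img = torch.randn(2, 196, 512)   # dummy image patch tokens
        txt = torch.randn(2, 16, 512)    # dummy text-prompt embeddings
        print(module(img, txt).shape)    # torch.Size([2, 196, 512])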

Bibliographic details

Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence (1979-), vol. 46, no. 9, 20 Sept. 2024, pp. 6040-6054
Main author: Yu, Wenwen (Author)
Other authors: Liu, Yuliang; Zhu, Xingkui; Cao, Haoyu; Sun, Xing; Bai, Xiang
Format: Online article
Language: English
Published: 2024
Collection access: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Journal Article