Unified Approach to Mesh Saliency: Evaluating Textured and Non-Textured Meshes Through VR and Multifunctional Prediction
Published in: IEEE Transactions on Visualization and Computer Graphics. - 1996. - 31(2025), no. 5, 21 May, pp. 3151-3160
Format: Online article
Language: English
Published: 2025
Collection: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article
Abstract: Mesh saliency aims to equip artificial intelligence with the adaptability to highlight regions of a 3D mesh that naturally attract visual attention. Existing advances primarily emphasize the crucial role of geometric shape in determining mesh saliency, but it remains challenging to flexibly sense the distinctive visual appeal introduced by the realism of complex texture patterns. To investigate the interaction between geometric shape and texture features in visual perception, we establish a comprehensive mesh saliency dataset, capturing saliency distributions for identical 3D models under both non-textured and textured conditions. Additionally, we propose a unified saliency prediction model applicable to various mesh types, providing valuable insights for both detailed modeling and realistic rendering applications. The model analyzes the geometric structure of the mesh while seamlessly incorporating texture features into the topological framework, ensuring coherence throughout appearance-enhanced modeling. Through extensive theoretical and empirical validation, our approach not only improves performance across different mesh types, but also demonstrates the model's scalability and generalizability, particularly through cross-validation of various visual features.
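The abstract does not detail the model's architecture, so the snippet below is only an illustrative sketch (not the authors' method) of the general idea it describes: fusing per-vertex geometric descriptors with sampled texture colors and regressing a per-vertex saliency score. All names and feature dimensions (e.g., VertexSaliencyMLP, geo_dim, tex_dim) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class VertexSaliencyMLP(nn.Module):
    """Toy per-vertex regressor over concatenated geometry + texture features.

    Hypothetical sketch: a real unified saliency model would operate on the
    mesh topology (e.g., via graph or mesh convolutions), not independent vertices.
    """
    def __init__(self, geo_dim: int = 6, tex_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(geo_dim + tex_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # saliency score in [0, 1]
        )

    def forward(self, geo_feats: torch.Tensor, tex_feats: torch.Tensor) -> torch.Tensor:
        # geo_feats: (V, geo_dim), e.g. vertex normals + curvature proxies
        # tex_feats: (V, tex_dim), e.g. RGB texture sampled at each vertex
        x = torch.cat([geo_feats, tex_feats], dim=-1)
        return self.net(x).squeeze(-1)  # (V,) per-vertex saliency

if __name__ == "__main__":
    V = 1024                      # number of mesh vertices (toy example)
    geo = torch.randn(V, 6)       # placeholder geometric descriptors
    tex = torch.rand(V, 3)        # placeholder vertex colors
    model = VertexSaliencyMLP()
    saliency = model(geo, tex)
    print(saliency.shape)         # torch.Size([1024])
```

For a non-textured mesh, the same sketch could be reused by feeding a constant or zeroed texture vector, which is one simple way to keep a single model applicable to both mesh types.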
Description: Date Revised 28.04.2025; published: Print-Electronic; Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0506
DOI: 10.1109/TVCG.2025.3549550