User-friendly interactive image segmentation through unified combinatorial user inputs

Detailed description

Bibliographic details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 19 (2010), No. 9, 1 Sept., pp. 2470-2479
First author: Yang, Wenxian (Author)
Other authors: Cai, Jianfei, Zheng, Jianmin, Luo, Jiebo
Format: Online article
Language: English
Published: 2010
Access to the parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article, Research Support, Non-U.S. Gov't
Description
Abstract: One weakness of existing interactive image segmentation algorithms is the lack of more intelligent ways to understand the intention of user inputs. In this paper, we advocate the use of multiple intuitive user inputs to better reflect a user's intention. In particular, we propose a constrained random walks algorithm that facilitates the use of three types of user inputs: 1) foreground and background seed input, 2) soft constraint input, and 3) hard constraint input, as well as their combinations. The foreground and background seed input allows a user to draw strokes to specify foreground and background seeds. The soft constraint input allows a user to draw strokes to indicate the region that the boundary should pass through. The hard constraint input allows a user to specify the pixels that the boundary must align with. Our proposed method supports all three types of user inputs in one coherent computational framework, consisting of a constrained random walks algorithm and a local editing algorithm, which allows more precise contour refinement. Experimental results on two benchmark data sets show that the proposed framework is highly effective and can quickly and accurately segment a wide variety of natural images with ease.
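The record does not reproduce the paper's formulation. As a rough, hedged illustration of the seed-input case only, the sketch below uses scikit-image's standard random walker, with foreground and background strokes encoded as labeled marker pixels. The sample image and the threshold values used to stand in for user strokes are illustrative assumptions, and the soft and hard boundary constraints described in the abstract are not modeled here.

# Illustrative sketch only: a standard seed-based random walker via scikit-image,
# not the authors' constrained random walks with soft/hard boundary constraints.
import numpy as np
from skimage import data, img_as_float
from skimage.segmentation import random_walker

img = img_as_float(data.coins())               # sample grayscale image (assumption)
markers = np.zeros(img.shape, dtype=np.uint8)  # 0 = unlabeled pixel
markers[img < 0.3] = 1                         # stand-in for background seed strokes
markers[img > 0.7] = 2                         # stand-in for foreground seed strokes

# Each unlabeled pixel receives the label whose seeds a random walk,
# biased by image gradients, is most likely to reach first.
labels = random_walker(img, markers, beta=130)

In this baseline, the seeds act as boundary conditions of a combinatorial Dirichlet problem; the paper extends this idea so that soft and hard constraint strokes additionally restrict where the resulting contour may pass.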
Description: Date Completed 29.11.2010
Date Revised 18.08.2010
Published: Print
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2010.2048611