Joint Image Deconvolution and Separation Using Mixed Dictionaries
Published in: | IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 28(2019), 8, 07 Aug., pages 3936-3945 |
Author: | |
Other authors: | , , , |
Format: | Online article |
Language: | English |
Published: | 2019 |
Access to the parent work: | IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |
Keywords: | Journal Article |
Abstract: | The task of separating an image into distinct components that represent different features plays an important role in many applications. Traditionally, such separation techniques are applied once the image in question has been reconstructed from measured data. We propose an efficient iterative algorithm in which reconstruction is performed jointly with the task of separation. A key assumption is that the image components have different sparse representations. The algorithm is based on a scheme that minimizes a functional composed of a data discrepancy term and the l1-norm of the coefficients of the different components with respect to their corresponding dictionaries. The performance is demonstrated for joint 2D deconvolution and separation into curve- and point-like components, and tests are performed on synthetic data as well as experimental stimulated emission depletion and confocal microscopy data. Experiments show that such a joint approach outperforms a sequential approach, where one first deconvolves the data and then applies image separation. |
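The functional described in the abstract, a data discrepancy term plus l1-norms of the component coefficients in their respective dictionaries, can be minimized with a standard proximal gradient (ISTA) scheme. The sketch below is a minimal 1D illustration under assumed ingredients (a Gaussian blur matrix `H`, random dictionaries `D1`, `D2`, and regularization weight `lam` are all hypothetical choices, not the paper's setup or implementation):

```python
import numpy as np

# Illustrative joint deconvolution + separation via ISTA.
# Model: y = H (D1 a1 + D2 a2) + noise, with a1, a2 sparse.
# Minimize 0.5*||H(D1 a1 + D2 a2) - y||^2 + lam*(||a1||_1 + ||a2||_1).

rng = np.random.default_rng(0)
n, k = 32, 48

# Assumed blur operator: row-normalized Gaussian convolution matrix.
H = np.exp(-0.5 * (np.subtract.outer(np.arange(n), np.arange(n)) / 2.0) ** 2)
H /= H.sum(axis=1, keepdims=True)

# Two assumed (random) synthesis dictionaries, one per component.
D1 = rng.standard_normal((n, k)) / np.sqrt(n)
D2 = rng.standard_normal((n, k)) / np.sqrt(n)

# Synthetic ground truth: sparse codes, blurred and noisy measurements.
a1_true = np.zeros(k); a1_true[[3, 10]] = [1.0, -0.7]
a2_true = np.zeros(k); a2_true[[20, 40]] = [0.8, 0.5]
y = H @ (D1 @ a1_true + D2 @ a2_true) + 0.01 * rng.standard_normal(n)

lam = 0.01
A = H @ np.hstack([D1, D2])           # stacked forward operator on [a1; a2]
t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L (L = spectral norm^2)

def soft(v, thr):
    """Soft-thresholding: proximal operator of the l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def objective(a):
    return 0.5 * np.sum((A @ a - y) ** 2) + lam * np.sum(np.abs(a))

a = np.zeros(2 * k)
for _ in range(300):
    # Gradient step on the data term, then shrinkage on both coefficient blocks.
    a = soft(a - t * (A.T @ (A @ a - y)), t * lam)

a1, a2 = a[:k], a[k:]  # separated components: x1 = D1 @ a1, x2 = D2 @ a2
```

The point of the joint formulation is visible in the stacked operator `A`: deblurring and separation share one objective, rather than deconvolving first and splitting the result afterwards.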
Description: | Date Revised: 24.06.2019; Published: Print-Electronic; Citation Status: PubMed-not-MEDLINE |
ISSN: | 1941-0042 |
DOI: | 10.1109/TIP.2019.2903316 |