Feature Selection Based on Intrusive Outliers Rather Than All Instances

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 33(2024), dated 13, pages 809-824
Main Author: Yuan, Lixin (Author)
Other Authors: Mei, Cheng; Wang, Wenhai; Lu, Tong
Format: Online Article
Language: English
Published: 2024
Access to Parent Work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: Feature selection (FS) has recently attracted considerable attention in many fields. Highly overlapping classes and skewed distributions of data within classes have been found in various classification tasks. Most existing FS methods are all-instance-based, which ignores the significant differences in characteristics between particular outliers and the main body of the class, causing confusion for classifiers. In this paper, we propose a novel supervised FS method, Intrusive Outliers-based Feature Selection (IOFS), to identify which kinds of outliers lead to misclassification and to exploit the characteristics of such outliers. In order to accurately identify the intrusive outliers (IOs), we provide a density-mean center algorithm to obtain an appropriate representative of a class. A special distance threshold is given to obtain the candidates for IOs. Combined with several metrics, mathematical formulations are provided to evaluate the overlapping degree of the intrusive class pairs. Features with high overlapping degrees are assigned low rankings in the IOFS method. An extension of IOFS based on a small number of extreme IOs, called E-IOFS, is also proposed. Three theoretical proofs are provided as the essential theoretical basis of IOFS. Experiments comparing against various state-of-the-art methods on eleven benchmark datasets show that IOFS is rational and effective, especially on datasets with more highly overlapping classes, and E-IOFS almost always outperforms IOFS.
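The abstract describes the pipeline only at a high level: a density-based representative per class, a distance threshold that selects candidate intrusive outliers, and per-feature overlap scores that drive the ranking. The sketch below is a minimal, hypothetical reading of that pipeline, not the paper's actual IOFS formulation; the density weighting, the quantile-based radius, the per-feature overlap measure, and the function names (density_mean_center, rank_features) are all illustrative assumptions.

```python
import numpy as np

def density_mean_center(X_cls, k=5):
    # Class representative: mean of the class weighted by local density,
    # estimated here as the inverse mean distance to the k nearest
    # within-class neighbours (an assumed stand-in for the paper's
    # density-mean center algorithm).
    d = np.linalg.norm(X_cls[:, None, :] - X_cls[None, :, :], axis=2)
    knn_dist = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    w = 1.0 / (knn_dist + 1e-12)
    return (w[:, None] * X_cls).sum(axis=0) / w.sum()

def rank_features(X, y, k=5, quantile=0.9):
    classes = np.unique(y)
    centers = {c: density_mean_center(X[y == c], k) for c in classes}
    # Distance threshold per class: a high quantile of within-class
    # distances to the representative (an illustrative choice).
    radius = {c: np.quantile(np.linalg.norm(X[y == c] - centers[c], axis=1),
                             quantile)
              for c in classes}
    separation = np.zeros(X.shape[1])
    for c in classes:
        others = X[y != c]
        dist = np.linalg.norm(others - centers[c], axis=1)
        intruders = others[dist <= radius[c]]  # candidate intrusive outliers
        if len(intruders) == 0:
            continue
        # Per-feature separation of the intruders from the class centre,
        # normalised by the class spread along that feature; small values
        # indicate heavy overlap on that feature.
        spread = X[y == c].std(axis=0) + 1e-12
        separation += (np.abs(intruders - centers[c]) / spread).mean(axis=0)
    # Features along which intruders stay far from the centres (low overlap)
    # are ranked first; heavily overlapping features drop to the bottom.
    return np.argsort(-separation)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))
    y = rng.integers(0, 3, size=300)
    print(rank_features(X, y))  # feature indices, least overlapping first
```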
Description: Date Revised 22.01.2024
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2023.3348992