On choosing mixture components via non-local priors


Bibliographic Details
Published in: Journal of the Royal Statistical Society. Series B (Statistical Methodology). - Blackwell Publishers. - 81(2019), 5, pages 809-837
Main Author: Fúquene, Jairo (author)
Other Authors: Steel, Mark; Rossell, David
Format: Online article
Language: English
Published: 2019
Access to the parent work: Journal of the Royal Statistical Society. Series B (Statistical Methodology)
Description
Summary: Choosing the number of mixture components remains an elusive challenge. Model selection criteria can be either overly liberal or conservative and return poorly separated components of limited practical use. We formalize non-local priors (NLPs) for mixtures and show how they lead to well-separated components with non-negligible weight, interpretable as distinct subpopulations. We also propose an estimator for posterior model probabilities under local priors and NLPs, showing that Bayes factors are ratios of posterior-to-prior empty-cluster probabilities. The estimator is widely applicable and helps to set thresholds to drop unoccupied components in overfitted mixtures. We suggest default prior parameters based on multimodality for Normal mixtures and minimal informativeness for categorical outcomes. We characterize theoretically the NLP-induced sparsity and derive tractable expressions and algorithms. We fully develop Normal, binomial and product-binomial mixtures, but the theory, computation and principles hold more generally. We observed a serious lack of sensitivity of the Bayesian information criterion, insufficient parsimony of the Akaike information criterion and a local prior, and a mixed behaviour of the singular Bayesian information criterion. We also considered overfitted mixtures; their performance was competitive but depended on tuning parameters. Under our default prior elicitation, NLPs offered a good compromise between sparsity and power to detect meaningfully separated components.
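To make the abstract's two central claims concrete, here is a schematic rendering in our own notation (a sketch based on the general non-local-prior literature, not formulas quoted from the paper). An NLP for a k-component mixture with parameters \vartheta = (\theta_1, \dots, \theta_k, w_1, \dots, w_k) multiplies a conventional local prior p^L by a penalty d that vanishes on the boundary where two components coincide or a component becomes empty:

    p(\vartheta) \propto d(\vartheta)\, p^L(\vartheta), \qquad d(\vartheta) = 0 \ \text{whenever}\ \theta_j = \theta_l \ (j \neq l)\ \text{or}\ w_j = 0.

This vanishing penalty is what enforces well-separated components with non-negligible weight. The Bayes factor statement then says, schematically, that comparing the k-component model M_k against a smaller model reduces to a ratio of posterior-to-prior empty-cluster probabilities,

    B(y) = \frac{P(\text{some component is empty} \mid y, M_k)}{P(\text{some component is empty} \mid M_k)},

suggesting an estimator based on how often components are left unoccupied in posterior draws; the precise indexing, construction and regularity conditions are given in the paper.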
ISSN: 1467-9868