AutoNet-Generated Deep Layer-Wise Convex Networks for ECG Classification

The design of neural networks typically involves trial-and-error, a time-consuming process for obtaining an optimal architecture, even for experienced researchers. Additionally, it is widely accepted that loss functions of deep neural networks are generally non-convex with respect to the parameters...

Detailed description

Bibliographic details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), 10, dated 09 Sept., pages 6542-6558
First author: Shen, Yanting (author)
Other authors: Lu, Lei, Zhu, Tingting, Wang, Xinshao, Clifton, Lei, Chen, Zhengming, Clarke, Robert, Clifton, David A
Format: Online article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Keywords: Journal Article
LEADER 01000caa a22002652 4500
001 NLM370023080
003 DE-627
005 20240909233440.0
007 cr uuu---uuuuu
008 240323s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3378843  |2 doi 
028 5 2 |a pubmed24n1528.xml 
035 |a (DE-627)NLM370023080 
035 |a (NLM)38512733 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Shen, Yanting  |e verfasserin  |4 aut 
245 1 0 |a AutoNet-Generated Deep Layer-Wise Convex Networks for ECG Classification 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 09.09.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a The design of neural networks typically involves trial-and-error, a time-consuming process for obtaining an optimal architecture, even for experienced researchers. Additionally, it is widely accepted that loss functions of deep neural networks are generally non-convex with respect to the parameters to be optimised. We propose the Layer-wise Convex Theorem to ensure that the loss is convex with respect to the parameters of a given layer, achieved by constraining each layer to be an overdetermined system of non-linear equations. Based on this theorem, we developed an end-to-end algorithm (the AutoNet) to automatically generate layer-wise convex networks (LCNs) for any given training set. We then demonstrate the performance of the AutoNet-generated LCNs (AutoNet-LCNs) compared to state-of-the-art models on three electrocardiogram (ECG) classification benchmark datasets, with further validation on two non-ECG benchmark datasets for more general tasks. The AutoNet-LCN was able to find networks customised for each dataset without manual fine-tuning in under 2 GPU-hours, and the resulting networks outperformed the state-of-the-art models with fewer than 5% of the parameters on all five benchmark datasets. The efficiency and robustness of the AutoNet-LCN markedly reduce model discovery costs and enable efficient training of deep learning models in resource-constrained settings.
650 4 |a Journal Article 
700 1 |a Lu, Lei  |e verfasserin  |4 aut 
700 1 |a Zhu, Tingting  |e verfasserin  |4 aut 
700 1 |a Wang, Xinshao  |e verfasserin  |4 aut 
700 1 |a Clifton, Lei  |e verfasserin  |4 aut 
700 1 |a Chen, Zhengming  |e verfasserin  |4 aut 
700 1 |a Clarke, Robert  |e verfasserin  |4 aut 
700 1 |a Clifton, David A  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 10 vom: 09. Sept., Seite 6542-6558  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:46  |g year:2024  |g number:10  |g day:09  |g month:09  |g pages:6542-6558 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3378843  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 10  |b 09  |c 09  |h 6542-6558
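The abstract above describes the constraint behind the paper's layer-wise convex networks: each layer is restricted to form an overdetermined system of non-linear equations, which makes the loss convex with respect to that layer's parameters. As a rough illustration of what such a constraint could look like for dense layers, the Python sketch below checks one plausible overdetermination condition (equations contributed by the training set at a layer at least the number of that layer's weights and biases) and uses it to propose layer widths. The condition, the function names, and the width-shrinking heuristic are assumptions made here for illustration; they are not taken from the paper and do not reproduce the AutoNet search procedure.

# Illustrative sketch only -- not the authors' AutoNet implementation.
# Assumption: "overdetermined" is read as (equations contributed by the
# training set at a layer) >= (trainable parameters of that layer), i.e.
# n_samples * d_out >= (d_in + 1) * d_out for a dense layer with bias.

def layer_is_overdetermined(n_samples: int, d_in: int, d_out: int) -> bool:
    """Check the hypothesised layer-wise overdetermination condition."""
    n_equations = n_samples * d_out   # one non-linear equation per sample and output unit
    n_unknowns = (d_in + 1) * d_out   # weights plus biases of one dense layer
    return n_equations >= n_unknowns

def propose_layer_widths(n_samples, d_input, max_layers, shrink=0.5):
    """Greedily stack dense layers while every layer stays overdetermined
    (a hypothetical heuristic, not the AutoNet search procedure)."""
    widths, d_in = [], d_input
    for _ in range(max_layers):
        d_out = max(1, int(d_in * shrink))
        if not layer_is_overdetermined(n_samples, d_in, d_out):
            break  # one more layer would make the system underdetermined
        widths.append(d_out)
        d_in = d_out
    return widths

if __name__ == "__main__":
    # Hypothetical example: 20,000 ECG segments of 256 features each.
    print(propose_layer_widths(n_samples=20_000, d_input=256, max_layers=4))
    # -> [128, 64, 32, 16] under the assumed condition

In this reading, overdetermination for a dense layer reduces to n_samples >= d_in + 1, so for typical ECG training set sizes the check constrains depth and width only loosely; the paper itself should be consulted for the precise formulation of the Layer-wise Convex Theorem and the AutoNet generation procedure.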