Tweaking Deep Neural Networks
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - Vol. 44 (2022), No. 9, 1 Sept., pp. 5715-5728
Author:
Other authors:
Format: Online article
Language: English
Published: 2022
Parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article; Research Support, Non-U.S. Gov't
Abstract: Deep neural networks are trained so as to achieve a kind of maximal overall accuracy through a learning process on given training data. It is therefore difficult to adjust them to improve the accuracies of specific problematic classes, or of classes of interest that may be valuable to some users or applications. To address this issue, we propose the synaptic join method, which tweaks a neural network by adding certain extra synapses that cross layers, from the intermediate hidden layers to the output layer, and, if necessary, additionally training only these synapses. To select the most effective synapses, the synaptic join method evaluates the performance of all possible candidate synapses between the hidden neurons and the output neurons based on the distribution of all possible proper weights. The experimental results show that the proposed method can effectively improve the accuracies of specific classes in a controllable way.
Description: Date Completed 08.08.2022; Date Revised 14.09.2022; published: Print-Electronic; Citation Status: MEDLINE
ISSN: 1939-3539
DOI: 10.1109/TPAMI.2021.3079511
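The abstract's core idea, adding cross-layer synapses from a hidden layer directly to the output layer and training only those added weights, can be illustrated with a minimal sketch. This is not the authors' implementation: the tiny frozen network, toy data, and plain gradient-descent loop below are all illustrative assumptions, and the sketch joins a whole hidden layer to the output rather than selecting individual candidate synapses as the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "pre-trained" 2-layer network, kept frozen: x -> h -> logits.
W1 = rng.normal(size=(4, 8))   # input (4) -> hidden (8), frozen
W2 = rng.normal(size=(8, 3))   # hidden (8) -> output (3), frozen

def forward(X, W_join):
    """Original path plus the added cross-layer synapses."""
    h = np.maximum(0.0, X @ W1)        # ReLU hidden activations
    logits = h @ W2 + h @ W_join       # frozen path + joined synapses
    return h, logits

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_nll(logits, y):
    p = softmax(logits)
    return -np.log(p[np.arange(len(y)), y]).mean()

# Toy labelled data standing in for examples of the classes to improve.
X = rng.normal(size=(32, 4))
y = rng.integers(0, 3, size=32)

W_join = np.zeros((8, 3))              # added synapses start at zero,
_, logits = forward(X, W_join)         # so the network is unchanged at first
initial_loss = mean_nll(logits, y)

lr = 0.05
for _ in range(300):
    h, logits = forward(X, W_join)
    grad_logits = softmax(logits)      # cross-entropy gradient: p - onehot(y)
    grad_logits[np.arange(len(y)), y] -= 1.0
    grad_logits /= len(y)
    W_join -= lr * (h.T @ grad_logits) # update ONLY the added synapses

_, logits = forward(X, W_join)
final_loss = mean_nll(logits, y)
```

Because the added weights start at zero, the network's behavior is initially unchanged; training then moves only `W_join`, which mirrors the paper's point that the original network is left intact while the tweak is applied through the new synapses.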