Tweaking Deep Neural Networks

Cited 2 times in Web of Science; cited 0 times in Scopus
Deep neural networks are trained to maximize overall accuracy on the given training data. It is therefore difficult to adjust them afterward to improve the accuracy of specific problematic classes, or of classes of particular interest to certain users or applications. To address this issue, we propose the synaptic join method, which tweaks a trained network by adding synapses that cross layers, connecting intermediate hidden layers directly to the output layer, and then training only these added synapses as needed. To select the most effective synapses, the synaptic join method evaluates all candidate synapses between hidden neurons and output neurons based on the distribution of their possible proper weights. Experimental results show that the proposed method can effectively improve the accuracy of specific classes in a controllable way.
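The core idea in the abstract can be illustrated with a minimal sketch (this is an assumption-laden toy, not the authors' implementation): freeze a trained network, add extra synapses that connect a hidden layer directly to the output layer, and train only those new weights. The network sizes, toy data, and learning rate below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" base network (hypothetical sizes):
# input (4) -> hidden (8, ReLU) -> output (3, logits).
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 3))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(X, W_tweak):
    h = np.maximum(X @ W1, 0.0)        # frozen hidden activations
    logits = h @ W2 + h @ W_tweak      # added synapses: hidden -> output
    return h, logits

# Toy classification data (purely illustrative).
X = rng.normal(size=(64, 4))
y = rng.integers(0, 3, size=64)
Y = np.eye(3)[y]

def loss(logits):
    return -np.mean(np.log(softmax(logits)[np.arange(len(y)), y]))

# Train ONLY the added synapses by gradient descent; W1 and W2 stay fixed.
W_tweak = np.zeros((8, 3))
lr = 0.1
_, logits = forward(X, W_tweak)
loss_before = loss(logits)
for _ in range(200):
    h, logits = forward(X, W_tweak)
    grad_logits = (softmax(logits) - Y) / len(X)
    W_tweak -= lr * (h.T @ grad_logits)   # only the tweak weights move
_, logits = forward(X, W_tweak)
loss_after = loss(logits)
print(loss_before, loss_after)
```

Because the base weights never change, the original behavior of the network is preserved up to the contribution of the added synapses, which is the sense in which the tweak is "controllable". The paper's actual method additionally scores candidate synapses by the distribution of their possible proper weights before adding them, which this sketch does not attempt.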
Publisher
IEEE COMPUTER SOC
Issue Date
2022-09
Language
English
Article Type
Article
Citation

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.44, no.9, pp.5715 - 5728

ISSN
0162-8828
DOI
10.1109/TPAMI.2021.3079511
URI
http://hdl.handle.net/10203/298094
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
