DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lim, Heechul | ko |
dc.contributor.author | Chon, Kang-Wook | ko |
dc.contributor.author | Kim, Min-Soo | ko |
dc.date.accessioned | 2023-05-13T05:02:45Z | - |
dc.date.available | 2023-05-13T05:02:45Z | - |
dc.date.created | 2023-05-12 | - |
dc.date.issued | 2023-03 | - |
dc.identifier.citation | IEEE ACCESS, v.11, pp.34297 - 34308 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | http://hdl.handle.net/10203/306816 | - |
dc.description.abstract | Despite recent advances in deep neural networks (DNNs), multi-task learning has not been able to exploit DNNs fully. Current DNN design for a single task requires considerable skill in fixing many architecture parameters a priori, before training begins, and extending this approach to multi-task learning is even more challenging. Inspired by findings from neuroscience, we propose a unified DNN modeling framework called ConnectomeNet that encompasses the best principles of contemporary DNN designs and unifies them with transfer, curriculum, and adaptive structural learning, all in the context of multi-task learning. Specifically, ConnectomeNet iteratively assembles connectome neuron units into a high-level topology represented as a general directed acyclic graph. As a result, ConnectomeNet enables non-trivial automatic sharing of neurons across multiple tasks and learns to adapt its topology economically to a new task. Extensive experiments, including an ablation study, show that ConnectomeNet outperforms state-of-the-art multi-task learning methods on measures such as the degree of catastrophic forgetting under sequential learning. In normalized accuracy on that measure, our method reaches 100%, surpassing mean-IMM (89.0%) and DEN (99.97%). | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | ConnectomeNet: A Unified Deep Neural Network Modeling Framework for Multi-Task Learning | - |
dc.type | Article | - |
dc.identifier.wosid | 000970684700001 | - |
dc.identifier.scopusid | 2-s2.0-85151545048 | - |
dc.type.rims | ART | - |
dc.citation.volume | 11 | - |
dc.citation.beginningpage | 34297 | - |
dc.citation.endingpage | 34308 | - |
dc.citation.publicationname | IEEE ACCESS | - |
dc.identifier.doi | 10.1109/ACCESS.2023.3258975 | - |
dc.contributor.localauthor | Kim, Min-Soo | - |
dc.contributor.nonIdAuthor | Lim, Heechul | - |
dc.contributor.nonIdAuthor | Chon, Kang-Wook | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Adaptive learning | - |
dc.subject.keywordAuthor | dynamic network expansion | - |
dc.subject.keywordAuthor | multi-task learning | - |