(An) infinite-width analysis on the Jacobian-regularised training of a neural network

DC Field: Value
dc.contributor.advisor: 양홍석
dc.contributor.author: Kim, Taeyoung
dc.contributor.author: 김태영
dc.date.accessioned: 2024-07-25T19:31:21Z
dc.date.available: 2024-07-25T19:31:21Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045942&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/320710
dc.description: Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Computing, 2023.8, [iii, 68 p.]
dc.description.abstract: In this thesis, we extend the Neural Tangent Kernel (NTK) analysis to incorporate Jacobian regularisation, which adds an extra term to the training objective. This regulariser forces the neural network to be flat around its training data, making the network robust against adversarial attacks. We first show that the Jacobian of a neural network converges, jointly with the output, to a Gaussian process as the widths of the network tend to infinity, and we characterise the kernel of this GP. We then show that training under Jacobian regularisation is described by the inner products between the gradients of the outputs and of the Jacobians with respect to the parameters; we name this inner product the Jacobian Neural Tangent Kernel (JNTK) due to its similarity to the NTK in the literature. We show that the key properties of the NTK also hold for the JNTK: at initialisation the JNTK converges to a deterministic asymptotic counterpart, and it stays constant during training if the width of the network is sufficiently large. We finally prove that the Jacobian-regularised training dynamics coincide with kernel regression under the JNTK. Using this correspondence to kernel regression, we analyse how Jacobian-regularised training enhances robustness, and when this is possible.
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: 신경 기울기 커널; 야코비안 정칙화; 무한 너비 인공 신경망; 심층 학습 이론
dc.subject: Neural tangent kernel; Jacobian regularisation; Infinite width neural network; Deep learning theory
dc.title: (An) infinite-width analysis on the Jacobian-regularised training of a neural network
dc.title.alternative: 무한 너비 인공 신경망의 야코비안과 로버스트한 훈련 (Jacobian and robust training of infinite-width neural networks)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 : 전산학부 (KAIST, School of Computing)
dc.contributor.alternativeauthor: Yang, Hongseok
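The Jacobian-regularised objective described in the abstract can be illustrated with a minimal sketch: the usual loss plus a penalty on the norm of the network's input Jacobian at each training point, which encourages flatness around the data. The tiny one-hidden-layer ReLU network and all names below are hypothetical, for illustration only; the thesis itself analyses this objective for general wide networks.

```python
import numpy as np

def jacobian_regularised_loss(W, v, X, y, lam=0.1):
    """MSE loss plus a Jacobian penalty for a one-hidden-layer ReLU net.

    f(x) = v @ relu(W @ x); the penalty ||df/dx||^2, averaged over the
    training inputs, forces the network to be flat around the data.
    """
    H = np.maximum(W @ X.T, 0.0)        # hidden activations, (width, n)
    preds = v @ H                       # scalar outputs, (n,)
    mse = np.mean((preds - y) ** 2)

    # Input Jacobian df/dx = (v * relu'(Wx)) @ W for each example.
    mask = (W @ X.T > 0).astype(float)  # ReLU derivative, (width, n)
    jac_norm_sq = 0.0
    for i in range(X.shape[0]):
        J = (v * mask[:, i]) @ W        # gradient of the output w.r.t. x_i
        jac_norm_sq += J @ J
    return mse + lam * jac_norm_sq / X.shape[0]

# Usage with random data.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3))
v = rng.standard_normal(8)
X = rng.standard_normal((5, 3))
y = rng.standard_normal(5)
loss = jacobian_regularised_loss(W, v, X, y)
```

Setting `lam=0` recovers plain MSE training; the thesis studies the dynamics of gradient descent on the regularised objective in the infinite-width limit.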
Appears in Collection
CS-Theses_Master(석사논문)