Port-Hamiltonian Approach to Neural Network Training

Cited 5 times in Web of Science · Cited 2 times in Scopus
Abstract
Neural networks are discrete entities: they are subdivided into discrete layers and parametrized by weights that are iteratively optimized via difference equations. Recent work proposes networks whose layer outputs are no longer quantized but are instead solutions of an ordinary differential equation (ODE); however, these networks are still optimized via discrete methods (e.g., gradient descent). In this paper, we explore a different direction: we propose a novel framework for learning in which the parameters themselves are solutions of ODEs. By viewing the optimization process as the evolution of a port-Hamiltonian system, we can ensure convergence to a minimum of the objective function. Numerical experiments demonstrate the validity and effectiveness of the proposed methods.
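The core idea sketched in the abstract can be illustrated with a toy example. The snippet below is a minimal, hypothetical sketch (not the paper's exact formulation): the model parameters q are treated as the "position" of a damped Hamiltonian system with momentum p and Hamiltonian H(q, p) = L(q) + ½‖p‖², where L is the training loss. The dissipation term r·p drains energy along trajectories, so H (and hence L) decreases and the flow settles at a minimum of L. The loss `L`, damping rate `r`, and integrator choice are all illustrative assumptions.

```python
import numpy as np

def L(q):
    # Toy quadratic loss L(q) = 0.5 * ||q||^2, minimized at q = 0.
    return 0.5 * float(q @ q)

def grad_L(q):
    # Gradient of the toy loss; for the quadratic above it is q itself.
    return q

def port_hamiltonian_flow(q0, steps=2000, dt=0.01, r=1.0):
    """Explicit-Euler integration of the damped Hamiltonian ODE
        dq/dt =  dH/dp = p
        dp/dt = -dH/dq - r*p = -grad L(q) - r*p
    The damping r*p makes the system dissipative, so the energy
    H(q, p) = L(q) + 0.5*||p||^2 decays along the trajectory.
    """
    q = np.asarray(q0, dtype=float)
    p = np.zeros_like(q)
    for _ in range(steps):
        q = q + dt * p
        p = p - dt * (grad_L(q) + r * p)
    return q

q_star = port_hamiltonian_flow(np.array([3.0, -2.0]))
print(L(q_star))  # driven close to the minimum value 0
```

In a training context, `grad_L` would be the back-propagated gradient of the network's loss, and the continuous flow would be realized by a numerical ODE solver rather than plain Euler steps; this sketch only shows why the dissipative structure forces the loss downward.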
Publisher
IEEE
Issue Date
2019-12-12
Language
English
Citation
58th IEEE Conference on Decision and Control (CDC 2019), pp. 6799-6806
DOI
10.1109/CDC40024.2019.9030017
URI
http://hdl.handle.net/10203/280196
Appears in Collection
IE-Conference Papers (conference papers)
Files in This Item
There are no files associated with this item.
