Edge-Labeling Graph Neural Network for Few-shot Learning

Cited 329 times in Web of Science · Cited 235 times in Scopus
DC metadata (Field: Value [Language])
dc.contributor.author: Kim, Jongmin [ko]
dc.contributor.author: Kim, Saesup [ko]
dc.contributor.author: Kim, Sungwoong [ko]
dc.contributor.author: Yoo, Chang-Dong [ko]
dc.date.accessioned: 2019-11-28T07:20:10Z
dc.date.available: 2019-11-28T07:20:10Z
dc.date.created: 2019-11-27
dc.date.issued: 2019-06-18
dc.identifier.citation: IEEE Conference on Computer Vision and Pattern Recognition, pp.11 - 20
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10203/268687
dc.description.abstract: In this paper, we propose a novel edge-labeling graph neural network (EGNN), which adapts a deep neural network on an edge-labeling graph, for few-shot learning. Previous graph neural network (GNN) approaches to few-shot learning have been based on the node-labeling framework, which implicitly models intra-cluster similarity and inter-cluster dissimilarity. In contrast, the proposed EGNN learns to predict edge-labels rather than node-labels on the graph, which enables an explicit clustering to evolve by iteratively updating the edge-labels with direct exploitation of both intra-cluster similarity and inter-cluster dissimilarity. It is also well suited to operating on varying numbers of classes without retraining, and can easily be extended to perform transductive inference. The parameters of the EGNN are learned by episodic training with an edge-labeling loss to obtain a model that generalizes well to unseen low-data problems. On both supervised and semi-supervised few-shot image classification tasks with two benchmark datasets, the proposed EGNN significantly improves performance over existing GNNs.
dc.language: English
dc.publisher: Computer Vision Foundation / IEEE Computer Society
dc.title: Edge-Labeling Graph Neural Network for Few-shot Learning
dc.type: Conference
dc.identifier.wosid: 000529484000002
dc.identifier.scopusid: 2-s2.0-85078772649
dc.type.rims: CONF
dc.citation.beginningpage: 11
dc.citation.endingpage: 20
dc.citation.publicationname: IEEE Conference on Computer Vision and Pattern Recognition
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Long Beach, CA
dc.identifier.doi: 10.1109/CVPR.2019.00010
dc.contributor.localauthor: Yoo, Chang-Dong
dc.contributor.nonIdAuthor: Kim, Saesup
dc.contributor.nonIdAuthor: Kim, Sungwoong
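The iterative edge-label update described in the abstract can be sketched in a toy form as follows. This is only an illustration of the general idea, not the paper's method: the actual EGNN uses learned neural networks for both the edge and node updates, whereas this sketch substitutes a hand-picked Gaussian similarity kernel, and all names (edge_update, node_update) are invented for the example.

```python
import numpy as np

def edge_update(nodes, edges):
    """Refresh edge labels from pairwise node (dis)similarity.

    nodes: (N, D) node features; edges: (N, N) edge labels in [0, 1].
    A Gaussian kernel stands in for the paper's learned similarity net.
    """
    d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    sim = np.exp(-d ** 2)                   # high for similar node pairs
    new_edges = edges * sim                 # reinforce consistent edges
    # Row-normalize so each node's outgoing edge labels form an affinity
    # distribution (intra-cluster edges grow, inter-cluster edges shrink).
    new_edges /= new_edges.sum(axis=1, keepdims=True)
    return new_edges

def node_update(nodes, edges):
    """Aggregate neighbor features weighted by the current edge labels."""
    return edges @ nodes

# Toy episode: 4 nodes forming two obvious clusters.
nodes = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
edges = np.full((4, 4), 0.25)               # uniform initial edge labels
for _ in range(3):                          # alternate edge / node updates
    edges = edge_update(nodes, edges)
    nodes = node_update(nodes, edges)
```

After a few alternating updates, edges within a cluster carry much higher labels than edges across clusters, which is the explicit-clustering behavior the abstract attributes to edge-labeling.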
Appears in Collection
EE-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
