Hyperedge prediction without assumptions on negative examples

DC Field: Value (Language)
dc.contributor.advisor: 신기정
dc.contributor.author: Hwang, Hyunjin
dc.contributor.author: 황현진
dc.date.accessioned: 2024-07-25T19:30:25Z
dc.date.available: 2024-07-25T19:30:25Z
dc.date.issued: 2022
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1044983&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/320438
dc.description: Master's thesis - KAIST : School of Electrical Engineering, 2022.2, [iii, 19 p.]
dc.description.abstract: Hypergraphs can express higher-order relations that graphs cannot, by allowing multiple nodes in a single hyperedge. In the real world, hypergraphs naturally represent complicated relations such as co-authorship, co-purchases, and chemical reactions. In these domains, predicting new hyperedges in a hypergraph is an essential task with direct applications such as item recommendation and proposing probable sets of chemicals. However, hyperedge prediction is also challenging: the hyperedge sample space is exponential in the number of nodes, so in practice it is nearly impossible to check all possible hyperedges to find the most probable ones. Existing methods predefine a candidate set of hyperedges composed of real hyperedges (i.e., positive samples) and fake hyperedges (i.e., negative samples) drawn from the sample space, where the sampling procedure follows a heuristic rule defined under assumptions about negative samples. Nevertheless, we found that the negative samples used in training affect the model's capability, and that performance also varies greatly depending on how negative samples are drawn for the test set. We propose an adversarial training method that requires no assumptions about negative samples during training; it can be applied to any recent neural network model for hypergraphs. Additionally, we add a memory bank to the model to stabilize training. We empirically show that our training method performs best on average across three test sets, each generated by a different negative sampling method. Throughout this thesis, we examine the effect of the generator and the memory bank and analyze the outputs produced by the generator.
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: 딥러닝; 그래프 마이닝; 예측; 적대적 학습
dc.subject: Deep learning; Graph mining; Prediction; Adversarial training
dc.title: Hyperedge prediction without assumptions on negative examples
dc.title.alternative: 네거티브 예시에 대한 가정이 없는 일반적인 하이퍼엣지 예측
dc.type: Thesis(Master)
dc.identifier.CNRN: 325007
dc.description.department: KAIST : School of Electrical Engineering
dc.contributor.alternativeauthor: Shin, Kijung
Appears in Collection
EE-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
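The heuristic negative sampling that the abstract criticizes can be illustrated with a minimal sketch. This is not code from the thesis; the function name and interface are hypothetical. It implements one common assumption-laden heuristic: drawing random node sets whose sizes match those of real hyperedges and rejecting any set that is already a positive example.

```python
import random

def sized_negative_samples(hyperedges, num_nodes, k, seed=0):
    """Sample k 'fake' hyperedges (negative samples) by drawing random
    node sets whose sizes match randomly chosen real hyperedges.
    A common heuristic; the thesis argues that results depend heavily
    on which such heuristic is used for training and testing."""
    rng = random.Random(seed)
    positives = {frozenset(e) for e in hyperedges}
    negatives = set()
    while len(negatives) < k:
        # Match the size distribution of the real hyperedges.
        size = len(rng.choice(hyperedges))
        candidate = frozenset(rng.sample(range(num_nodes), size))
        if candidate not in positives:  # reject real hyperedges
            negatives.add(candidate)
    return list(negatives)
```

Because the candidate set produced this way encodes an assumption (here, that negatives share the size distribution of positives), a model trained on it can look strong against negatives from the same heuristic yet degrade when the test negatives are drawn differently; the adversarial generator described in the abstract is meant to remove that dependence.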
