Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations

DC Field | Value | Language
dc.contributor.author | Yoo, Kang Min | ko
dc.contributor.author | Kim, Junyeob | ko
dc.contributor.author | Kim, Hyuhng Joon | ko
dc.contributor.author | Cho, Hyunsoo | ko
dc.contributor.author | Jo, Hwiyeol | ko
dc.contributor.author | Lee, Sang-Woo | ko
dc.contributor.author | Lee, Sang-Goo | ko
dc.contributor.author | Kim, Taeuk | ko
dc.date.accessioned | 2023-09-15T01:00:31Z | -
dc.date.available | 2023-09-15T01:00:31Z | -
dc.date.created | 2023-09-15 | -
dc.date.issued | 2022-12 | -
dc.identifier.citation | 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, pp.2422 - 2437 | -
dc.identifier.uri | http://hdl.handle.net/10203/312654 | -
dc.description.abstract | Despite the recent explosion of interest in in-context learning, the underlying mechanism and the precise impact of demonstration quality remain elusive. Intuitively, ground-truth labels should matter as much in in-context learning (ICL) as they do in supervised learning, but recent work reported that the input-label correspondence is significantly less important than previously thought. Intrigued by this counter-intuitive observation, we re-examine the importance of ground-truth labels in in-context learning. By introducing two novel metrics, Label-Correctness Sensitivity and Ground-truth Label Effect Ratio (GLER), we conduct a quantifiable analysis of the impact of ground-truth labels in demonstrations. Through extensive analyses, we find that correct input-label mappings can have varying impacts on downstream in-context learning performance, depending on the experimental configuration. Through additional studies, we identify key components, such as the verbosity of prompt templates and the language model size, as controlling factors for achieving more noise-resilient ICL. | -
dc.language | English | -
dc.publisher | Association for Computational Linguistics (ACL) | -
dc.title | Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85144924488 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 2422 | -
dc.citation.endingpage | 2437 | -
dc.citation.publicationname | 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 | -
dc.identifier.conferencecountry | AE | -
dc.identifier.conferencelocation | Abu Dhabi | -
dc.contributor.localauthor | Lee, Sang-Woo | -
dc.contributor.nonIdAuthor | Yoo, Kang Min | -
dc.contributor.nonIdAuthor | Kim, Junyeob | -
dc.contributor.nonIdAuthor | Kim, Hyuhng Joon | -
dc.contributor.nonIdAuthor | Cho, Hyunsoo | -
dc.contributor.nonIdAuthor | Jo, Hwiyeol | -
dc.contributor.nonIdAuthor | Lee, Sang-Goo | -
dc.contributor.nonIdAuthor | Kim, Taeuk | -
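
The abstract introduces two metrics, Label-Correctness Sensitivity and the Ground-truth Label Effect Ratio (GLER), for quantifying how much correct input-label mappings drive ICL performance. The sketch below is one plausible formalization, not the paper's exact definitions: it assumes sensitivity is the slope of a least-squares fit of accuracy against the fraction of correct demonstration labels, and assumes GLER normalizes the ground-truth-vs-random-label gap by the total gain over a zero-shot baseline. All function names and numbers are illustrative.

```python
import numpy as np

def label_correctness_sensitivity(correct_ratios, accuracies):
    """Slope of a least-squares line fit of task accuracy against the
    fraction of correct labels in the demonstrations. A steeper slope
    means ICL performance depends more strongly on correct
    input-label mappings. (Illustrative formalization.)"""
    slope, _intercept = np.polyfit(correct_ratios, accuracies, deg=1)
    return slope

def gler(acc_ground_truth, acc_random_labels, acc_zero_shot):
    """Ground-truth Label Effect Ratio: here taken as the share of the
    total ICL gain over a zero-shot baseline that is attributable to
    using ground-truth rather than random labels. (Assumed
    normalization; the paper's exact definition may differ.)"""
    total_gain = acc_ground_truth - acc_zero_shot
    label_gain = acc_ground_truth - acc_random_labels
    return label_gain / total_gain if total_gain != 0 else float("nan")

# Hypothetical numbers for illustration only.
ratios = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # fraction of correct labels
accs = np.array([0.58, 0.63, 0.69, 0.74, 0.80])  # ICL accuracy at each ratio

print(label_correctness_sensitivity(ratios, accs))  # ~0.22 accuracy per unit of label correctness
print(gler(acc_ground_truth=0.80, acc_random_labels=0.58, acc_zero_shot=0.50))  # ~0.73
```

Under this reading, high sensitivity together with a GLER near 1 describes a configuration where ICL depends heavily on ground-truth labels, while values near 0 reproduce the earlier counter-intuitive finding that label correctness barely matters.
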
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.
