Gradient Ascent Post-training Enhances Language Model Generalization

DC Field | Value | Language
dc.contributor.author | Yoon, Dongkeun | ko
dc.contributor.author | Jang, Joel | ko
dc.contributor.author | Kim, Sungdong | ko
dc.contributor.author | Seo, Minjoon | ko
dc.date.accessioned | 2023-12-12T09:00:59Z | -
dc.date.available | 2023-12-12T09:00:59Z | -
dc.date.created | 2023-12-09 | -
dc.date.issued | 2023-07 | -
dc.identifier.citation | ACL 2023, pp. 851-864 | -
dc.identifier.uri | http://hdl.handle.net/10203/316299 | -
dc.description.abstract | In this work, we empirically show that updating pretrained LMs (350M, 1.3B, 2.7B) with just a few steps of Gradient Ascent Post-training (GAP) on random, unlabeled text corpora enhances their zero-shot generalization capabilities across diverse NLP tasks. Specifically, we show that GAP can make LMs comparable to 2-3x larger LMs across 12 different NLP tasks. We also show that applying GAP to out-of-distribution corpora leads to the most reliable performance improvements. Our findings indicate that GAP can be a promising method for improving the generalization capability of LMs without any task-specific fine-tuning. | -
dc.language | English | -
dc.publisher | Association for Computational Linguistics (ACL) | -
dc.title | Gradient Ascent Post-training Enhances Language Model Generalization | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85172257505 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 851 | -
dc.citation.endingpage | 864 | -
dc.citation.publicationname | ACL 2023 | -
dc.identifier.conferencecountry | CA | -
dc.identifier.conferencelocation | Toronto | -
dc.contributor.localauthor | Seo, Minjoon | -
dc.contributor.nonIdAuthor | Yoon, Dongkeun | -
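
The abstract above describes GAP as a few steps of gradient ascent on random, unlabeled text applied to a pretrained LM before zero-shot evaluation. The snippet below is a minimal sketch of that recipe, assuming an OPT-style causal LM loaded through Hugging Face Transformers and negating the language-modeling loss so that a standard optimizer step ascends it; the model name, optimizer, learning rate, step count, and placeholder corpus are illustrative assumptions, not the authors' released configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed stand-in for the paper's 350M-parameter model.
model_name = "facebook/opt-350m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# Assumed optimizer and learning rate; the abstract only specifies "a few steps".
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)

# Placeholder sentences standing in for the random, unlabeled post-training corpus.
unlabeled_texts = [
    "The committee will reconvene after the summer recess.",
    "Rainfall totals varied widely across the region last week.",
]

num_steps = 3  # "just a few steps" of gradient ascent
for step in range(num_steps):
    batch = tokenizer(unlabeled_texts, return_tensors="pt", padding=True)
    labels = batch["input_ids"].clone()
    labels[batch["attention_mask"] == 0] = -100  # ignore padding in the LM loss
    outputs = model(**batch, labels=labels)
    loss = -outputs.loss  # negate the LM loss so a descent step performs ascent
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: lm loss = {outputs.loss.item():.4f}")

# The post-trained checkpoint would then be evaluated zero-shot on downstream tasks.

Negating the loss simply flips the gradient sign, so any standard optimizer performs ascent; per the abstract, the reported gains come from evaluating the updated checkpoint zero-shot, without any task-specific fine-tuning.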
Appears in Collection
AI-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
