Theoretical study on leveraging privacy of pretrained large language model by direct model editing
(Korean title: 직접 모델 수정을 통한 사전 학습된 거대 언어 모델 내부 개인정보의 삭제 방안)

DC Field: Value
dc.contributor.advisor: 차미영
dc.contributor.advisor: Cha, Meeyoung
dc.contributor.advisor: 김란우
dc.contributor.author: Myung, Jaehyeon
dc.contributor.author: 명재현
dc.date.accessioned: 2024-07-30T19:31:43Z
dc.date.available: 2024-07-30T19:31:43Z
dc.date.issued: 2024
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1097250&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/321670
dc.description: 학위논문(석사) - 한국과학기술원 : 전산학부, 2024.2, [iv, 27 p.] (Master's thesis - KAIST: School of Computing)
dc.description.abstract: Large Language Models (LLMs) store knowledge learned from vast amounts of text data. As models grow in parameter count and are trained on ever-larger datasets, the likelihood that an LLM unintentionally memorizes personal information has increased. In response, various studies have proposed methods to prevent LLMs from generating outputs that contain personal information. Despite these efforts, attack techniques continue to advance alongside privacy defenses, so there is a growing need for approaches that directly delete previously learned information from the model itself. Most prior work on information deletion relies on fine-tuning, in which specific facts are repeatedly trained as irrelevant information so that the model no longer produces outputs containing them. This approach, however, is difficult to adapt to individual user requests for personal-data deletion and consumes substantial computing resources.

This study presents an effective method for deleting personal information from large language models. First, it analyzes how strongly personal information stored in an LLM activates the transformer network during output generation. It then examines how the number of training iterations affects this activation level, exploring the potential for more precise updates to the model's parameters. Finally, it confirms that low-frequency fine-tuning deletes information more effectively than traditional fine-tuning approaches. The proposed methodology can be applied in services that must respond quickly to many small-scale personal-data deletion requests, even in scenarios with limited computing resources. All code and data related to the methods and experiments in this thesis will be made publicly available.
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: 자연 언어 처리; 거대 언어 모델; 모델 학습 해제; 트랜스포머 모델
dc.subject: Natural language processing; Large language model; Model unlearning; Transformer Model
dc.title: Theoretical study on leveraging privacy of pretrained large language model by direct model editing
dc.title.alternative: 직접 모델 수정을 통한 사전 학습된 거대 언어 모델 내부 개인정보의 삭제 방안 (Deletion of personal information inside a pretrained large language model via direct model editing)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원: 전산학부 (KAIST: School of Computing)
dc.contributor.alternativeauthor: Kim, Lanu
Appears in Collection
CS-Theses_Master (석사논문, Master's theses)
Files in This Item
There are no files associated with this item.
