Theoretical study on leveraging privacy of pretrained large language model by direct model editing = 직접 모델 수정을 통한 사전 학습된 거대 언어 모델 내부 개인정보의 삭제 방안 (Deleting personal information inside pretrained large language models via direct model editing)

Abstract
Large Language Models (LLMs) store knowledge learned from vast amounts of text data. With the recent trend toward models with more parameters trained on larger datasets, the likelihood that LLMs unintentionally memorize personal information has increased. In response, various studies have proposed methods to prevent LLMs from generating outputs that contain personal information. Despite these efforts, there is a growing need for approaches that directly delete previously learned information from the model itself, as attack techniques that circumvent existing privacy defenses continue to advance. Most prior work on information deletion relies on fine-tuning, in which the model is repeatedly trained to treat specific facts as irrelevant so that it no longer produces outputs containing personal information. However, this approach is difficult to adapt to individual user requests for personal data deletion and consumes substantial computing resources. This study presents an effective method for deleting personal information from large language models. First, it analyzes how strongly personal information stored in an LLM activates the transformer network during output generation. It then examines how the number of training iterations affects that activation level, exploring the potential for more precise updates to the model's parameters. Finally, the study confirms that low-frequency fine-tuning is effective for information deletion compared with conventional fine-tuning approaches. The proposed method can be applied in services that must respond quickly to many small-scale personal data deletion requests, even under limited computing resources. All code and data related to the methods and experiments in this thesis will be made publicly available.
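The sketch below is not the thesis code but a minimal illustration, under stated assumptions, of the two ingredients the abstract describes: (1) measuring how strongly a prompt containing personal information activates the transformer's MLP neurons, and (2) suppressing that single fact with a small number of fine-tuning steps. GPT-2 as the stand-in model, gradient ascent on the target sequence as the deletion objective, and three update steps as the "low frequency" are all assumptions for illustration.

```python
# Minimal sketch (not the thesis implementation) of activation analysis
# followed by low-frequency unlearning of one synthetic personal fact.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the thesis targets larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Synthetic example of memorized personal information (not real data).
fact = "John Doe's phone number is 555-0100."
inputs = tok(fact, return_tensors="pt")

# (1) Record the mean absolute MLP activation per layer with forward hooks.
activations = {}
def make_hook(layer_idx):
    def hook(module, inp, out):
        activations[layer_idx] = out.abs().mean().item()
    return hook

handles = [block.mlp.register_forward_hook(make_hook(i))
           for i, block in enumerate(model.transformer.h)]
with torch.no_grad():
    model(**inputs)
for h in handles:
    h.remove()
print("Mean |MLP activation| per layer:", activations)

# (2) Low-frequency fine-tuning: a handful of gradient-ascent steps on the
# single target sequence, intended to reduce its likelihood.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
model.train()
for step in range(3):  # few iterations = "low-frequency" updates
    out = model(**inputs, labels=inputs["input_ids"])
    loss = -out.loss  # ascend on the fact's language-modeling loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: target-fact LM loss = {out.loss.item():.3f}")
```

In practice the activation measurements from step (1) could be used to restrict the updates in step (2) to the layers that respond most strongly to the target fact, which is one way to read the abstract's goal of "more precise updates"; the exact procedure is defined in the thesis itself.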
Advisors
Cha, Meeyoung (차미영); 김란우
Description
Korea Advanced Institute of Science and Technology (KAIST), School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2024
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST), School of Computing, 2024.2, [iv, 27 p.]

Keywords

Natural language processing; Large language model; Model unlearning; Transformer model

URI
http://hdl.handle.net/10203/321670
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1097250&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
