Twice fine-tuning deep neural networks for paraphrase identification

Cited 4 times in Web of Science; cited 4 times in Scopus
In this Letter, the authors introduce a novel approach to learning representations for sentence-level paraphrase identification (PI) using BERT and ten natural language processing tasks. Their method fine-tunes BERT, an unsupervised pre-trained model, on two different tasks in sequence to detect whether two sentences stand in a paraphrase relation. Unlike the conventional setup, which fine-tunes pre-trained BERT directly on the target task (here, PI), twice fine-tuning first fine-tunes BERT on an auxiliary task (e.g. a general language understanding evaluation task, question answering, or the paraphrase adversaries from word scrambling task) and then fine-tunes it on the target PI task. As a result, the multi-fine-tuned BERT model outperformed the model fine-tuned only on the Microsoft Research Paraphrase Corpus (MRPC), a paraphrase dataset, in every case but one: Stanford Sentiment Treebank-2 (SST-2). Multi-task fine-tuning is a simple idea but experimentally powerful. Experiments show that fine-tuning BERT on the PI task alone already gives strong performance, but additionally fine-tuning on similar tasks improves it further (a 3.4 percentage point absolute improvement) and is competitive with state-of-the-art systems.
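The two-stage recipe in the abstract maps onto standard sequential fine-tuning. Below is a minimal sketch using the Hugging Face Transformers and Datasets libraries; the choice of QQP as the intermediate task, the hyperparameters, and the output directory names are illustrative assumptions, not the authors' exact configuration.

    # Sketch of twice fine-tuning: fine-tune BERT on an auxiliary task,
    # then fine-tune that checkpoint on the target PI task (MRPC).
    # QQP as the intermediate task and all hyperparameters are assumptions.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    def fine_tune(model_path, dataset, text_a, text_b, out_dir):
        # Tokenize sentence pairs and train a binary classifier on top of BERT.
        data = dataset.map(
            lambda b: tokenizer(b[text_a], b[text_b], truncation=True,
                                padding="max_length", max_length=128),
            batched=True)
        model = AutoModelForSequenceClassification.from_pretrained(
            model_path, num_labels=2)
        args = TrainingArguments(output_dir=out_dir, num_train_epochs=3,
                                 per_device_train_batch_size=32,
                                 learning_rate=2e-5)
        Trainer(model=model, args=args, train_dataset=data["train"]).train()
        model.save_pretrained(out_dir)
        return out_dir

    # Stage 1: fine-tune pre-trained BERT on a related sentence-pair task.
    stage1 = fine_tune("bert-base-uncased", load_dataset("glue", "qqp"),
                       "question1", "question2", "bert-qqp")
    # Stage 2: fine-tune the stage-1 checkpoint on the target PI task.
    fine_tune(stage1, load_dataset("glue", "mrpc"),
              "sentence1", "sentence2", "bert-qqp-mrpc")

Because both stages here are binary sentence-pair tasks, the stage-1 classification head can be reused directly in stage 2; an intermediate task with a different label count would need its head reinitialized for the target task.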
Publisher
Institution of Engineering and Technology (IET)
Issue Date
2020-04
Language
English
Article Type
Article
Citation

ELECTRONICS LETTERS, v.56, no.9, pp. 444-446

ISSN
0013-5194
DOI
10.1049/el.2019.4183
URI
http://hdl.handle.net/10203/274323
Appears in Collection
CS-Journal Papers (journal papers)
Files in This Item
There are no files associated with this item.