TY - GEN
T1 - A Study of Contextualized Language Modeling Techniques for Frequently Asked Question Retrieval
AU - Tseng, Wen Ting
AU - Hsu, Yung Chang
AU - Chen, Berlin
N1 - Publisher Copyright:
© ROCLING 2020. All rights reserved.
PY - 2020
Y1 - 2020
AB - Recent years have witnessed significant progress in the development of deep learning techniques, which have achieved state-of-the-art performance on a wide variety of natural language processing (NLP) applications, such as the frequently asked question (FAQ) retrieval task. FAQ retrieval, which aims to provide relevant information in response to frequent questions or concerns, has far-reaching applications in areas such as e-commerce services and online forums. In the common setting of the FAQ retrieval task, a collection of question-answer (Q-A) pairs compiled in advance can be capitalized on to retrieve an appropriate answer in response to a user’s query that is likely to recur frequently. To date, many strategies have been proposed to approach FAQ retrieval, ranging from comparing the similarity between the query and a question, to scoring the relevance between the query and the associated answer of a question, to performing classification on user queries. Accordingly, a variety of contextualized language models have been extended and developed to operationalize these strategies, such as BERT (Bidirectional Encoder Representations from Transformers), K-BERT, and Sentence-BERT. Although BERT and its variants have demonstrated reasonably good results on various FAQ retrieval tasks, they may still fall short on tasks that require generic knowledge. In view of this, in this paper we set out to explore the utility of injecting an extra knowledge base into BERT for FAQ retrieval, while comparing the synergistic effects of different strategies and methods.
KW - Deep Learning
KW - Frequently Asked Question
KW - Information Retrieval
KW - Knowledge Graph
KW - Natural Language Processing
UR - http://www.scopus.com/inward/record.url?scp=85181111700&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85181111700&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85181111700
T3 - ROCLING 2020 - 32nd Conference on Computational Linguistics and Speech Processing
SP - 247
EP - 259
BT - ROCLING 2020 - 32nd Conference on Computational Linguistics and Speech Processing
A2 - Wang, Jenq-Haur
A2 - Lai, Ying-Hui
A2 - Lee, Lung-Hao
A2 - Chen, Kuan-Yu
A2 - Lee, Hung-Yi
A2 - Lee, Chi-Chun
A2 - Wang, Syu-Siang
A2 - Huang, Hen-Hsen
A2 - Liu, Chuan-Ming
PB - The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
T2 - 32nd Conference on Computational Linguistics and Speech Processing, ROCLING 2020
Y2 - 24 September 2020 through 26 September 2020
ER -