TY - GEN
T1 - FAQ Retrieval using Question-Aware Graph Convolutional Network and Contextualized Language Model
AU - Tseng, Wan Ting
AU - Wu, Chin Ying
AU - Hsu, Yung Chang
AU - Chen, Berlin
N1 - Publisher Copyright:
© 2021 APSIPA.
PY - 2021
Y1 - 2021
N2 - Frequently asked question (FAQ) retrieval, which seeks to provide the most relevant question, or question-answer (QA) pair, in response to a user's query, has found widespread applications. More recently, methods based on bidirectional encoder representations from Transformers (BERT) and its variants, which typically take the word embeddings of a question at training time (or of a query at test time) as the input to predict relevant answers, have shown good promise for FAQ retrieval. However, these BERT-based methods do not pay enough attention to the global information specific to an FAQ task. To address this, in this paper we put forward a question-aware graph convolutional network (QGCN) to induce vector embeddings of vocabulary words, thereby encapsulating the global question-question, question-word and word-word relations, which can be used to augment the embeddings derived from BERT for better FAQ retrieval. Meanwhile, we also investigate leveraging domain-specific knowledge graphs to enrich the question and query embeddings (denoted by K-BERT). Finally, we conduct extensive experiments to evaluate the utility of the proposed approaches on two publicly available FAQ datasets (viz. TaipeiQA and StackFAQ), where the associated results confirm the promising efficacy of the proposed approach in comparison to some top-of-the-line methods.
AB - Frequently asked question (FAQ) retrieval, which seeks to provide the most relevant question, or question-answer (QA) pair, in response to a user's query, has found widespread applications. More recently, methods based on bidirectional encoder representations from Transformers (BERT) and its variants, which typically take the word embeddings of a question at training time (or of a query at test time) as the input to predict relevant answers, have shown good promise for FAQ retrieval. However, these BERT-based methods do not pay enough attention to the global information specific to an FAQ task. To address this, in this paper we put forward a question-aware graph convolutional network (QGCN) to induce vector embeddings of vocabulary words, thereby encapsulating the global question-question, question-word and word-word relations, which can be used to augment the embeddings derived from BERT for better FAQ retrieval. Meanwhile, we also investigate leveraging domain-specific knowledge graphs to enrich the question and query embeddings (denoted by K-BERT). Finally, we conduct extensive experiments to evaluate the utility of the proposed approaches on two publicly available FAQ datasets (viz. TaipeiQA and StackFAQ), where the associated results confirm the promising efficacy of the proposed approach in comparison to some top-of-the-line methods.
KW - frequently asked question
KW - graph convolutional networks
KW - knowledge graph
KW - language model
UR - http://www.scopus.com/inward/record.url?scp=85126653247&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85126653247&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85126653247
T3 - 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2021 - Proceedings
SP - 2006
EP - 2012
BT - 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2021 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2021
Y2 - 14 December 2021 through 17 December 2021
ER -