TY - JOUR
T1 - Development and Evaluation of Emotional Conversation System Based on Automated Text Generation
AU - Yang, Te Lun
AU - Tseng, Yuen Hsien
N1 - Publisher Copyright:
© 2020, Journal of Educational Media & Library Sciences. All Rights Reserved.
PY - 2020
Y1 - 2020
N2 - Based on the corpus provided by the 2019 Chinese Emotional Conversation Generation (CECG) evaluation task, this paper implements an emotional conversation system using deep-learning technologies such as GPT-2 and BERT. The system's effectiveness is evaluated with the test data and criteria provided by CECG. Results from three human annotators show that the system performs at a level comparable to that of the best team participating in the 2019 CECG task. Further case studies reveal that the more post/reply pairs a topic has in the training data, the better the GPT-2 language model is at generating novel, interesting, and well-formed responses for that topic. The main contributions of this study are: 1. integrating emotion into the post string as a conditioning context for computing probability, so that GPT-2 can be trained simply and make predictions in its original manner; 2. applying BERT to predict the coherence of candidate responses as a basis for ranking. Although these two techniques derive from the training mechanisms of GPT and BERT respectively, we slightly modified them to fit the CECG task and achieved good results.
AB - Based on the corpus provided by the 2019 Chinese Emotional Conversation Generation (CECG) evaluation task, this paper implements an emotional conversation system using deep-learning technologies such as GPT-2 and BERT. The system's effectiveness is evaluated with the test data and criteria provided by CECG. Results from three human annotators show that the system performs at a level comparable to that of the best team participating in the 2019 CECG task. Further case studies reveal that the more post/reply pairs a topic has in the training data, the better the GPT-2 language model is at generating novel, interesting, and well-formed responses for that topic. The main contributions of this study are: 1. integrating emotion into the post string as a conditioning context for computing probability, so that GPT-2 can be trained simply and make predictions in its original manner; 2. applying BERT to predict the coherence of candidate responses as a basis for ranking. Although these two techniques derive from the training mechanisms of GPT and BERT respectively, we slightly modified them to fit the CECG task and achieved good results.
KW - Artificial intelligence
KW - Conversational system
KW - Deep learning
KW - Text generation
KW - Text understanding
UR - http://www.scopus.com/inward/record.url?scp=85101753136&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85101753136&partnerID=8YFLogxK
U2 - 10.6120/JoEMLS.202011_57(3).0048.RS.CM
DO - 10.6120/JoEMLS.202011_57(3).0048.RS.CM
M3 - Article
AN - SCOPUS:85101753136
SN - 1013-090X
VL - 57
SP - 355
EP - 378
JO - Journal of Educational Media and Library Sciences
JF - Journal of Educational Media and Library Sciences
IS - 3
ER -