TY - JOUR
T1 - Comparing grammar feedback provided by teachers with an automated writing evaluation system
AU - Chen, Howard Hao Jan
AU - Cheng, Sarah Huei Wen
AU - Yang, Christine Ting Yu
PY - 2017/12/1
Y1 - 2017/12/1
N2 - Writing has always been a focus of Taiwanese high school English classrooms. However, with classes often exceeding 40 students, grading and giving feedback on students' writing can be a burden. Automated writing evaluation (AWE) systems have recently emerged that provide learners with computerized feedback and serve as an alternative way to help teachers correct essays. This study investigates the use of a newly developed AWE system, Correct English, in correcting students' writing and examines whether the system can provide feedback that learners can use to revise their texts. Ninety 12th-graders were recruited, each producing two compositions on assigned topics, which were then corrected by the system and by two human raters. The computer-generated feedback was compared with that of the human raters. The results showed that one third of the feedback messages provided by Correct English were false alarms, and many common errors were left untreated. Additionally, the human raters could provide revisions and suggestions on unclear or ungrammatical sentences that are beyond the capabilities of Correct English. Although Correct English can help with correcting basic errors, there remains much room for improvement. Language learners should therefore use its feedback with caution and seek teachers' help when needed.
AB - Writing has always been a focus of Taiwanese high school English classrooms. However, with classes often exceeding 40 students, grading and giving feedback on students' writing can be a burden. Automated writing evaluation (AWE) systems have recently emerged that provide learners with computerized feedback and serve as an alternative way to help teachers correct essays. This study investigates the use of a newly developed AWE system, Correct English, in correcting students' writing and examines whether the system can provide feedback that learners can use to revise their texts. Ninety 12th-graders were recruited, each producing two compositions on assigned topics, which were then corrected by the system and by two human raters. The computer-generated feedback was compared with that of the human raters. The results showed that one third of the feedback messages provided by Correct English were false alarms, and many common errors were left untreated. Additionally, the human raters could provide revisions and suggestions on unclear or ungrammatical sentences that are beyond the capabilities of Correct English. Although Correct English can help with correcting basic errors, there remains much room for improvement. Language learners should therefore use its feedback with caution and seek teachers' help when needed.
KW - Automated writing evaluation
KW - Grammatical errors
KW - Teacher feedback
UR - http://www.scopus.com/inward/record.url?scp=85041591736&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041591736&partnerID=8YFLogxK
U2 - 10.6330/ETL.2017.41.4.04
DO - 10.6330/ETL.2017.41.4.04
M3 - Article
AN - SCOPUS:85041591736
SN - 1023-7267
VL - 41
SP - 99
EP - 131
JO - English Teaching and Learning
JF - English Teaching and Learning
IS - 4
ER -