Comparing grammar feedback provided by teachers with an automated writing evaluation system

Hao Jan Howard Chen, Huei Wen Sarah Cheng, Ting Yu Christine Yang

Research output: Contribution to journal › Article

Abstract

Writing has always been a focus of Taiwanese high school English classrooms. However, with classes often exceeding 40 students, grading and giving feedback on students' writing can be a burden. Automated writing evaluation (AWE) systems have recently emerged that provide learners with computerized feedback and offer teachers an alternative way to correct essays. This study investigates the use of a newly developed AWE system, Correct English, in correcting students' writing, and examines whether the system provides feedback that learners can use to revise their texts. Ninety 12th-graders were recruited, each producing two compositions on assigned topics to be corrected by the system and by two human raters. The computer-generated feedback was then compared with that of the human raters. The results showed that one third of the feedback messages provided by Correct English were false alarms, and that many common errors were left untreated. Additionally, the human raters could provide revisions or suggestions on unclear or ungrammatical sentences that are beyond the capabilities of Correct English. Although Correct English can help with correcting basic errors, there remains much room for improvement. Language learners should therefore use its feedback with caution and seek teachers' help when needed.

Original language: English
Pages (from-to): 99-131
Number of pages: 33
Journal: English Teaching and Learning
Volume: 41
Issue number: 4
DOIs
Publication status: Published - 2017 Dec 1
Externally published: Yes

Keywords

  • Automated writing evaluation
  • Grammatical errors
  • Teacher feedback

ASJC Scopus subject areas

  • Education
  • Linguistics and Language
