Abstract
The well-known Automated Chinese Essay Scoring (ACES) system can accurately score essays written by native learners. However, current automated essay scoring (AES) systems do not provide reliable results when grading essays written by Chinese as a Foreign Language (CFL) learners. Owing to the effects of language transfer, the grammatical errors made by CFL learners are more diverse and complex than those made by native learners, and it is common for multiple errors to occur simultaneously in CFL learners’ essays. Furthermore, it is difficult to collect samples of CFL learner essays, particularly those corresponding to the lowest and highest grades, which has made it difficult to train AES models for CFL learners. To address these problems, this chapter introduces the “SmartWriting-Mandarin” (SW-M) AES system, which was specifically designed to evaluate Chinese essays written by CFL learners. The system comprises three modules: preprocessing, feature extraction, and scoring. This chapter explains the design concepts underlying each module and the reasons for choosing its respective techniques. Finally, the results of testing SW-M on a Chinese written corpus are described, along with future possibilities for the SW-M system.
| Original language | English |
|---|---|
| Title of host publication | The Routledge International Handbook of Automated Essay Evaluation |
| Publisher | Taylor and Francis |
| Pages | 78-90 |
| Number of pages | 13 |
| ISBN (Electronic) | 9781040033241 |
| ISBN (Print) | 9781032502564 |
| DOIs | |
| Publication status | Published - 2024 Jan 1 |
ASJC Scopus subject areas
- General Arts and Humanities
- General Social Sciences
- General Psychology
- General Computer Science