Proposing ways of evaluating automatic short-answer markers with multiraters

Che Di Lee, Tsung Hau Jen, Hsieh Hai Fu, Chun Yen Chang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

A method of evaluating automatic short-answer markers (ASAM) with multiple raters has been proposed. Three indexes, mean prediction bias (MPB), prediction-bias change with scores (PBCS) and prediction-bias change with deviations (PBCD), have been suggested for analyzing a system's performance in detail, and the evaluation emphasizes four points. The first is to look at the direction of bias instead of only the size of error. The second is to look at the system's performance at each score level instead of only its overall performance. The third is to look at the relationship between system performance and rating deviation instead of discarding the information carried by the average scores. The fourth is to look at performance sensitivity through regression analysis instead of relying only on qualitative analysis. Moreover, the evaluation points out that the first priority for improving our single-word-based system is to reduce the prediction error at low and high scores: the analysis reveals that many low- and high-score responses are misclassified as middle scores.
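The abstract describes the three indexes only in words, without formulas. As a rough illustration, the sketch below implements one plausible reading in Python: MPB as the mean signed difference between the machine score and the multirater average, PBCS as that bias grouped by (rounded) consensus score level, and PBCD as the trend of bias against the raters' standard deviation, estimated by a simple regression in the spirit of the fourth point. The function name, array layout, and toy data are illustrative assumptions, not definitions taken from the paper.

```python
import numpy as np

def evaluation_indexes(system_scores, rater_scores):
    """Sketch of the three proposed indexes for an ASAM scored
    against multiple human raters (assumed definitions).

    system_scores : (n,) machine-assigned scores
    rater_scores  : (n, k) scores from k human raters
    """
    system_scores = np.asarray(system_scores, dtype=float)
    rater_scores = np.asarray(rater_scores, dtype=float)

    avg_human = rater_scores.mean(axis=1)   # consensus score per response
    rater_dev = rater_scores.std(axis=1)    # disagreement among raters
    bias = system_scores - avg_human        # signed prediction bias

    # MPB: the direction of bias, not just the size of error.
    mpb = bias.mean()

    # PBCS: mean bias at each (rounded) consensus score level.
    pbcs = {int(s): bias[np.round(avg_human) == s].mean()
            for s in np.unique(np.round(avg_human))}

    # PBCD: how bias varies with rater disagreement, summarized
    # here by a least-squares slope and intercept.
    slope, intercept = np.polyfit(rater_dev, bias, 1)

    return mpb, pbcs, (slope, intercept)

if __name__ == "__main__":
    # Toy data: 4 responses, 3 raters each (all values hypothetical).
    machine = [1, 2, 0, 3]
    humans = [[1, 2, 1], [2, 2, 3], [1, 0, 1], [2, 3, 2]]
    mpb, pbcs, pbcd_fit = evaluation_indexes(machine, humans)
    print(f"MPB={mpb:.3f}, PBCS={pbcs}, PBCD slope={pbcd_fit[0]:.3f}")
```

A positive MPB in this reading would indicate systematic over-scoring, while a per-level PBCS that is positive at low scores and negative at high scores would match the paper's finding that extreme responses drift toward the middle.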

Original language: English
Pages (from-to): E73-E76
Journal: British Journal of Educational Technology
Volume: 43
Issue number: 3
Publication status: Published - May 2012

ASJC Scopus subject areas

  • Education
