Reliability and Validity of a Scale-based Assessment for Translation Tests

Tzu Yun Lai*

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

8 Citations (Scopus)


Are assessment tools for machine-generated translations applicable to human translations? To address this question, the present study compares two assessment methods used in translation tests: the first is the error-analysis-based method applied by most schools and institutions; the other is a scale-based method proposed by Liu, Chang et al. (2005), who adapted Carroll’s scales, originally developed for the quality assessment of machine-generated translations. In the present study, twelve graders were invited to re-grade the test papers from the experiment of Liu, Chang et al. (2005) using the different methods. Based on the results and the graders’ feedback, a number of modifications to the measuring procedure as well as to the scales are proposed. The study shows that the scale-based method, mostly used to assess machine-generated translations, is also a reliable and valid tool for assessing human translations. The measurement was accepted by the Ministry of Education in Taiwan and applied in the 2007 public translation proficiency test.

Original language: English
Pages (from-to): 713-722
Number of pages: 10
Journal: Meta (Canada)
Issue number: 3
Publication status: Published - 2011


Keywords

  • assessment scales
  • error-analysis
  • evaluation criteria
  • translation evaluation
  • translation test

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
