Abstract
Are assessment tools for machine-generated translations applicable to human translations? To address this question, the present study compares two assessment methods used in translation tests: the first is the error-analysis-based method applied by most schools and institutions; the second is a scale-based method proposed by Liu, Chang et al. (2005), who adapted Carroll’s scales, originally developed for the quality assessment of machine-generated translations. In the present study, twelve graders were invited to re-grade, using the different methods, the test papers from the experiment of Liu, Chang et al. (2005). Based on the results and the graders’ feedback, a number of modifications to the measuring procedure and the scales were proposed. The study showed that the scale-based method, mostly used to assess machine-generated translations, is also a reliable and valid tool for assessing human translations. The method was accepted by the Ministry of Education in Taiwan and applied in the 2007 public translation proficiency test.
| Original language | English |
| --- | --- |
| Pages (from-to) | 713-722 |
| Number of pages | 10 |
| Journal | Meta (Canada) |
| Volume | 56 |
| Issue number | 3 |
| Publication status | Published - 2011 |
Keywords
- assessment scales
- error-analysis
- evaluation criteria
- translation evaluation
- translation test
ASJC Scopus subject areas
- Language and Linguistics
- Linguistics and Language