Abstract
Are assessment tools for machine-generated translations applicable to human translations? To address this question, the present study compares two assessment methods used in translation tests: the first is the error-analysis-based method applied by most schools and institutions; the other is a scale-based method proposed by Liu, Chang et al. (2005), who adapted Carroll's scales originally developed for the quality assessment of machine-generated translations. In the present study, twelve graders were invited to re-grade the test papers from Liu, Chang et al.'s (2005) experiment using the different methods. Based on the results and the graders' feedback, a number of modifications to the measuring procedure as well as to the scales were proposed. The study showed that the scale-based method, mostly used to assess machine-generated translations, is also a reliable and valid tool for assessing human translations. The measure was adopted by the Ministry of Education in Taiwan and applied in the 2007 public translation proficiency test.
Original language | English |
---|---|
Pages (from–to) | 713-722 |
Number of pages | 10 |
Journal | Meta (Canada) |
Volume | 56 |
Issue number | 3 |
DOIs | |
Publication status | Published - 2011 |
ASJC Scopus subject areas
- Language and Linguistics
- Linguistics and Language