In the evaluation of reading ability, self-evaluation has long been used to measure readers' reflective ability, largely because the prior literature lacks a precise definition of reading reflection. This research therefore first defines the structure of the post-reading reflection process and then, with the aid of automated techniques, proposes methods for effectively evaluating reflection performance after reading, replacing the imprecision of earlier introspective assessment methods. Combining reading psychology, computational linguistics, machine learning, and natural language processing, the project develops an automated scoring model for Chinese reading reflection and evaluates 566 reflections on three elementary-school texts spanning low, middle, and high difficulty levels. Compared with expert ratings, the three models achieved the following exact accuracies: the refined scoring model, 54.42%; the knowledge-integration scoring model, 52.65%; and the comprehensive scoring model, 43.46%. Their adjacent accuracies were 81.80%, 92.58%, and 83.39%, respectively. These results show that the scoring models developed in this research track the trend of expert scoring and can serve as teaching aids.
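The two evaluation metrics reported above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function names, the 0-4 rubric scale, and the sample scores are all assumptions, and "adjacent accuracy" is taken in its common sense of a prediction within one score level of the expert rating.

```python
# Sketch of exact vs. adjacent accuracy for rubric-based scoring.
# Hypothetical data and helper names; the project's rubric is not shown here.

def exact_accuracy(pred, gold):
    """Fraction of model scores that match the expert score exactly."""
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def adjacent_accuracy(pred, gold, tolerance=1):
    """Fraction of model scores within `tolerance` levels of the expert score."""
    return sum(abs(p - g) <= tolerance for p, g in zip(pred, gold)) / len(gold)

# Illustrative example: 10 reflections scored on an assumed 0-4 rubric.
expert = [3, 2, 4, 1, 0, 2, 3, 4, 1, 2]
model  = [3, 3, 4, 1, 1, 2, 2, 4, 0, 2]

print(exact_accuracy(model, expert))     # 0.6
print(adjacent_accuracy(model, expert))  # 1.0
```

Adjacent accuracy is the more forgiving metric, which is consistent with the pattern in the reported results: each model's adjacent accuracy (81.80%-92.58%) is well above its exact accuracy (43.46%-54.42%).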
Effective start/end date: 2018/08/01 → 2021/07/31
- reading reflection
- automatic reading reflection scoring model
- integration of old and new knowledge
- machine learning