TY - JOUR
T1 - Automatic assessment of students' free-text answers with different levels
AU - Hou, Wen Juan
AU - Tsao, Jia Hao
N1 - Funding Information:
The research in this paper was partially supported by the National Science Council under contracts NSC 98-2221-E-003-021 and NSC 99-2631-S-003-002.
PY - 2011/4
Y1 - 2011/4
AB - To improve the interaction between students and teachers, it is essential for teachers to understand students' learning levels. An intelligent computer system should be able to automatically evaluate students' answers when the teacher asks questions. We first built an assessment corpus from a university course. With the corpus, we applied the following procedures to extract the relevant information and build the feature model: (1) remove punctuation and decimal numbers, which act as noise; (2) apply part-of-speech tagging to extract syntactic information; (3) stem and normalize the sentences to group related information; and (4) extract other features. In this study, we treated the assessment task as a classification problem and tried two classification strategies: two classes and three classes. For two classes, we initially obtained an average precision of 66.3%. After adding n-gram features to the feature model, the system reached an average precision of 71.9%, an improvement of 5.6%. The same tendency emerged in the three-class experiment. The experiments with SVM show encouraging results, and further improvements will be made in future work.
KW - Free-text assessment
KW - Natural language processing
KW - Support vector machine
UR - http://www.scopus.com/inward/record.url?scp=79955388722&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79955388722&partnerID=8YFLogxK
U2 - 10.1142/S0218213011000188
DO - 10.1142/S0218213011000188
M3 - Article
AN - SCOPUS:79955388722
SN - 0218-2130
VL - 20
SP - 327
EP - 347
JO - International Journal on Artificial Intelligence Tools
JF - International Journal on Artificial Intelligence Tools
IS - 2
ER -