Abstract
Some large-scale testing requires examinees to select and answer a fixed number of items from a given set (e.g., select one out of three items). Usually, these are constructed-response items that are marked by human raters. In this examinee-selected item (ESI) design, some examinees may benefit more than others from choosing easier items to answer, so the missing data induced by the design become missing not at random (MNAR). Although item response theory (IRT) models have recently been developed to account for MNAR data in the ESI design, they do not consider the rater effect; thus, their utility is seriously restricted. In this study, two methods are developed: the first is a new IRT model that accounts for both MNAR data and rater severity simultaneously, and the second adapts conditional maximum likelihood estimation and pairwise estimation methods to the ESI design with the rater effect. A series of simulations was then conducted to compare their performance with that of conventional IRT models that ignore MNAR data or rater severity. The results indicated good parameter recovery for the new model. The conditional maximum likelihood estimation and pairwise estimation methods were applicable when the Rasch models fit the data, whereas the conventional IRT models yielded biased parameter estimates. An empirical example is given to illustrate these new initiatives.
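As background for readers unfamiliar with rater-severity parameters, the sketch below shows how a many-facet Rasch-style model folds a severity term into the response probability. This is a minimal illustration of the general idea only, not the article's actual model; the function and parameter names are assumptions for this example.

```python
import math

def rasch_facets_prob(theta: float, difficulty: float, severity: float) -> float:
    """Probability of a correct/endorsed score under a many-facet
    Rasch-style model: logit = theta - difficulty - severity.

    theta      -- examinee ability
    difficulty -- item difficulty
    severity   -- rater severity (larger = harsher rater)

    All names here are illustrative, not taken from the article.
    """
    logit = theta - difficulty - severity
    return 1.0 / (1.0 + math.exp(-logit))
```

Under this parameterization, a harsher rater (larger `severity`) shifts the logit downward, lowering the probability of a high score for the same examinee and item; ignoring that shift conflates rater harshness with item difficulty or examinee ability.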
| Original language | English |
| --- | --- |
| Pages (from-to) | 435-448 |
| Number of pages | 14 |
| Journal | Applied Psychological Measurement |
| Volume | 43 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 2019 Sept 1 |
| Externally published | Yes |
Keywords
- examinee-selected items
- missing not at random
- rater severity
ASJC Scopus subject areas
- Social Sciences (miscellaneous)
- Psychology (miscellaneous)