A new item response theory model for rater centrality using a hierarchical rater model approach

Xue Lan Qiu*, Ming Ming Chiu, Wen Chung Wang, P. H. Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


Rater centrality, in which raters overuse the middle of the rating scale, is a common rater error that can affect test scores and subsequent decisions. Past studies on rater errors have focused on rater severity and inconsistency, neglecting rater centrality. This study proposes a new model within the hierarchical rater model framework to explicitly specify and directly estimate rater centrality in addition to rater severity and inconsistency. Simulations were conducted using the freeware JAGS to evaluate the parameter recovery of the new model and the consequences of ignoring rater centrality. The results revealed that the model had good parameter recovery with small bias, low root mean square errors, and high test score reliability, especially when a fully crossed linking design was used. Ignoring centrality yielded poor estimates of item difficulty, person ability, and rater errors, and underestimated reliability. We also showcase how the new model can be used with an empirical example involving English essays from the Advanced Placement exam.
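The centrality effect described in the abstract — a rater drawn toward the middle of the scale regardless of the examinee's level — can be illustrated with a small simulation. The sketch below is not the published hierarchical rater model; it uses a hypothetical signal-detection-style rating kernel in which `phi` (severity) shifts ratings, a low `psi` (precision, the inverse of inconsistency) spreads them, and `omega` (a made-up centrality parameter) pulls them toward the middle category.

```python
import numpy as np

rng = np.random.default_rng(0)

def rating_probs(xi, phi, psi, omega, n_cats=5):
    """Probability of each observed category given an ideal rating xi.

    Hypothetical kernel for illustration only: severity phi shifts the
    rating, psi controls spread (inconsistency), and omega penalizes
    distance from the middle category (centrality).
    """
    k = np.arange(n_cats)
    mid = (n_cats - 1) / 2
    logits = -psi * (k - xi + phi) ** 2 - omega * (k - mid) ** 2
    p = np.exp(logits - logits.max())  # stabilize before normalizing
    return p / p.sum()

def simulate_ratings(ideal, phi, psi, omega, n_cats=5, rng=rng):
    """Draw one observed rating per ideal rating for a single rater."""
    return np.array([
        rng.choice(n_cats, p=rating_probs(xi, phi, psi, omega, n_cats))
        for xi in ideal
    ])

# Ideal ratings spread across a 0-4 scale
ideal = rng.integers(0, 5, size=2000)

neutral = simulate_ratings(ideal, phi=0.0, psi=1.5, omega=0.0)
central = simulate_ratings(ideal, phi=0.0, psi=1.5, omega=1.0)

# The central rater overuses the middle category (2 on a 0-4 scale)
print((neutral == 2).mean(), (central == 2).mean())
```

Under this toy kernel, the rater with `omega > 0` assigns the middle category far more often than the neutral rater, compressing scores toward the center of the scale — the pattern the paper's model is designed to detect and correct for.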

Original language: English
Pages (from-to): 1854-1868
Number of pages: 15
Journal: Behavior Research Methods
Issue number: 4
Publication status: Published - Aug 2022


Keywords

  • Centrality effect
  • Hierarchical rater model
  • Item response theory
  • Rater errors

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Developmental and Educational Psychology
  • Arts and Humanities (miscellaneous)
  • Psychology (miscellaneous)
  • General Psychology


