This paper describes an automated scorer for assessing students' Creative Problem-Solving (CPS) abilities by modeling the internal structure of essays in which students describe their thoughts on solving particular problems. The scorer grades students' open-ended responses to an essay-question-type CPS ability test, instead of relying on typical Likert-type or multiple-choice questions, which may be inadequate for assessing the creative aspects of human problem-solving. The scorer is distinguished from most generic automated essay scoring systems in that it explicitly builds a bipartite graph-based representation of the pair-wise relations between a student's ideas and self-explained reasons for a CPS task. This design enables several analytical approaches to CPS, such as quantitative scoring and qualitative diagnosis. A preliminary empirical evaluation with data from 20 students shows that the scorer's results are satisfactory and highly correlated with those of human experts (Pearson's r = .67–.82) on the quantitative scoring task. The approach provides a promising solution for supporting large-scale studies of human creativity and may further enable CPS-aware personalization systems.