Explanatory Item Response Modeling of A Reading Comprehension Assessment

dc.contributor.advisor: Scalise, Kathleen
dc.contributor.author: Park, Sunhi
dc.date.accessioned: 2020-09-24T17:42:29Z
dc.date.issued: 2020-09-24
dc.description.abstract: This dissertation study explored item validity by modeling the relationships among item properties, person properties, and item difficulty on a reading comprehension assessment, the Multiple-choice Online Causal Coherence Assessment (MOCCA). The study used explanatory item response modeling (EIRM), which helps explain how item and person properties are associated with responses to items. Results from the linear logistic test model (LLTM) analysis indicate that item difficulty was significantly associated with text complexity and story features. An item was more difficult if its text passage was longer, used less familiar and less concrete words, and addressed a less familiar topic, and if the passage was not child-centered or realistic, introduced the goal and the main character in the same sentence, implied the goal in a later sentence, and ended with the goal not met and no positive emotion. Results from the latent regression analysis indicate that race/ethnicity (represented as white/non-white), socioeconomic status (represented by free and reduced-price meal eligibility), special education participation, and English learner (EL) status were statistically significantly related to responses on the MOCCA items. Among these subgroups, English learners were unique in that they were assessed on the same reading comprehension assessment as non-ELs, so their different performance could easily be ascribed to their limited English language proficiency. To explore whether any MOCCA items might involve construct-irrelevant factors beyond the language differences between ELs and non-ELs, twelve items showing differential group responses were identified through IRT-based differential item functioning (DIF) analysis. Results indicate that the twelve DIF items did not show particularly distinctive text features compared to non-DIF items when reviewed in terms of the text complexity and story features used as predictors in the LLTM analysis. The DIF analysis served to detect whether the assessment measured the intended construct equally for all subgroups and whether any items behaved unexpectedly. The results provide information about item characteristics related to test validity as well as fairness.
dc.description.embargo: 2022-08-17
dc.identifier.uri: https://hdl.handle.net/1794/25703
dc.language.iso: en_US
dc.publisher: University of Oregon
dc.rights: All Rights Reserved.
dc.subject: Differential Item Functioning
dc.subject: English Learners
dc.subject: Explanatory Item Response Modeling
dc.subject: Reading Comprehension Assessment
dc.subject: Story features
dc.subject: Text difficulty
dc.title: Explanatory Item Response Modeling of A Reading Comprehension Assessment
dc.type: Electronic Thesis or Dissertation
thesis.degree.discipline: Department of Educational Methodology, Policy, and Leadership
thesis.degree.grantor: University of Oregon
thesis.degree.level: doctoral
thesis.degree.name: Ph.D.
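
The abstract above refers to the linear logistic test model (LLTM) used to relate item properties to item difficulty. For reference, a minimal sketch of the standard LLTM specification is given below; the notation (person ability theta, property weights eta, and the item-by-property design codes q) follows the common textbook formulation and is an assumption about notation, not the dissertation's exact parameterization.

\[
P(X_{pi} = 1 \mid \theta_p) = \frac{\exp\left(\theta_p - \beta_i\right)}{1 + \exp\left(\theta_p - \beta_i\right)},
\qquad
\beta_i = \sum_{k=1}^{K} q_{ik}\,\eta_k ,
\]

where \(\theta_p\) is the ability of person \(p\), \(\beta_i\) is the difficulty of item \(i\), \(q_{ik}\) codes whether item property \(k\) (for example, a text complexity measure or a story feature) applies to item \(i\), and \(\eta_k\) is the estimated effect of that property on difficulty. In this decomposition, item difficulty is explained by item properties rather than estimated freely for each item.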

Files

Original bundle
Name: Park_oregon_0171A_12673.pdf
Size: 1.37 MB
Format: Adobe Portable Document Format