Faculty, Staff, and Student Presentations
The Research, Educational Measurement and Psychometrics (REMP) program will be well represented at the upcoming AERA and NCME meetings in Philadelphia in April. The ten current students and six faculty members and their research papers and presentations are identified below. Topics range from English language learners, adult education, scoring performance assessments, computer-based testing, test fairness, and test score reporting to studies comparing achievement in different countries around the world.
Diao, H. Using MDS to examine the internal structure of the Massachusetts Adult Proficiency Tests.
Fan, Fen, & Randall, Jennifer. Investigating the effects of social contexts and students' perceptions on math achievement in Singapore and USA.
Faulkner-Bond, Molly. Academic language and academic performance: A multi-level study of ELs.
Gandara, Fernanda, & Randall, Jennifer. Investigating the relationship between school-level accountability and science achievement across four countries: Australia, Korea, Portugal, and the United States.
Gandara, Fernanda. Evidence-centered design in large-scale assessments of English Language Learners.
Hambleton, Ronald. Applying IRT models to assessment in higher education. (Discussant)
Keller, Lisa, Zenisky, April, & Wang, Xi. De-constructing constructs: Evaluating stability of higher-order thinking across technology-rich scenarios.
Khademi, Vahab. Examining peer human rater rubric drift in automated essay scoring.
Marland, Joshua. Age matters: Grouping approaches for detecting differential item functioning due to age.
Rios, Joseph. Motivation issues in formative assessment across classrooms and schools.
Rios, Joseph. Does motivational instruction affect college students' performance on low-stakes assessment?
Rios, Joseph. Linking cross-lingual achievement tests in the United States: A need for improved practices.
Rios, Joseph. Identifying unmotivated examinees on student learning outcomes assessment: A comparison of two approaches.
Shin, Minjeong. An investigation of small sample equating methods.
Shin, Minjeong, Wang, Xi, Khademi, Vahab, Faulkner-Bond, Molly, Zenisky, April, & Sireci, Steve. Interactive score reports for state assessments: Practices and directions.
Shin, Minjeong, Wells, Craig, & Randall, Jennifer. Model selection, fit, and the invariance property in IRT.
Sireci, Steve. A validity framework for a scientific knowledge of teaching test.
Sireci, Steve. On the validity of useless tests.
Sireci, Steve. Evaluating computer-based test accommodations for English learners.
Sireci, Steve. Technology enhanced items for large-scale assessment. (Discussant)
Sireci, Steve, & Faulkner-Bond, Molly. Validity issues in the assessment of English learners.
Sireci, Steve, & Wells, Craig. Using internal structure evidence to evaluate accommodated tests.
Wang, Xi, & Hambleton, Ronald. Detecting drifted polytomous items: Using global versus step difficulty parameters.
Wang, Xi, Rios, Joseph, & Sireci, Steve. Detecting test speededness using multidimensional scaling.
Wells, Craig, Sireci, Steve, Bahry, Louise, & Fan, Fen. The effect of conditioning years on the reliability of student growth percentiles.
Zenisky, April. Applying lessons learned in educational score reporting to credentialing.
Zenisky, April, & Hambleton, Ronald. New developments with score reporting.