Taking a Critical Look at Testing

Dr. Stephen Sireci is passionate about equity in educational testing. A professor in the College of Education’s Research, Educational Measurement, and Psychometrics Program (REMP) and director of the Center for Educational Assessment, Sireci researches test development and evaluation, particularly issues of validity, cross-lingual assessment, standard setting, and computer-based testing. His goal is to make testing fairer, more useful, and less prone to misuse: creating tests that better measure students’ knowledge and skills, and helping administrators use tests productively rather than for purposes beyond those for which they were designed (such as evaluating public school teachers). Sireci has built an international reputation, publishing over 130 peer-reviewed articles and book chapters, securing multiple contracts and grants with the U.S. Department of Education, the Educational Testing Service, the College Board, and Pearson Educational Measurement, attracting more than $10 million in external funding, and serving on dozens of national commissions, blue-ribbon panels, and advisory committees.
Fittingly, it was a standardized test that started Sireci on his career in educational measurement. He planned to go into industrial and organizational psychology, but wasn’t accepted into the doctoral programs to which he applied—likely because his GRE scores weren’t high enough. He had taken a course in research methods and statistics as a master’s student, and having found that particularly interesting, he applied to a psychometrics doctoral program at Fordham University as well, and was accepted there.
While working on his doctorate, Sireci took a job as research supervisor of testing at the Board of Education in Newark, New Jersey. “That was really enlightening,” he recalls. “My job was to analyze some of the test data and report back to the federal government as part of their Title I evaluation. I got to see inner city education up close and I got to see how people were using the data that was supposed to describe the situation I was seeing.”
A test is not inherently valid or inherently invalid; what matters is what you are using it for.
The assessment of English learners, or linguistic minorities in general, is one of the most difficult problems we face. Anytime someone’s taken a test in a language that is not their dominant language, it’s very hard to say you’ve got a good measure that’s sufficient.
As educational assessment advances, Sireci hopes to see the field take a very critical look at the positive and negative sides of testing. He asserts that testing must work in concert with instructors and curriculum, and that this was lost amid national policy changes over the last 20 years. “When No Child Left Behind came out, the first few years were probably one of the best periods in educational measurement, in that the tests were providing information that people were paying attention to, especially about achievement gaps,” Sireci explains. “And then when Race to the Top came out, they tied test scores to evaluating teachers, which might sound like a good idea, but it was a terrible idea. And one of the consequences of that is we lost the teaching community. We lost the buy-in of the teachers. For educational tests to contribute to education they have to be aligned with instruction and aligned with the curriculum. If the assessment folks are not in full partnership with the instructional folks, that falls apart.”