Showing all 6 results
Peer reviewed
Direct link
Sachse, Karoline A.; Haag, Nicole – Applied Measurement in Education, 2017
Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…
Descriptors: Error of Measurement, Test Bias, International Assessment, Computation
Peer reviewed
Direct link
Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2021
Large-scale assessments (LSAs) use Mislevy's "plausible value" (PV) approach to relate student proficiency to noncognitive variables administered in a background questionnaire. This method requires background variables to be completely observed, a requirement that is seldom fulfilled. In this article, we evaluate and compare the…
Descriptors: Data Analysis, Error of Measurement, Research Problems, Statistical Inference
Peer reviewed
Direct link
Rutkowski, Leslie – Applied Measurement in Education, 2014
Large-scale assessment programs such as the National Assessment of Educational Progress (NAEP), Trends in International Mathematics and Science Study (TIMSS), and Programme for International Student Assessment (PISA) use a sophisticated assessment administration design called matrix sampling that minimizes the testing burden on individual…
Descriptors: Measurement, Testing, Item Sampling, Computation
Peer reviewed
Direct link
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole – Journal of Educational Measurement, 2016
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Descriptors: Comparative Analysis, Measurement, Test Bias, Simulation
Peer reviewed
PDF on ERIC (full text available)
Carnoy, Martin – National Education Policy Center, 2015
Stanford education professor Martin Carnoy examines four main critiques of how international test results are used in policymaking. Of particular interest are critiques of the policy analyses published by the Programme for International Student Assessment (PISA). Using average PISA scores as a comparative measure of student achievement is misleading…
Descriptors: Criticism, Reputation, Test Validity, Error of Measurement
Peer reviewed
Direct link
Traynor, Anne; Raykov, Tenko – Comparative Education Review, 2013
In international achievement studies, questionnaires typically ask about the presence of particular household assets in students' homes. Responses to the assets questions are used to compute a total score, which is intended to represent household wealth in models of test performance. This study uses item analysis and confirmatory factor analysis…
Descriptors: Secondary School Students, Academic Achievement, Validity, Psychometrics