Showing 1 to 15 of 24 results
Susan Kowalski; Megan Kuhfeld; Scott Peters; Gustave Robinson; Karyn Lewis – NWEA, 2024
The purpose of this technical appendix is to share detailed results and more fully describe the sample and methods used to produce the research brief, "COVID's Impact on Science Achievement: Trends from 2019 through 2024." We investigated three main research questions in this brief: 1) How did science achievement in 2021 and 2024 compare to…
Descriptors: COVID-19, Pandemics, Science Achievement, Trend Analysis
Peer reviewed
PDF on ERIC
Hastedt, Dirk; Desa, Deana – Practical Assessment, Research & Evaluation, 2015
This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments on the premise that the average…
Descriptors: Case Studies, Simulation, International Programs, Testing Programs
Peer reviewed
Direct link
Oliveri, María Elena; Ercikan, Kadriye; Zumbo, Bruno D.; Lawless, René – International Journal of Testing, 2014
In this study, we contrast results from two differential item functioning (DIF) approaches (manifest and latent class) by the number of items and sources of items identified as DIF using data from an international reading assessment. The latter approach yielded three latent classes, presenting evidence of heterogeneity in examinee response…
Descriptors: Test Bias, Comparative Analysis, Reading Tests, Effect Size
Peer reviewed
PDF on ERIC
Ulu, Mustafa; Akar, Cüneyt – Educational Research and Reviews, 2016
This study aims at determining to what extent visuals contribute to success in non-routine problem solving process and what types of errors are made when solving with visuals. Comparative model was utilized for identifying the effect of visuals on student achievement, and clinical interview technique was used to determine the types of errors. In…
Descriptors: Problem Solving, Elementary School Students, Grade 4, Mathematics
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although International Association for the Evaluation of Educational Achievement-pioneered international large-scale assessment (ILSA) of education is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
Singer, Judith D., Ed.; Braun, Henry I., Ed.; Chudowsky, Naomi, Ed. – National Academy of Education, 2018
Results from international large-scale assessments (ILSAs) garner considerable attention in the media, academia, and among policy makers. Although there is widespread recognition that ILSAs can provide useful information, there is debate about what types of comparisons are the most meaningful and what could be done to assure more sound…
Descriptors: International Education, Educational Assessment, Educational Policy, Data Interpretation
Peer reviewed
Direct link
Yang, Kuay-Keng; Lin, Shu-Fen; Hong, Zuway-R; Lin, Huann-shyang – Creativity Research Journal, 2016
The purposes of this study were to (a) develop and validate instruments to assess elementary students' scientific creativity and science inquiry, (b) investigate the relationship between the two competencies, and (c) compare the two competencies among different grade level students. The scientific creativity test was composed of 7 open-ended items…
Descriptors: Elementary School Students, Elementary School Science, Creativity, Comparative Analysis
Peer reviewed
Direct link
Jensen, Nate; Rice, Andrew; Soland, James – Educational Evaluation and Policy Analysis, 2018
While most educators assume that not all students try their best on achievement tests, no current research examines if behaviors associated with low test effort, like rapidly guessing on test items, affect teacher value-added estimates. In this article, we examined the prevalence of rapid guessing to determine if this behavior varied by grade,…
Descriptors: Item Response Theory, Value Added Models, Achievement Tests, Test Items
Peer reviewed
Direct link
Beretvas, S. Natasha; Cawthon, Stephanie W.; Lockhart, L. Leland; Kaye, Alyssa D. – Educational and Psychological Measurement, 2012
This pedagogical article is intended to explain the similarities and differences between the parameterizations of two multilevel measurement model (MMM) frameworks. The conventional two-level MMM that includes item indicators and models item scores (Level 1) clustered within examinees (Level 2) and the two-level cross-classified MMM (in which item…
Descriptors: Test Bias, Comparative Analysis, Test Items, Difficulty Level
Peer reviewed
Direct link
Ferjencík, Ján; Slavkovská, Miriam; Kresila, Juraj – Journal of Pedagogy, 2015
The paper reports on the adaptation of a D-KEFS test battery for Slovakia. Drawing on concrete examples, it describes and illustrates the key issues relating to the transfer of test items from one socio-cultural environment to another. The standardisation sample of the population of Slovak pupils in the fourth year of primary school included 250…
Descriptors: Executive Function, Foreign Countries, Test Construction, Test Items
Peer reviewed
Direct link
Lee, Young-Sun; Park, Yoon Soo; Taylan, Didem – International Journal of Testing, 2011
Studies of international mathematics achievement such as the Trends in Mathematics and Science Study (TIMSS) have employed classical test theory and item response theory to rank individuals within a latent ability continuum. Although these approaches have provided insights into comparisons between countries, they have yet to examine how specific…
Descriptors: Mathematics Achievement, Achievement Tests, Models, Cognitive Measurement
Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed. – International Association for the Evaluation of Educational Achievement, 2013
This supplement describes national adaptations made to the international version of the TIMSS 2011 background questionnaires. This information provides users with a guide to evaluate the availability of internationally comparable data for use in secondary analyses involving the TIMSS 2011 background variables. Background questionnaire adaptations…
Descriptors: Questionnaires, Technology Transfer, Adoption (Ideas), Media Adaptation
Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed. – International Association for the Evaluation of Educational Achievement, 2013
The TIMSS 2011 International Database includes data for all questionnaires administered as part of the TIMSS 2011 assessment. This supplement contains the international version of the TIMSS 2011 background questionnaires and curriculum questionnaires in the following 10 sections: (1) Fourth Grade Student Questionnaire; (2) Fourth Grade Home…
Descriptors: Background, Questionnaires, Test Items, Grade 4
Peer reviewed
Direct link
Sparfeldt, Jorn R.; Kimmel, Rumena; Lowenkamp, Lena; Steingraber, Antje; Rost, Detlef H. – Educational Assessment, 2012
Multiple-choice (MC) reading comprehension test items comprise three components: text passage, questions about the text, and MC answers. The construct validity of this format has been repeatedly criticized. In three between-subjects experiments, fourth graders (N[subscript 1] = 230, N[subscript 2] = 340, N[subscript 3] = 194) worked on three…
Descriptors: Test Items, Reading Comprehension, Construct Validity, Grade 4
Peer reviewed
Direct link
Braeken, Johan; Blömeke, Sigrid – Assessment & Evaluation in Higher Education, 2016
Using data from the international Teacher Education and Development Study: Learning to Teach Mathematics (TEDS-M), the measurement equivalence of teachers' beliefs across countries is investigated for the case of "mathematics-as-a fixed-ability". Measurement equivalence is a crucial topic in all international large-scale assessments and…
Descriptors: Comparative Analysis, Bayesian Statistics, Test Bias, Teacher Education