Publication Date

| Date Range | Results |
| --- | --- |
| In 2026 | 6 |
| Since 2025 | 2195 |
| Since 2022 (last 5 years) | 12710 |
| Since 2017 (last 10 years) | 33835 |
| Since 2007 (last 20 years) | 68326 |
Descriptor

| Descriptor | Results |
| --- | --- |
| Foreign Countries | 30532 |
| Test Validity | 21728 |
| Scores | 18248 |
| Academic Achievement | 16912 |
| Test Construction | 16738 |
| Test Reliability | 15015 |
| Achievement Tests | 14839 |
| Standardized Tests | 14712 |
| Comparative Analysis | 14429 |
| Elementary Secondary Education | 13038 |
| Language Tests | 12549 |
Audience

| Audience | Results |
| --- | --- |
| Practitioners | 5034 |
| Teachers | 3391 |
| Researchers | 2630 |
| Policymakers | 1229 |
| Administrators | 976 |
| Students | 687 |
| Parents | 325 |
| Counselors | 216 |
| Community | 162 |
| Support Staff | 50 |
| Media Staff | 34 |
Location

| Location | Results |
| --- | --- |
| Turkey | 2815 |
| Australia | 2426 |
| Canada | 2269 |
| California | 1853 |
| United States | 1725 |
| Texas | 1615 |
| China | 1578 |
| United Kingdom | 1315 |
| Florida | 1312 |
| United Kingdom (England) | 1202 |
| Germany | 1121 |
What Works Clearinghouse Rating

| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 121 |
| Meets WWC Standards with or without Reservations | 189 |
| Does not meet standards | 174 |
Conoyer, Sarah J.; Therrien, William J.; White, Kristen K. – Assessment for Effective Intervention, 2022
Meta-analysis was used to examine curriculum-based measurement in the content areas of social studies and science. Nineteen studies published between 1998 and 2020 were reviewed to determine the overall mean correlation for criterion validity and to examine alternate-form reliability and slope coefficients. An overall mean correlation of 0.59 was…
Descriptors: Curriculum Based Assessment, Test Validity, Test Reliability, Science Tests
Menold, Natalja; Raykov, Tenko – Educational and Psychological Measurement, 2022
The possible dependency of criterion validity on item formulation in a multicomponent measuring instrument is examined. The discussion is concerned with evaluation of the differences in criterion validity between two or more groups (populations/subpopulations) that have been administered instruments with items having differently formulated item…
Descriptors: Test Items, Measures (Individuals), Test Validity, Difficulty Level
Olney, Andrew M. – Grantee Submission, 2022
Multi-angle question answering models have recently been proposed that promise to perform related tasks like question generation. However, performance on related tasks has not been thoroughly studied. We investigate a leading model called Macaw on the task of multiple choice question generation and evaluate its performance on three angles that…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Models
Kim, Sooyeon; Walker, Michael E. – Educational Measurement: Issues and Practice, 2022
Test equating requires collecting data to link the scores from different forms of a test. Problems arise when equating samples are not equivalent and the test forms to be linked share no common items by which to measure or adjust for the group nonequivalence. Using data from five operational test forms, we created five pairs of research forms for…
Descriptors: Ability, Tests, Equated Scores, Testing Problems
Zyluk, Natalia; Karpe, Karolina; Urbanski, Mariusz – SAGE Open, 2022
This paper describes the modification of a research tool designed to measure the development of personal epistemology--the "Standardized Epistemological Understanding Assessment" (SEUA). SEUA was constructed as an improved version of the instrument initially proposed by Kuhn et al. SEUA proved to be a more…
Descriptors: Epistemology, Research Tools, Beliefs, Test Items
Abu-Ghazalah, Rashid M.; Dubins, David N.; Poon, Gregory M. K. – Applied Measurement in Education, 2023
Multiple choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly…
Descriptors: Guessing (Tests), Multiple Choice Tests, Probability, Models
Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice-multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)
Michael Norman Voth – ProQuest LLC, 2023
The purpose of this study is to examine how the COVID-19 pandemic has affected the learning of students in public education through the analysis of standardized assessment performance before and during COVID-19. The study also compares virtual learning and face-to-face students' change in performance on the standardized assessments. To answer the…
Descriptors: Standardized Tests, Scores, Academic Achievement, Middle School Students
Musa Adekunle Ayanwale – Discover Education, 2023
Examination scores obtained by students from the West African Examinations Council (WAEC), and National Business and Technical Examinations Board (NABTEB) may not be directly comparable due to differences in examination administration, item characteristics of the subject in question, and student abilities. For more accurate comparisons, scores…
Descriptors: Equated Scores, Mathematics Tests, Test Items, Test Format
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Kirya, Kent Robert; Mashood, Kalarattu Kandiyi; Yadav, Lakhan Lal – Journal of Turkish Science Education, 2022
In this study, we administered and evaluated circular motion concept question items with a view to developing an inventory suitable for the Ugandan context. Before administering the circular motion concept items, six physics experts and ten undergraduate physics students carried out the face and content validation. One hundred eighteen undergraduate…
Descriptors: Motion, Scientific Concepts, Test Construction, Test Items
Merchant, Stefan; Rich, Jessica; Klinger, Don A. – Canadian Journal of Educational Administration and Policy, 2022
Both school and district administrators use the results of standardized, large-scale tests to inform decisions about the need for, or success of, educational programs and interventions. However, test results at the school level are subject to random fluctuations due to changes in cohort, test items, and other factors outside of the school's…
Descriptors: Standardized Tests, Foreign Countries, Generalizability Theory, Scores
Panahi, Ali; Mohebbi, Hassan – Language Teaching Research Quarterly, 2022
High stakes testing, such as IELTS, is designed to select individuals for decision-making purposes (Fulcher, 2013b). Hence, there is a slow-growing stream of research investigating the subskills of IELTS listening and, in feedback terms, its effects on individuals and educational programs. Here, cognitive diagnostic assessment (CDA) performs it…
Descriptors: Decision Making, Listening Comprehension Tests, Multiple Choice Tests, Diagnostic Tests
New York State Education Department, 2022
The instructions in this manual explain the responsibilities of school administrators for the New York State Testing Program (NYSTP) Grades 3-8 English Language Arts and Mathematics Paper-Based Field Tests. School administrators must be thoroughly familiar with the contents of the manual, and the policies and procedures must be followed as written…
Descriptors: Testing Programs, Mathematics Tests, Test Format, Computer Assisted Testing
Sanne Unger; Alanna Lecher – Journal of Effective Teaching in Higher Education, 2024
This action research project sought to understand how giving students a choice in how to demonstrate mastery of a reading would affect both grades and evaluations of the instructor, given that assessment choice might increase student engagement. We examined the effect of student assessment choice on grades and course evaluations, the two…
Descriptors: College Faculty, College Students, Alternative Assessment, Test Selection
