Publication Date
| Date range | Results |
| --- | --- |
| In 2026 | 3 |
| Since 2025 | 472 |
| Since 2022 (last 5 years) | 2430 |
| Since 2017 (last 10 years) | 6610 |
| Since 2007 (last 20 years) | 18014 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 2140 |
| Teachers | 1218 |
| Researchers | 1054 |
| Administrators | 485 |
| Policymakers | 455 |
| Students | 176 |
| Parents | 147 |
| Counselors | 100 |
| Community | 61 |
| Media Staff | 17 |
| Support Staff | 15 |
Location
| Location | Results |
| --- | --- |
| Canada | 784 |
| Australia | 690 |
| United States | 582 |
| California | 569 |
| United Kingdom | 479 |
| Texas | 413 |
| Florida | 403 |
| Germany | 392 |
| New York | 378 |
| United Kingdom (England) | 369 |
| China | 361 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 17 |
| Meets WWC Standards with or without Reservations | 22 |
| Does not meet standards | 21 |
National Assessment of Educational Progress (NAEP), 2024
The National Assessment of Educational Progress (NAEP) is an integral measure of academic progress across the nation and over time. It is the largest nationally representative and continuing assessment of what our nation's students know and can do in various subjects, such as civics, mathematics, reading, and U.S. history. The program also…
Descriptors: National Competency Tests, Preadolescents, Adolescents, Mathematics Achievement
Yannick Rothacher; Carolin Strobl – Journal of Educational and Behavioral Statistics, 2024
Random forests are a nonparametric machine learning method, which is currently gaining popularity in the behavioral sciences. Despite random forests' potential advantages over more conventional statistical methods, a remaining question is how reliably informative predictor variables can be identified by means of random forests. The present study…
Descriptors: Predictor Variables, Selection Criteria, Behavioral Sciences, Reliability
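The abstract above asks how reliably a random forest can flag the truly informative predictors. As a hedged illustration of that question (not the authors' code), the sketch below fits a scikit-learn random forest to simulated data in which only two of ten predictors carry signal and ranks the variables by permutation importance; all names and data are made up.

```python
# Minimal sketch (not the authors' code): checking whether a random forest
# recovers informative predictors from a mix of signal and noise variables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 10))                              # 10 candidate predictors
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)    # only X0 and X1 are informative

forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Permutation importance is less biased than impurity-based importance
# when predictors differ in scale or number of categories.
result = permutation_importance(forest, X, y, n_repeats=20, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"X{idx}: {result.importances_mean[idx]:.3f}")
```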
Hwanggyu Lim; Danqi Zhu; Edison M. Choe; Kyung T. Han – Journal of Educational Measurement, 2024
This study presents a generalized version of the residual differential item functioning (RDIF) detection framework in item response theory, named GRDIF, to analyze differential item functioning (DIF) in multiple groups. The GRDIF framework retains the advantages of the original RDIF framework, such as computational efficiency and ease of…
Descriptors: Item Response Theory, Test Bias, Test Reliability, Test Construction
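The GRDIF framework itself is not reproduced here, but a much simpler logistic-regression DIF screen (in the spirit of Swaminathan and Rogers) conveys the basic idea of testing whether group membership predicts an item response after conditioning on ability. The data, effect sizes, and the use of statsmodels below are illustrative assumptions only.

```python
# Illustrative sketch only: a basic logistic-regression DIF screen,
# NOT the GRDIF framework described above. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, size=n)      # 0 = reference group, 1 = focal group
ability = rng.normal(size=n)            # proxy for ability (e.g., rest score)

# Simulate an item with uniform DIF against the focal group
logit = 1.2 * ability - 0.6 * group
item = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([ability, group]))
fit = sm.Logit(item, X).fit(disp=0)
print(fit.summary())   # a significant group coefficient suggests uniform DIF
```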
Pauline Frizelle; Ana Buckley; Tricia Biancone; Anna Ceroni; Darren Dahly; Paul Fletcher; Dorothy V. M. Bishop; Cristina McKean – Journal of Child Language, 2024
This study reports on the feasibility of using the Test of Complex Syntax-Electronic (TECS-E), as a self-directed app, to measure sentence comprehension in children aged 4 to 5½ years; how testing apps might be adapted for effective independent use; and agreement levels between face-to-face supported computerized and independent computerized…
Descriptors: Language Processing, Computer Software, Language Tests, Syntax
Steffen Erickson – Society for Research on Educational Effectiveness, 2024
Background: Structural Equation Modeling (SEM) is a powerful and broadly utilized statistical framework. Researchers employ these models to dissect relationships into direct, indirect, and total effects (Bollen, 1989). These models unpack the "black box" issues within cause-and-effect studies by examining the underlying theoretical…
Descriptors: Structural Equation Models, Causal Models, Research Methodology, Error of Measurement
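To make the direct/indirect/total decomposition mentioned in the abstract concrete, the following sketch estimates a simple X → M → Y mediation model with ordinary least squares on simulated data; it is a toy illustration under assumed coefficients, not the SEM machinery discussed in the paper.

```python
# Minimal sketch, not from the paper: decomposing a total effect into
# direct and indirect components in a simple X -> M -> Y mediation model.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # a-path
y = 0.3 * x + 0.7 * m + rng.normal(size=n)   # c'-path (direct) and b-path

def ols(outcome, *cols):
    """Return slope coefficients from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(outcome)), *cols])
    return np.linalg.lstsq(X, outcome, rcond=None)[0][1:]

(a,) = ols(m, x)               # X -> M
c_prime, b = ols(y, x, m)      # X -> Y controlling for M, and M -> Y
indirect = a * b
total = c_prime + indirect
print(f"direct={c_prime:.3f}, indirect={indirect:.3f}, total={total:.3f}")
```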
Roderick J. Little; James R. Carpenter; Katherine J. Lee – Sociological Methods & Research, 2024
Missing data are a pervasive problem in data analysis. Three common methods for addressing the problem are (a) complete-case analysis, where only units that are complete on the variables in an analysis are included; (b) weighting, where the complete cases are weighted by the inverse of an estimate of the probability of being complete; and (c)…
Descriptors: Foreign Countries, Probability, Robustness (Statistics), Responses
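A small, hedged sketch of methods (a) and (b) from the abstract: with simulated data in which missingness depends on an observed covariate, the complete-case mean is biased while the inverse-probability-weighted mean recovers the truth. The data-generating values and the statsmodels-based weighting model are assumptions made for illustration.

```python
# Illustrative comparison of complete-case analysis and inverse probability
# weighting (not the article's analysis). True population mean of y is 2.0.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)                        # always-observed covariate
y = 2.0 + 1.0 * z + rng.normal(size=n)        # outcome of interest
p_obs = 1 / (1 + np.exp(-(0.5 + 1.5 * z)))    # missingness depends on z
observed = rng.binomial(1, p_obs).astype(bool)

df = pd.DataFrame({"z": z, "y": np.where(observed, y, np.nan)})

# (a) Complete-case mean: biased here because missingness depends on z
cc_mean = df["y"].dropna().mean()

# (b) Weighting: model P(observed | z), weight complete cases by 1 / p_hat
fit = sm.Logit(observed.astype(int), sm.add_constant(z)).fit(disp=0)
p_hat = fit.predict(sm.add_constant(z))
ipw_mean = np.average(df.loc[observed, "y"], weights=1 / p_hat[observed])

print(f"complete-case mean: {cc_mean:.3f}, IPW mean: {ipw_mean:.3f}, truth: 2.0")
```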
Kim, Sooyeon; Walker, Michael – ETS Research Report Series, 2021
In this investigation, we used real data to assess potential differential effects associated with taking a test in a test center (TC) versus testing at home using remote proctoring (RP). We used a pseudo-equivalent groups (PEG) approach to examine group equivalence at the item level and the total score level. If our assumption holds that the PEG…
Descriptors: Testing, Distance Education, Comparative Analysis, Test Items
Ocak, Gürbüz; Karakus, Gülçin – Themes in eLearning, 2021
The coronavirus pandemic, which affected every aspect of life around the world, has led to radical changes in teaching and learning methods. It is no longer healthy for students to spend long periods together in the classroom. For this reason, online education applications have been implemented rapidly around the world. Not only the education…
Descriptors: Undergraduate Students, Student Attitudes, Distance Education, Computer Assisted Testing
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
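Rapid-guessing behavior is commonly flagged by comparing item response times against a threshold. The sketch below shows that general idea with made-up data and an arbitrary 3-second cutoff; it is not the thresholding procedure used in the study.

```python
# Toy example: flag rapid guesses by response time, then compare
# rapid-guess rates by item type. Data and the threshold are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "item_type": ["multiple_choice", "multiple_choice", "TEI", "TEI", "TEI"],
    "rt_seconds": [1.8, 24.0, 2.5, 40.0, 35.0],
})

THRESHOLD = 3.0  # seconds; real studies often use item-specific thresholds
responses["rapid_guess"] = responses["rt_seconds"] < THRESHOLD
print(responses.groupby("item_type")["rapid_guess"].mean())
```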
Headrick, Jonathon; Harris-Reeves, Brooke; Daly-Olm, Talei – Journal of University Teaching and Learning Practice, 2021
Collaborative testing is recognised as an effective assessment approach linked to positive student outcomes including enhanced test performance and reduced assessment anxiety. While collaborative testing approaches appear beneficial to university students in general, it is unclear whether students from different year levels benefit to the same…
Descriptors: College Freshmen, Undergraduate Students, Foreign Countries, Student Attitudes
Salmani Nodoushan, Mohammad Ali – Online Submission, 2021
This paper follows a line of logical argumentation to claim that what Samuel Messick conceptualized about construct validation has probably been misunderstood by some educational policy makers, practicing educators, and classroom teachers. It argues that, while Messick's unified theory of test validation aimed at (a) warning educational…
Descriptors: Construct Validity, Test Theory, Test Use, Affordances
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The research method followed the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
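Template-based AIG works by instantiating an item model (a stem with constrained placeholders) into many concrete items. The sketch below illustrates that general mechanism with a made-up verbal template and answer-key rule; the actual study generated non-verbal figural items from a BILSEM exam item, which is not reproduced here.

```python
# Generic illustration of template-based item generation: fill placeholder
# values subject to constraints and compute the answer key from the model.
from itertools import product

template = ("A pattern shows {n} shapes. Every {k}th shape is shaded. "
            "How many shapes are shaded?")

def generate_items(ns, ks):
    items = []
    for n, k in product(ns, ks):
        if k <= n:                         # constraint on placeholder values
            stem = template.format(n=n, k=k)
            key = n // k                   # answer key derived from the template
            items.append({"stem": stem, "key": key})
    return items

for item in generate_items(ns=[8, 12], ks=[2, 3, 4]):
    print(item)
```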
New York State Education Department, 2022
The instructions in this manual explain the responsibilities of school administrators for the New York State Testing Program (NYSTP) Grades 3-8 English Language Arts and Mathematics Paper-Based Field Tests. School administrators must be thoroughly familiar with the contents of the manual, and the policies and procedures must be followed as written…
Descriptors: Testing Programs, Mathematics Tests, Test Format, Computer Assisted Testing
Clements, Douglas H.; Sarama, Julie; Tatsuoka, Curtis; Banse, Holland; Tatsuoka, Kikumi – Journal of Research in Childhood Education, 2022
We report on an innovative computer-adaptive assessment, the Comprehensive Research-based Early Math Ability Test (CREMAT), using the case of 1st- and 2nd-graders' understanding of geometric measurement. CREMAT was developed with multiple aims in mind, including to: (1) be administered with a reasonable number of items, (2) identify the level(s) of…
Descriptors: Cognitive Tests, Diagnostic Tests, Adaptive Testing, Computer Assisted Testing
Computerized Adaptive Assessment of Understanding of Programming Concepts in Primary School Children
Hogenboom, Sally A. M.; Hermans, Felienne F. J.; Van der Maas, Han L. J. – Computer Science Education, 2022
Background and Context: Valid assessment of understanding of programming concepts in primary school children is essential to implement and improve programming education. Objective: We developed and validated the Computerized Adaptive Programming Concepts Test (CAPCT) with a novel application of Item Response Theory. The CAPCT is a web-based and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Programming, Knowledge Level
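The CAPCT pairs adaptive item selection with an item response theory calibration. As a generic, hedged illustration of one adaptive-testing step, the sketch below selects the next item from a hypothetical two-parameter logistic (2PL) item bank by maximizing Fisher information at the current ability estimate; the item parameters and selection rule are assumptions, not the CAPCT's.

```python
# One step of a generic computerized adaptive test under a 2PL IRT model:
# choose the unadministered item with maximum information at theta_hat.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1 - p)

# Hypothetical item bank: discrimination (a) and difficulty (b) per item
bank = np.array([[1.2, -1.0], [0.8, 0.0], [1.5, 0.5], [1.0, 1.2]])
administered = {0}                 # indices of items already given
theta_hat = 0.3                    # current ability estimate

info = [item_information(theta_hat, a, b) if i not in administered else -np.inf
        for i, (a, b) in enumerate(bank)]
print(f"next item index: {int(np.argmax(info))}")
```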
