Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Kuo, Bor-Chen; Chen, Chun-Hua; Yang, Chih-Wei; Mok, Magdalena Mo Ching – Educational Psychology, 2016
Traditionally, teachers evaluate students' abilities via their total test scores. Recently, cognitive diagnostic models (CDMs) have begun to provide information about the presence or absence of students' skills or misconceptions. Nevertheless, CDMs are typically applied to tests with multiple-choice (MC) items, which provide less diagnostic…
Descriptors: Multiple Choice Tests, Responses, Test Items, Models
Orosco, Michael J. – International Journal of Science and Mathematics Education, 2016
The psychometric properties of a 10-item math motivation scale were empirically validated with an independent sample consisting of 182 elementary-school students. Analysis of the model dimensionality supported a one-factor structure fit. Item parameter estimates from a Classical Test Theory framework revealed that most items were highly…
Descriptors: Psychometrics, Student Motivation, Mathematics Instruction, Elementary School Students
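The Classical Test Theory item statistics this abstract refers to can be sketched in a few lines: difficulty as the proportion of examinees answering an item correctly, and discrimination as a corrected item-total correlation. The response matrix below is invented toy data, not from the study.

```python
# 0/1 scored responses: rows = examinees, columns = items (toy data).
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
]

def item_difficulty(responses, item):
    """CTT 'difficulty' (really a facility index): proportion correct on the item."""
    col = [row[item] for row in responses]
    return sum(col) / len(col)

def item_discrimination(responses, item):
    """Corrected item-total correlation: Pearson r between the item score and
    the total score over the remaining items."""
    x = [row[item] for row in responses]
    y = [sum(row) - row[item] for row in responses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx ** 0.5 * vy ** 0.5)

print(item_difficulty(responses, 0))  # 0.8
```

Under CTT, higher "difficulty" values mean easier items; positively discriminating items have higher item-total correlations.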
Starns, Jeffrey J.; Ksander, John C. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2016
Increasing the number of study trials creates a crossover pattern in source memory zROC slopes; that is, the slope is either below or above 1 depending on which source receives stronger learning. This pattern can be produced if additional learning affects memory processes such as the relative contribution of recollection and familiarity to source…
Descriptors: Memory, Learning Processes, Familiarity, Decision Making
Demir, Papatya; Avgin, Sakine S. – Journal of Education and Practice, 2016
Insensitivity to environmental pollution, and to the environment more broadly, has recently become a wide-ranging problem. One of the most important reasons for this problem is that individuals see nature as a boundless resource. To foster favorable behavior toward the living environment, teachers are required to be competent with the…
Descriptors: Climate, Science Teachers, Preservice Teachers, Pollution
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan – Physical Review Physics Education Research, 2016
This study investigated the multiple-choice test of understanding of vectors (TUV) by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the PARSCALE program. The TUV ability is an ability parameter, here estimated assuming…
Descriptors: Item Response Theory, Multiple Choice Tests, Difficulty Level, Test Items
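The three-parameter logistic (3PL) model mentioned in this abstract is the standard IRT form with discrimination (a), difficulty (b), and pseudo-guessing (c) parameters. A minimal sketch; the parameter values below are invented for illustration, not the TUV estimates:

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL IRT model:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    a: discrimination, b: difficulty, c: pseudo-guessing lower asymptote."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# For a 5-option multiple-choice item one might expect c near 0.2.
print(round(p_3pl(theta=0.0, a=1.0, b=0.0, c=0.2), 3))  # 0.6
```

The lower asymptote c keeps the predicted probability above chance level even for very low abilities, which is why the 3PL is common for multiple-choice items.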
Barniol, Pablo; Zavala, Genaro – Physical Review Physics Education Research, 2016
In this article we present several modifications of the mechanical waves conceptual survey, the most important test to date that has been designed to evaluate university students' understanding of four main topics in mechanical waves: propagation, superposition, reflection, and standing waves. The most significant changes are (i) modification of…
Descriptors: Physics, Test Construction, Science Tests, College Students
Kelcey, Ben; Wang, Shanshan; Cox, Kyle – Society for Research on Educational Effectiveness, 2016
Valid and reliable measurement of unobserved latent variables is essential to understanding and improving education. A common and persistent approach to assessing latent constructs in education is the use of rater inferential judgment. The purpose of this study is to develop high-dimensional explanatory random item effects models designed for…
Descriptors: Test Items, Models, Evaluators, Longitudinal Studies
Lazarus, Sheryl S.; Heritage, Margaret – National Center on Educational Outcomes, 2016
The new large-scale assessments rolled out by consortia and states are designed to measure student achievement of rigorous college- and career-ready (CCR) standards. Recent surveys of teachers in several states indicate that students with disabilities adjusted well to the new assessments, and liked many of their features, but that there also are…
Descriptors: Measurement, College Readiness, Career Readiness, Academic Achievement
Lessne, Deborah; Cidade, Melissa – National Center for Education Statistics, 2016
This report outlines the development, methodology, and results of the split-half administration of the 2015 School Crime Supplement (SCS) to the National Crime Victimization Survey (NCVS). The NCVS is sponsored by the U.S. Department of Justice, Bureau of Justice Statistics (BJS). The U.S. Census Bureau…
Descriptors: National Surveys, Victims of Crime, School Safety, Crime
An Application of a Random Mixture Nominal Item Response Model for Investigating Instruction Effects
Choi, Hye-Jeong; Cohen, Allan S.; Bottge, Brian A. – Grantee Submission, 2016
The purpose of this study was to apply a random item mixture nominal item response model (RIM-MixNRM) for investigating instruction effects. The host study design was a pre-test-and-post-test, school-based cluster randomized trial. A RIM-MixNRM was used to identify students' error patterns in mathematics at the pre-test and the post-test.…
Descriptors: Item Response Theory, Instructional Effectiveness, Test Items, Models
Kopf, Julia; Zeileis, Achim; Strobl, Carolin – Educational and Psychological Measurement, 2015
Differential item functioning (DIF) indicates the violation of the invariance assumption, for instance, in models based on item response theory (IRT). For item-wise DIF analysis using IRT, a common metric for the item parameters of the groups that are to be compared (e.g., for the reference and the focal group) is necessary. In the Rasch model,…
Descriptors: Test Items, Equated Scores, Test Bias, Item Response Theory
Choi, In-Hee; Wilson, Mark – Educational and Psychological Measurement, 2015
An essential feature of the linear logistic test model (LLTM) is that item difficulties are explained using item design properties. By taking advantage of this explanatory aspect of the LLTM, in a mixture extension of the LLTM, the meaning of latent classes is specified by how item properties affect item difficulties within each class. To improve…
Descriptors: Classification, Test Items, Difficulty Level, Statistical Analysis
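In the LLTM, each item difficulty is decomposed into a weighted sum of design-property effects, beta_i = sum_k q_ik * eta_k, where Q is the item-by-property design matrix and eta the basic parameters. A toy sketch; the Q matrix and eta values are invented, not from the study:

```python
# LLTM: item difficulty = design-matrix row dotted with basic parameters.
# Q[i][k] = 1 if item i involves cognitive operation k (invented design matrix).
Q = [
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
]
eta = [0.5, 0.3, 0.8]  # difficulty contribution of each operation (invented)

def lltm_difficulties(Q, eta):
    """Item difficulties implied by the LLTM: beta_i = sum_k Q[i][k] * eta[k]."""
    return [sum(q * e for q, e in zip(row, eta)) for row in Q]

print([round(b, 2) for b in lltm_difficulties(Q, eta)])  # [0.5, 0.8, 1.6]
```

The mixture extension the abstract describes lets eta differ across latent classes, so the same design properties can contribute differently to difficulty within each class.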
Cox, Troy L.; Bown, Jennifer; Burdis, Jacob – Foreign Language Annals, 2015
This study investigates the effect of proficiency- vs. performance-based elicited imitation (EI) assessment. EI requires test-takers to repeat sentences in the target language. The accuracy with which test-takers repeat sentences correlates highly with their language proficiency. However, in EI, the factors that render an item…
Descriptors: Language Proficiency, Imitation, Sentences, Correlation
Zou, Min; Wu, Wenxin – English Language Teaching, 2015
Since its first pilot study was launched in 2003, the China Accreditation Test for Translators and Interpreters (CATTI) has developed into the most authoritative translation and interpretation proficiency qualification accreditation test in China and has played an important role in assessing and cultivating translators and interpreters. Based on the…
Descriptors: Foreign Countries, Translation, Test Validity, Test Reliability
Cheng, Ying; Patton, Jeffrey M.; Shao, Can – Educational and Psychological Measurement, 2015
a-Stratified computerized adaptive testing with b-blocking (AST), as an alternative to the widely used maximum Fisher information (MFI) item selection method, can effectively balance item pool usage while providing accurate latent trait estimates in computerized adaptive testing (CAT). However, previous comparisons of these methods have treated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
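The maximum Fisher information (MFI) rule this abstract contrasts with a-stratified selection picks, at each CAT step, the pool item most informative at the current ability estimate; for a 2PL item the information is a^2 * P(theta) * (1 - P(theta)). A minimal sketch with an invented three-item pool:

```python
import math

def p_2pl(theta, a, b):
    """2PL response probability: 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_mfi(theta, pool):
    """Return the id of the pool item with maximum Fisher information at theta.
    pool: dict mapping item_id -> (a, b)."""
    return max(pool, key=lambda i: fisher_info(theta, *pool[i]))

pool = {"easy": (1.0, -1.5), "matched": (1.2, 0.1), "hard": (0.8, 2.0)}
print(select_mfi(theta=0.0, pool=pool))  # prints "matched"
```

Because information peaks where difficulty matches ability and grows with a^2, MFI tends to overexpose high-discrimination items; a-stratified designs with b-blocking spread selection across discrimination strata to balance pool usage.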
