Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 3
Since 2006 (last 20 years): 7
Descriptor
Comparative Analysis: 14
Difficulty Level: 14
Test Theory: 14
Test Items: 11
Item Response Theory: 6
Achievement Tests: 3
Foreign Countries: 3
Models: 3
Test Validity: 3
Bayesian Statistics: 2
Comparative Education: 2
Author
Barrett, Frank: 1
Bretz, Stacey Lowery: 1
Brutten, Sheila R.: 1
Dirlik, Ezgi Mor: 1
Eaton, Philip: 1
Garrido, Mariquita: 1
Gialluca, Kathleen A.: 1
Guler, Nese: 1
Ilhan, Mustafa: 1
Iran-Nejad, Asghar: 1
Jansen, Margo G. H.: 1
Publication Type
Reports - Research: 11
Journal Articles: 9
Speeches/Meeting Papers: 2
Collected Works - Proceedings: 1
Reports - Descriptive: 1
Reports - Evaluative: 1
Education Level
Grade 8: 2
Elementary Secondary Education: 1
Grade 3: 1
Grade 4: 1
Grade 7: 1
Higher Education: 1
Postsecondary Education: 1
Audience
Researchers: 1
Location
United States: 2
Netherlands: 1
Sweden: 1
Turkey: 1
United Kingdom (England): 1
United Kingdom (Northern Ireland): 1
United Kingdom (Wales): 1
Virginia: 1
Assessments and Surveys
Trends in International Mathematics and Science Study (TIMSS): 2
Defining Issues Test: 1
Eaton, Philip; Johnson, Keith; Barrett, Frank; Willoughby, Shannon – Physical Review Physics Education Research, 2019
For proper assessment selection, understanding the statistical similarities among assessments that measure the same, or very similar, topics is imperative. This study seeks to extend the comparative analysis between the Brief Electricity and Magnetism Assessment (BEMA) and the Conceptual Survey of Electricity and Magnetism (CSEM) presented by…
Descriptors: Test Theory, Item Response Theory, Comparative Analysis, Energy
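A common first step in this kind of cross-instrument comparison is to correlate examinees' total scores on the two tests. A minimal sketch, not the authors' actual analysis; the score arrays below are hypothetical stand-ins for BEMA and CSEM totals:

```python
import numpy as np

# Hypothetical total scores for the same examinees on two instruments
# (stand-ins for BEMA and CSEM; not data from the study).
bema = np.array([12, 18, 25, 9, 21, 15, 27, 11])
csem = np.array([14, 20, 24, 10, 19, 17, 30, 12])

# Pearson correlation as a crude index of statistical similarity.
r = np.corrcoef(bema, csem)[0, 1]
print(f"score correlation r = {r:.2f}")
```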
Ilhan, Mustafa; Guler, Nese – Eurasian Journal of Educational Research, 2018
Purpose: This study aimed to compare difficulty indices calculated for open-ended items in accordance with classical test theory (CTT) and the Many-Facet Rasch Model (MFRM). Although theoretical differences between CTT and MFRM occupy much space in the literature, the number of studies empirically comparing the two theories is quite limited…
Descriptors: Difficulty Level, Test Items, Test Theory, Item Response Theory
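For context on the CTT side of that comparison: the classical difficulty index for a polytomous (open-ended) item is typically the mean awarded score divided by the maximum attainable score. A minimal sketch with made-up ratings:

```python
import numpy as np

# Hypothetical rater-assigned scores (0-4) for one open-ended item.
scores = np.array([3, 4, 2, 1, 4, 3, 2, 0, 3, 4])
max_score = 4

# CTT difficulty index for a polytomous item: mean score / maximum score.
# Values near 1 indicate an easy item, values near 0 a hard one.
p = scores.mean() / max_score
print(f"CTT difficulty index p = {p:.2f}")
```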
Dirlik, Ezgi Mor – International Journal of Progressive Education, 2019
Item response theory (IRT) offers several advantages over its predecessor, classical test theory (CTT), such as invariant item parameters and ability estimates that do not depend on the particular set of items administered. To obtain these advantages, however, certain assumptions must be met: unidimensionality, normality, and local independence. However, it is not…
Descriptors: Comparative Analysis, Nonparametric Statistics, Item Response Theory, Models
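One quick, admittedly rough screen for the unidimensionality assumption mentioned above is the ratio of the first to the second eigenvalue of the inter-item correlation matrix; a large ratio suggests a single dominant factor. A sketch with simulated Rasch-style responses, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 examinees on 10 dichotomous items driven by one latent trait.
theta = rng.normal(size=(500, 1))
b = np.linspace(-1.5, 1.5, 10)
prob = 1 / (1 + np.exp(-(theta - b)))         # Rasch response probabilities
x = (rng.random((500, 10)) < prob).astype(int)

# Eigenvalues of the inter-item correlation matrix, largest first.
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
print(f"first/second eigenvalue ratio = {eigvals[0] / eigvals[1]:.1f}")
```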
Mitchell, Alison M.; Truckenmiller, Adrea; Petscher, Yaacov – Communique, 2015
As part of the Race to the Top initiative, the United States Department of Education made nearly $1 billion available in State Educational Technology grants with the goal of ramping up school technology. One result of this effort is that states, districts, and schools across the country are using computerized assessments to measure their…
Descriptors: Computer Assisted Testing, Educational Technology, Testing, Efficiency
Bretz, Stacey Lowery; Linenberger, Kimberly J. – Biochemistry and Molecular Biology Education, 2012
Enzyme function is central to student understanding of multiple topics within the biochemistry curriculum. In particular, students must understand how enzymes and substrates interact with one another. This manuscript describes the development of a 15-item Enzyme-Substrate Interactions Concept Inventory (ESICI) that measures student understanding…
Descriptors: Biochemistry, Science Education, Science Instruction, Scientific Concepts
Wang, Jianjun – School Science and Mathematics, 2011
As the largest international study ever undertaken, the Trends in International Mathematics and Science Study (TIMSS) has been held up as a benchmark for measuring U.S. student performance in a global context. In-depth analyses of the TIMSS project are conducted in this study to examine key issues of the comparative investigation: (1) item flaws in mathematics…
Descriptors: Test Items, Figurative Language, Item Response Theory, Benchmarking
Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine the comparability of an online version to the original paper-and-pencil version of the Defining Issues Test 2 (DIT2). This study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory
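The CTT reliability evidence referred to above is typically Cronbach's alpha, computed separately for each test format. A minimal sketch with simulated data (not the DIT2 itself):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an examinees-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(size=(200, 1))
responses = true_score + rng.normal(size=(200, 12))  # 12 noisy items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```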

Ramsay, James O. – Psychometrika, 1989
An alternative to the Rasch model is introduced. It characterizes strength of response according to the ratio of ability and difficulty parameters rather than their difference. Joint estimation and marginal estimation models are applied to two test data sets. (SLD)
Descriptors: Ability, Bayesian Statistics, College Entrance Examinations, Comparative Analysis
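To make the contrast concrete: the Rasch model links the probability of a correct response to the difference between ability and difficulty, while a ratio-based model in the spirit of the abstract replaces the difference with a quotient. The second form below is an illustrative realization of that idea, not necessarily Ramsay's exact specification:

```latex
% Rasch (difference) form vs. an illustrative ratio-based alternative
P_{\text{Rasch}}(X = 1 \mid \theta, b) = \frac{e^{\theta - b}}{1 + e^{\theta - b}},
\qquad
P_{\text{ratio}}(X = 1 \mid \theta, b) = \frac{e^{\theta / b}}{1 + e^{\theta / b}},
\quad \theta, b > 0.
```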

Jansen, Margo G. H. – Journal of Educational Statistics, 1986
In this paper a Bayesian procedure is developed for the simultaneous estimation of the reading-ability and difficulty parameters that, under the multiplicative Poisson model, are assumed to govern reading errors. According to several criteria, the Bayesian estimates are better than comparable maximum likelihood estimates. (Author/JAZ)
Descriptors: Achievement Tests, Bayesian Statistics, Comparative Analysis, Difficulty Level
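In the multiplicative Poisson model, the error count for a reader on a text is Poisson with rate equal to the reader's error-proneness times the text's difficulty. Holding the difficulties fixed, a gamma prior on the reader parameter is conjugate, which gives the flavor of the Bayesian update involved. A minimal sketch under that assumption, with all numbers hypothetical:

```python
import numpy as np

# Observed error counts for one reader across four texts.
errors = np.array([3, 5, 2, 7])               # hypothetical counts
difficulty = np.array([0.8, 1.2, 0.6, 1.5])   # hypothetical, treated as known

# Gamma(a0, rate=b0) prior on the reader's error-proneness parameter xi.
a0, b0 = 2.0, 1.0

# Poisson likelihood with rate xi * delta_i => conjugate gamma posterior:
# Gamma(a0 + sum of counts, rate = b0 + sum of difficulties).
a_post = a0 + errors.sum()
b_post = b0 + difficulty.sum()
print(f"posterior mean of xi = {a_post / b_post:.2f}")
```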
Brutten, Sheila R.; And Others – 1988
A study attempted to estimate the instructional sensitivity of items in three reading comprehension tests in English as a second language (ESL). Instructional sensitivity is a test-item construct defined as the tendency for a test item to vary in difficulty as a function of instruction. Similar tasks were given to readers at different proficiency…
Descriptors: College Students, Comparative Analysis, Difficulty Level, English (Second Language)
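Instructional sensitivity, as defined above, can be indexed in its simplest form by the change in an item's difficulty (proportion correct) between uninstructed and instructed groups. A minimal sketch with invented response vectors, not the study's ESL data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical right/wrong responses to one item before and after instruction.
pre = rng.random(40) < 0.35    # roughly 35% correct before instruction
post = rng.random(40) < 0.70   # roughly 70% correct after instruction

# Pre-to-post difference in proportion correct: a larger difference means
# the item's difficulty varies more as a function of instruction.
sensitivity = post.mean() - pre.mean()
print(f"instructional sensitivity = {sensitivity:.2f}")
```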
Gialluca, Kathleen A.; And Others – 1984
In this study, simulated and actual Air Force test data were used to compare the different procedures for equating mental tests: conventional (equipercentile and linear), Item Response Theory (IRT), and strong true-score theory (STST); data collection designs used were single-group, equivalent-groups, and anchor test. Equating transformations were…
Descriptors: Adults, Cognitive Ability, Cognitive Tests, Comparative Analysis
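Of the conventional procedures compared there, linear equating is the easiest to state: a score on form X maps to the form-Y score occupying the same standardized position. A sketch with the form moments invented for illustration:

```python
def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Map a form-X score to the form-Y scale by matching z-scores."""
    return mean_y + sd_y * (x - mean_x) / sd_x

# Hypothetical moments: form Y is slightly harder and more spread out.
print(linear_equate(30, mean_x=28.0, sd_x=6.0, mean_y=26.5, sd_y=7.0))  # ~28.8
```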
Rubin, Lois S.; Mott, David E. W. – 1983
Three different item-bank maintenance procedures were compared to investigate their cumulative effects on a series of longitudinal equatings of tests measuring the same characteristic. The item data came from Rasch one-parameter latent trait test calibrations. Comparisons of the results in the various final banks were made…
Descriptors: Comparative Analysis, Difficulty Level, Equated Scores, Grade 10
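Maintaining a Rasch-calibrated bank across repeated equatings generally requires linking each new calibration to the bank scale through common items. The simplest variant, mean-mean linking, is sketched below as an illustration; it is not necessarily one of the procedures the study compared, and the difficulty values are hypothetical:

```python
import numpy as np

# Hypothetical Rasch difficulties for the common items: bank values vs.
# the values from a new calibration on a fresh examinee sample.
bank = np.array([-0.8, 0.1, 0.9, 1.4])
new = np.array([-1.1, -0.2, 0.6, 1.1])

# Mean-mean linking constant: shift the new calibration onto the bank scale.
shift = bank.mean() - new.mean()
linked = new + shift
print(f"linking constant = {shift:.2f}")
```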
Garrido, Mariquita; Payne, David A. – 1987
Minimum competency cut-off scores on a statistics exam were estimated under four conditions: the Angoff judging method with item data available (n=20) and without (n=19), and the modified Angoff method with (n=19) and without (n=19) item data available to judges. The Angoff method required free-response percentage estimates (0-100 percent),…
Descriptors: Academic Standards, Comparative Analysis, Criterion Referenced Tests, Cutting Scores
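In the Angoff method compared here, each judge estimates, for every item, the percentage of minimally competent examinees who would answer it correctly; the cut score is the sum of the judge-averaged estimates. A minimal sketch with invented ratings:

```python
import numpy as np

# Hypothetical ratings: 3 judges x 5 items, each an estimated percentage
# of minimally competent examinees answering the item correctly.
ratings = np.array([
    [60, 45, 80, 70, 55],
    [65, 50, 75, 65, 60],
    [55, 40, 85, 75, 50],
])

# Average across judges per item, convert to proportions, and sum to get
# the expected score of a minimally competent examinee.
cut_score = (ratings.mean(axis=0) / 100).sum()
print(f"Angoff cut score = {cut_score:.2f} items out of {ratings.shape[1]}")
```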
van Weeren, J., Ed. – 1983
Presented in this symposium reader are nine papers, four of which deal with the theory and impact of the Rasch model on language testing and five of which discuss final examinations in secondary schools in both general and specific terms. The papers are: "Introduction to Rasch Measurement: Some Implications for Language Testing" (J. J.…
Descriptors: Adolescents, Comparative Analysis, Comparative Education, Difficulty Level