Showing 1 to 15 of 38 results
Peer reviewed
Feinberg, Richard A.; von Davier, Matthias – Journal of Educational and Behavioral Statistics, 2020
The literature showing that subscores fail to add value is vast; yet despite their typical redundancy and the frequent presence of substantial statistical errors, many stakeholders remain convinced of their necessity. This article describes a method for identifying and reporting unexpectedly high or low subscores by comparing each examinee's…
Descriptors: Scores, Probability, Statistical Distributions, Ability
Peer reviewed
Karadavut, Tugba – International Journal of Assessment Tools in Education, 2019
Item Response Theory (IRT) models traditionally assume a normal distribution for ability. Although normality is often a reasonable assumption for ability, it is rarely met for observed scores in educational and psychological measurement. Assumptions regarding ability distribution were previously shown to have an effect on IRT parameter estimation.…
Descriptors: Item Response Theory, Computation, Bayesian Statistics, Ability
Peer reviewed
Köse, Alper; Dogan, C. Deha – International Journal of Evaluation and Research in Education, 2019
The aim of this study was to examine the precision of item parameter estimation across different sample sizes and test lengths under the three-parameter logistic (3PL) item response theory (IRT) model, where the trait measured by a test was not normally distributed or had a skewed distribution. In the study, the number of categories (1-0) and item…
Descriptors: Statistical Bias, Item Response Theory, Simulation, Accuracy
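The simulation design summarized above hinges on generating 3PL response data when the trait is skewed rather than normal. As a point of reference, here is a minimal sketch of that data-generating step only; it is not the authors' code, and the skew-normal choice, parameter ranges, and sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_3pl(n_persons=1000, n_items=20, skew=5.0):
    """Simulate dichotomous responses under the 3PL model with a skewed ability distribution.

    P(X = 1 | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    Ability is drawn from a skew-normal distribution (an illustrative choice) and
    standardized to mean 0 and variance 1, so only its shape differs from N(0, 1).
    """
    theta = stats.skewnorm.rvs(a=skew, size=n_persons, random_state=rng)
    theta = (theta - theta.mean()) / theta.std()             # isolate the effect of shape

    a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)     # discriminations
    b = rng.normal(0.0, 1.0, size=n_items)                   # difficulties
    c = rng.uniform(0.05, 0.25, size=n_items)                # pseudo-guessing lower asymptotes

    p = c + (1 - c) / (1 + np.exp(-a * (theta[:, None] - b)))   # persons x items
    responses = (rng.uniform(size=p.shape) < p).astype(int)
    return responses, (a, b, c)

responses, true_params = simulate_3pl()
```

Parameter recovery would then be examined by fitting the 3PL model (for example, by marginal maximum likelihood with its usual normal assumption on ability) and comparing the estimates against the generating values.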
Peer reviewed
Karadavut, Tugba; Cohen, Allan S.; Kim, Seock-Ho – Measurement: Interdisciplinary Research and Perspectives, 2020
Mixture Rasch (MixRasch) models conventionally assume normal distributions for latent ability. Previous research has shown that the assumption of normality is often unmet in educational and psychological measurement. When normality is assumed, asymmetry in the actual latent ability distribution has been shown to result in extraction of spurious…
Descriptors: Item Response Theory, Ability, Statistical Distributions, Sample Size
Peer reviewed
Soria, Krista M.; Stubblefield, Robin – Journal of College Student Development, 2015
Strengths-based approaches are flourishing across hundreds of higher education institutions as student affairs practitioners and educators seek to leverage students' natural talents so they can reach "previously unattained levels of personal excellence" (Lopez & Louis, 2009, p. 2). Even amid the growth of strengths-based approaches…
Descriptors: College Freshmen, Academic Persistence, Correlation, Online Surveys
MacDonald, George T. – ProQuest LLC, 2014
A simulation study was conducted to explore the performance of the linear logistic test model (LLTM) when the relationships between items and cognitive components were misspecified. Factors manipulated included percent of misspecification (0%, 1%, 5%, 10%, and 15%), form of misspecification (under-specification, balanced misspecification, and…
Descriptors: Simulation, Item Response Theory, Models, Test Items
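In the LLTM studied above, each Rasch item difficulty is expressed as a weighted sum of the cognitive components the item is assumed to require, so misspecifying the item-by-component relationships amounts to perturbing that design (Q) matrix. The sketch below illustrates what, say, 10% under-specification does to the implied difficulties; the matrix, weights, and proportions are hypothetical and not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

n_items, n_components = 30, 5
eta = rng.normal(0.0, 0.8, size=n_components)                  # component difficulty weights
Q_true = (rng.uniform(size=(n_items, n_components)) < 0.4).astype(int)  # true item-by-component matrix

# LLTM: each item difficulty is a linear combination of the components the item requires.
b_true = Q_true @ eta

def underspecify(Q, proportion, rng):
    """Drop a given proportion of the 1-entries in the Q-matrix (under-specification)."""
    Q = Q.copy()
    ones = np.argwhere(Q == 1)
    n_drop = int(round(proportion * len(ones)))
    for row, col in ones[rng.choice(len(ones), size=n_drop, replace=False)]:
        Q[row, col] = 0
    return Q

Q_mis = underspecify(Q_true, proportion=0.10, rng=rng)         # one of the manipulated levels
b_implied = Q_mis @ eta
print("RMSE of implied item difficulties:", np.sqrt(np.mean((b_implied - b_true) ** 2)))
```

In a study of this kind, responses would be simulated from the true difficulties and the LLTM refit under the misspecified matrix; the RMSE printed here only shows how far the implied difficulties drift from the generating values.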
Peer reviewed
Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim – Educational and Psychological Measurement, 2014
Typically, longitudinal growth modeling based on item response theory (IRT) requires repeated-measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses, and tests at multiple time points are constructed with unique…
Descriptors: Item Response Theory, Comparative Analysis, Test Items, Equated Scores
Peer reviewed
Nandakumar, Ratna; Yu, Feng – Journal of Educational Measurement, 1996
DIMTEST is a nonparametric statistical test procedure for assessing unidimensionality of binary item response data that uses the T-statistic of W. F. Stout (1987). This study investigates the performance of the T-statistic with respect to different shapes of ability distributions and confirms its nonparametric nature. (SLD)
Descriptors: Ability, Nonparametric Statistics, Statistical Distributions, Validity
Monahan, Patrick – 2000
Previous studies that investigated the effect of unequal ability distributions on the Type I error (TIE) of the Mantel-Haenszel chi-square test for detecting differential item functioning (DIF) simulated ability distributions that differed only in means. This simulation study suggests that the magnitude of TIE inflation is increased, and the type…
Descriptors: Ability, Chi Square, Item Bias, Simulation
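For context, the Mantel-Haenszel test referenced above aggregates, across matching-score strata, 2x2 tables of group (reference/focal) by item response (correct/incorrect). A generic textbook version of the statistic is sketched below, assuming examinees are matched on total score; it is not the study's implementation.

```python
import numpy as np

def mantel_haenszel_dif(item, total, group):
    """Mantel-Haenszel chi-square and common odds ratio for one studied item.

    item  : 0/1 responses to the studied item
    total : matching variable (e.g., total test score)
    group : 0 = reference group, 1 = focal group
    """
    A = E = V = num = den = 0.0
    for k in np.unique(total):
        stratum = total == k
        a = np.sum((item == 1) & (group == 0) & stratum)   # reference, correct
        b = np.sum((item == 0) & (group == 0) & stratum)   # reference, incorrect
        c = np.sum((item == 1) & (group == 1) & stratum)   # focal, correct
        d = np.sum((item == 0) & (group == 1) & stratum)   # focal, incorrect
        n = a + b + c + d
        if n < 2 or min(a + b, c + d, a + c, b + d) == 0:
            continue                                        # stratum carries no information
        A += a                                              # observed reference-correct count
        E += (a + b) * (a + c) / n                          # its expectation under no DIF
        V += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
        num += a * d / n
        den += b * c / n
    chi2 = (abs(A - E) - 0.5) ** 2 / V                      # continuity-corrected MH chi-square
    alpha_mh = num / den                                    # MH common odds ratio
    return chi2, alpha_mh
```

The common odds ratio is conventionally rescaled to the ETS delta metric as Delta_MH = -2.35 * ln(alpha_MH); Type I error studies such as this one track how often the chi-square exceeds its critical value for items simulated without DIF.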
Oshima, T. C.; Davey, T. C. – 1994
This paper evaluated multidimensional linking procedures with which multidimensional test data from two separate calibrations were put on a common scale. Data were simulated with known ability distributions varying on two factors which made linking necessary: mean vector differences and variance-covariance (v-c) matrix differences. After the…
Descriptors: Ability, Estimation (Mathematics), Evaluation Methods, Matrices
Clauser, Brian; And Others – 1992
Previous research examining the effects of reducing the number of score groups used in the matching criterion of the Mantel-Haenszel procedure, when screening for differential item functioning, has produced ambiguous results. The goal of this study was to resolve the ambiguity by examining the problem with a simulated data set. The main results…
Descriptors: Ability, Comparative Analysis, Computer Simulation, Item Bias
Nandakumar, Ratna; Junker, Brian W. – 1993
In many large-scale educational assessments it is of interest to compare the distribution of latent abilities of different subpopulations, and track these distributions over time to monitor educational progress. B. Junker, together with two colleagues, has developed a simple scheme, based on the proportion correct score, for smoothly approximating…
Descriptors: Ability, Elementary Secondary Education, Estimation (Mathematics), Mathematical Models
Peer reviewed
Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2001
Provides and illustrates a method to compute the expected number of misclassifications of examinees using three-parameter item response theory and two state classifications (mastery or nonmastery). The method uses the standard error and the expected examinee ability distribution. (SLD)
Descriptors: Ability, Classification, Computation, Error of Measurement
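The abstract above describes a concrete computation: for each true ability value, the conditional standard error of the ability estimate gives (via a normal approximation) the chance of landing on the wrong side of the mastery cut score, and that chance is weighted by the assumed ability distribution. A hedged sketch along those lines follows, using the standard 3PL test information function; the quadrature grid, cut score, and standard normal ability distribution are placeholder assumptions, not the article's worked example.

```python
import numpy as np
from scipy import stats

def expected_misclassifications(a, b, c, cut, n_examinees, D=1.7,
                                ability=stats.norm(0, 1)):
    """Expected number of mastery/nonmastery misclassifications under the 3PL model.

    a, b, c : arrays of 3PL discrimination, difficulty, and pseudo-guessing parameters
    cut     : cut score on the theta scale
    ability : assumed examinee ability distribution (frozen scipy distribution)
    """
    theta = np.linspace(-4, 4, 401)                        # quadrature grid over ability

    # 3PL response probabilities and test information at each grid point.
    p = c + (1 - c) / (1 + np.exp(-D * a * (theta[:, None] - b)))
    info = (D * a) ** 2 * ((p - c) / (1 - c)) ** 2 * (1 - p) / p
    se = 1 / np.sqrt(info.sum(axis=1))                     # conditional SE of the ability estimate

    # Normal approximation: probability the estimate falls on the wrong side of the cut.
    p_est_above = 1 - stats.norm.cdf((cut - theta) / se)
    p_misclass = np.where(theta >= cut, 1 - p_est_above, p_est_above)

    weights = ability.pdf(theta)
    weights /= weights.sum()                               # normalize over the grid
    return n_examinees * np.sum(p_misclass * weights)
```

Calling the function with, for example, a 40-item parameter set, cut=0.0, and n_examinees=2000 returns the expected number of examinees classified on the wrong side of the cut under those assumptions.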
Monaco, Malina – 1997
The effects of skewed theta distributions on indices of differential item functioning (DIF) were studied, comparing Mantel-Haenszel (N. Mantel and W. Haenszel, 1959) and DFIT (N. S. Raju, W. J. van der Linden, and P. F. Fleer) (noncompensatory DIF). The significance of the study is that in educational and psychological data, the distributions one…
Descriptors: Ability, Estimation (Mathematics), Item Bias, Monte Carlo Methods
Yamamoto, Kentaro; Muraki, Eiji – 1991
The extent to which properties of the ability scale and the form of the latent trait distribution influence the estimated item parameters of item response theory (IRT) was investigated using real and simulated data. Simulated data included 5,000 ability values randomly drawn from the standard normal distribution. Real data included the results for…
Descriptors: Ability, Estimation (Mathematics), Graphs, Item Response Theory