Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 6 |
Descriptor
Ability | 15 |
Statistical Distributions | 15 |
Item Response Theory | 9 |
Test Items | 5 |
Comparative Analysis | 3 |
Computation | 3 |
Mathematical Models | 3 |
Models | 3 |
Simulation | 3 |
Achievement Tests | 2 |
Bayesian Statistics | 2 |
Author
Karadavut, Tugba | 2 |
Ablard, Karen E. | 1 |
Cai, Li | 1 |
Camilli, Gregory | 1 |
Chi, Eunlim | 1 |
Cohen, Allan S. | 1 |
Dogan, C. Deha | 1 |
Feinberg, Richard A. | 1 |
Feldt, Leonard S. | 1 |
Hillerich, Robert L. | 1 |
Kim, Seock-Ho | 1 |
Publication Type
Journal Articles | 15 |
Reports - Research | 7 |
Reports - Evaluative | 6 |
Reports - Descriptive | 2 |
Speeches/Meeting Papers | 1 |
Education Level
Secondary Education | 2 |
Elementary Education | 1 |
Grade 7 | 1 |
Higher Education | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Postsecondary Education | 1 |
Assessments and Surveys
Program for International… | 1 |
Feinberg, Richard A.; von Davier, Matthias – Journal of Educational and Behavioral Statistics, 2020
The literature showing that subscores fail to add value is vast; yet despite their typical redundancy and the frequent presence of substantial statistical errors, many stakeholders remain convinced of their necessity. This article describes a method for identifying and reporting unexpectedly high or low subscores by comparing each examinee's…
Descriptors: Scores, Probability, Statistical Distributions, Ability
Karadavut, Tugba – International Journal of Assessment Tools in Education, 2019
Item Response Theory (IRT) models traditionally assume a normal distribution for ability. Although normality is often a reasonable assumption for ability, it is rarely met for observed scores in educational and psychological measurement. Assumptions regarding ability distribution were previously shown to have an effect on IRT parameter estimation.…
Descriptors: Item Response Theory, Computation, Bayesian Statistics, Ability
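The normality assumption discussed in this abstract can be illustrated by contrasting a normal latent-ability distribution with a skewed one. A minimal sketch using SciPy's skew-normal distribution; the shape parameter and sample size are arbitrary illustrative choices, not values from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Normal ability distribution: the conventional IRT assumption.
theta_normal = rng.standard_normal(10_000)

# Skew-normal ability: shape parameter a=5 produces pronounced right skew
# (an arbitrary illustrative value, not taken from the study).
theta_skewed = stats.skewnorm.rvs(a=5, size=10_000, random_state=rng)

# Sample skewness near 0 for the normal draw, clearly positive for the other.
print(f"normal skewness: {stats.skew(theta_normal):+.3f}")
print(f"skewed skewness: {stats.skew(theta_skewed):+.3f}")
```

Fitting an IRT model that assumes a normal prior to data generated from the skewed distribution is the kind of mismatch whose effect on parameter estimation the study examines.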
Köse, Alper; Dogan, C. Deha – International Journal of Evaluation and Research in Education, 2019
The aim of this study was to examine the precision of item parameter estimation across different sample sizes and test lengths under the three-parameter logistic (3PL) item response theory (IRT) model, where the trait measured by the test was not normally distributed, i.e., had a skewed distribution. In the study, the number of categories (1-0) and item…
Descriptors: Statistical Bias, Item Response Theory, Simulation, Accuracy
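The 3PL model used in such simulations gives the probability of a correct response as P(theta) = c + (1 - c) / (1 + exp(-a(theta - b))), with discrimination a, difficulty b, and guessing floor c. A minimal sketch of the response function; the parameter values are illustrative, not those of the study:

```python
import math

def prob_3pl(theta: float, a: float, b: float, c: float) -> float:
    """3PL item response function: guessing floor c, discrimination a,
    difficulty b. Returns the probability of a correct response."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the logistic term equals 0.5, so P = c + (1 - c) / 2.
p = prob_3pl(theta=0.0, a=1.2, b=0.0, c=0.2)
print(round(p, 3))  # 0.6
```

In a recovery study, responses are simulated from known (a, b, c) values and abilities drawn from the chosen (possibly skewed) distribution, and the estimated parameters are compared against the generating ones.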
Karadavut, Tugba; Cohen, Allan S.; Kim, Seock-Ho – Measurement: Interdisciplinary Research and Perspectives, 2020
Mixture Rasch (MixRasch) models conventionally assume normal distributions for latent ability. Previous research has shown that the assumption of normality is often unmet in educational and psychological measurement. When normality is assumed, asymmetry in the actual latent ability distribution has been shown to result in extraction of spurious…
Descriptors: Item Response Theory, Ability, Statistical Distributions, Sample Size
Soria, Krista M.; Stubblefield, Robin – Journal of College Student Development, 2015
Strengths-based approaches are flourishing across hundreds of higher education institutions as student affairs practitioners and educators seek to leverage students' natural talents so they can reach "previously unattained levels of personal excellence" (Lopez & Louis, 2009, p. 2). Even amid the growth of strengths-based approaches…
Descriptors: College Freshmen, Academic Persistence, Correlation, Online Surveys
Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim – Educational and Psychological Measurement, 2014
Typically a longitudinal growth modeling based on item response theory (IRT) requires repeated measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses and tests at multiple time points are constructed with unique…
Descriptors: Item Response Theory, Comparative Analysis, Test Items, Equated Scores

Nandakumar, Ratna; Yu, Feng – Journal of Educational Measurement, 1996
DIMTEST is a nonparametric statistical test procedure for assessing unidimensionality of binary item response data that uses the T-statistic of W. F. Stout (1987). This study investigates the performance of the T-statistic with respect to different shapes of ability distributions and confirms its nonparametric nature. (SLD)
Descriptors: Ability, Nonparametric Statistics, Statistical Distributions, Validity

Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2001
Provides and illustrates a method to compute the expected number of misclassifications of examinees using three-parameter item response theory and two state classifications (mastery or nonmastery). The method uses the standard error and the expected examinee ability distribution. (SLD)
Descriptors: Ability, Classification, Computation, Error of Measurement
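The general idea summarized here can be sketched as follows: if an examinee's ability estimate has a standard error and the classification uses a single cut score, the probability that the true ability lies on the other side of the cut is a normal tail probability, and summing those probabilities over examinees gives an expected number of misclassifications. A simplified single-cut sketch of that idea, not Rudner's exact procedure; the estimates and standard errors below are made up for illustration:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def misclassification_prob(theta_hat: float, se: float, cut: float) -> float:
    """Probability the true ability falls on the other side of the cut,
    treating the estimate as Normal(true theta, se)."""
    z = (cut - theta_hat) / se
    if theta_hat >= cut:            # classified as master
        return normal_cdf(z)        # chance true ability is below the cut
    return 1.0 - normal_cdf(z)      # classified as non-master; chance it is above

# Expected misclassifications for a group = sum of individual probabilities.
estimates = [(-0.8, 0.30), (0.1, 0.25), (0.9, 0.35)]  # (theta_hat, SE), illustrative
expected = sum(misclassification_prob(t, s, cut=0.0) for t, s in estimates)
print(round(expected, 3))
```

Estimates far from the cut relative to their standard error contribute almost nothing; estimates near the cut dominate the expected count.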

Tate, Richard L.; King, F. J. – Journal of Educational Measurement, 1994
The precision of the group-based item-response theory (IRT) model applied to school ability estimation is described, assuming use of Bayesian estimation with precision represented by the standard deviation of the posterior distribution. Similarities with and differences between the school-based model and the individual-level IRT are explored. (SLD)
Descriptors: Ability, Bayesian Statistics, Estimation (Mathematics), Item Response Theory
Hillerich, Robert L. – Principal, 1990
Defines grade level as an age grouping that yields an achievement distribution approximating a normal curve, with the distribution average at grade level in a typical school. With an average teacher, an average child gains a year. Educators must accept this normal range of reading achievement and adjust instruction to it. Includes eight…
Descriptors: Ability, Age Grade Placement, Elementary Education, Individual Differences

Feldt, Leonard S. – Applied Measurement in Education, 1993
The recommendation that the reliability of multiple-choice tests will be enhanced if the distribution of item difficulties is concentrated at approximately 0.50 is reinforced and extended in this article by viewing the 0/1 item scoring as a dichotomization of an underlying normally distributed ability score. (SLD)
Descriptors: Ability, Difficulty Level, Guessing (Tests), Mathematical Models

Seong, Tae-Je – Applied Psychological Measurement, 1990
The sensitivity of marginal maximum likelihood estimation of item and ability (theta) parameters was examined when prior ability distributions were not matched to underlying ability distributions. Thirty sets of 45-item test data were generated. Conditions affecting the accuracy of estimation are discussed. (SLD)
Descriptors: Ability, Computer Simulation, Equations (Mathematics), Estimation (Mathematics)

Smith, Richard M. – Educational and Psychological Measurement, 1994
Simulated data are used to assess the appropriateness of using separate calibration and between-fit approaches to detecting item bias in the Rasch rating scale model. Results indicate that Type I error rates for the null distribution hold even when there are different ability levels for reference and focal groups. (SLD)
Descriptors: Ability, Goodness of Fit, Identification, Item Bias

Ablard, Karen E.; Mills, Carol J. – Journal of Youth and Adolescence, 1996
Beliefs of 153 academically talented students in grades 3 through 11 about the stability of intelligence paralleled a normal distribution. About half had easily modified views, and 9% were at risk of underachievement based on self-perceptions of low ability and the belief that intelligence is stable. (SLD)
Descriptors: Ability, Academically Gifted, Adolescents, Beliefs

Camilli, Gregory – Applied Psychological Measurement, 1992
A mathematical model is proposed to describe how group differences in distributions of abilities, which are distinct from the target ability, influence the probability of a correct item response. In the multidimensional approach, differential item functioning is considered a function of the educational histories of the examinees. (SLD)
Descriptors: Ability, Comparative Analysis, Equations (Mathematics), Factor Analysis