Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 2
  Since 2016 (last 10 years): 2
  Since 2006 (last 20 years): 6
Descriptor
  Models: 11
  Prediction: 11
  Measurement Techniques: 4
  Classification: 3
  Comparative Analysis: 3
  Item Response Theory: 3
  Ability: 2
  Correlation: 2
  Evaluation Methods: 2
  Individual Differences: 2
  Monte Carlo Methods: 2
Source
  Educational and Psychological Measurement: 11
Author
  Ayers, Elizabeth: 1
  Cao, Pei: 1
  Dawis, Rene V.: 1
  Frey, Andreas: 1
  Gordon, Michael E.: 1
  Hartig, Johannes: 1
  Hauser, Carl: 1
  He, Wei: 1
  Hong, Sehee: 1
  Jang, Yoona: 1
  Junker, Brian: 1
Publication Type
  Journal Articles: 8
  Reports - Research: 5
  Reports - Evaluative: 3
  Speeches/Meeting Papers: 1
Education Level
  Elementary Secondary Education: 1
Location
  Germany: 1
Liu, Xiaoling; Cao, Pei; Lai, Xinzhen; Wen, Jianbing; Yang, Yanyun – Educational and Psychological Measurement, 2023
Percentage of uncontaminated correlations (PUC), explained common variance (ECV), and omega hierarchical (ω_H) have been used to assess the degree to which a scale is essentially unidimensional and to predict structural coefficient bias when a unidimensional measurement model is fit to multidimensional data. The usefulness of these indices…
Descriptors: Correlation, Measurement Techniques, Prediction, Regression (Statistics)
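For readers unfamiliar with the three indices, here is a minimal sketch of how they are computed, assuming a standardized bifactor solution in which every item loads on one general factor and exactly one group factor; the loading values are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical loadings: 9 items, one general factor, 3 group factors of 3 items.
general = np.array([.6, .6, .6, .5, .5, .5, .7, .7, .7])
groups = [np.array([.4, .4, .4]),   # items 1-3
          np.array([.3, .3, .3]),   # items 4-6
          np.array([.5, .5, .5])]   # items 7-9

n_items = general.size
group_sizes = [g.size for g in groups]

# ECV: share of common variance explained by the general factor.
ecv = np.sum(general**2) / (np.sum(general**2) + sum(np.sum(g**2) for g in groups))

# omega_H: general-factor variance over total variance (standardized items,
# so each item's uniqueness is 1 minus its squared loadings).
specific_sq = np.concatenate([g**2 for g in groups])
uniqueness = 1 - general**2 - specific_sq
omega_h = np.sum(general)**2 / (np.sum(general)**2
                                + sum(np.sum(g)**2 for g in groups)
                                + np.sum(uniqueness))

# PUC: proportion of item pairs whose correlation is "uncontaminated" by a
# shared group factor, i.e., pairs drawn from different group factors.
total_pairs = n_items * (n_items - 1) / 2
within_pairs = sum(k * (k - 1) / 2 for k in group_sizes)
puc = 1 - within_pairs / total_pairs

print(f"ECV = {ecv:.3f}, omega_H = {omega_h:.3f}, PUC = {puc:.3f}")
```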
Jang, Yoona; Hong, Sehee – Educational and Psychological Measurement, 2023
The purpose of this study was to evaluate the degree of classification quality in the basic latent class model when covariates are or are not included in the model. To accomplish this task, Monte Carlo simulations were conducted in which the results of models with and without a covariate were compared. Based on these simulations,…
Descriptors: Classification, Models, Prediction, Sample Size
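A common summary of classification quality in latent class models is the relative-entropy index; a minimal sketch follows (an illustration of the metric, not the study's code), using made-up posterior probabilities.

```python
import numpy as np

def relative_entropy(post):
    """E_K = 1 - sum(-p*ln p) / (N*ln K) for an (N, K) posterior matrix.
    Near 1 means crisp class assignment; near 0 means poor separation."""
    n, k = post.shape
    p = np.clip(post, 1e-12, 1.0)          # guard against log(0)
    return 1 - np.sum(-p * np.log(p)) / (n * np.log(k))

# Crisp vs. ambiguous posteriors for a hypothetical 2-class model.
crisp = np.array([[.98, .02], [.05, .95], [.99, .01]])
fuzzy = np.array([[.60, .40], [.55, .45], [.52, .48]])
print(relative_entropy(crisp))   # close to 1
print(relative_entropy(fuzzy))   # close to 0
```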
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
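One statistic an analyst might inspect in such a review is the unweighted mean-square (outfit) of a field-test item; the sketch below assumes Rasch scaling and simulated data, and is not the authors' review system. Values far from 1 flag misfit.

```python
import numpy as np

def rasch_p(theta, b):
    """P(correct) under the Rasch model for ability theta, difficulty b."""
    return 1 / (1 + np.exp(-(theta - b)))

def outfit(responses, theta, b):
    """Outfit mean-square: average squared standardized residual."""
    p = rasch_p(theta, b)
    z2 = (responses - p) ** 2 / (p * (1 - p))
    return z2.mean()

rng = np.random.default_rng(0)
theta = rng.normal(size=2000)        # examinee abilities
b = 0.3                              # provisional field-test item difficulty
x = (rng.random(2000) < rasch_p(theta, b)).astype(float)
print(outfit(x, theta, b))           # near 1 for a well-fitting item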
Hartig, Johannes; Frey, Andreas; Nold, Gunter; Klieme, Eckhard – Educational and Psychological Measurement, 2012
The article compares three different methods to estimate effects of task characteristics and to use these estimates for model-based proficiency scaling: prediction of item difficulties from the Rasch model, the linear logistic test model (LLTM), and an LLTM including random item effects (LLTM+e). The methods are applied to empirical data from a…
Descriptors: Item Response Theory, Models, Methods, Computation
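In standard notation (my summary, not the article's own equations), the three approaches differ only in how the item difficulty β_i is modeled, with q_ik the weight of task characteristic k in item i and η_k its estimated effect:

```latex
\begin{align}
  P(X_{pi}=1) &= \frac{\exp(\theta_p - \beta_i)}{1 + \exp(\theta_p - \beta_i)}
      && \text{(Rasch, free } \beta_i\text{)}\\
  \beta_i &= \sum_{k} q_{ik}\,\eta_k
      && \text{(LLTM)}\\
  \beta_i &= \sum_{k} q_{ik}\,\eta_k + \varepsilon_i,\quad
      \varepsilon_i \sim N(0,\sigma^2_\varepsilon)
      && \text{(LLTM+e)}
\end{align}
```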
Knofczynski, Gregory T.; Mundfrom, Daniel – Educational and Psychological Measurement, 2008
When using multiple regression for prediction purposes, the issue of minimum required sample size often needs to be addressed. Using a Monte Carlo simulation, models with varying numbers of independent variables were examined and minimum sample sizes were determined for multiple scenarios at each number of independent variables. The scenarios…
Descriptors: Sample Size, Monte Carlo Methods, Predictor Variables, Prediction
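The Monte Carlo logic can be sketched as follows, under illustrative assumptions (four independent predictors, population R² of .25) rather than the published design: draw repeated samples at each n, fit the regression, and track how well the sample equation predicts in fresh data; the minimum n is where that criterion stabilizes near the population value.

```python
import numpy as np

rng = np.random.default_rng(1)
p, rho2 = 4, 0.25                        # predictors and assumed population R^2
beta = np.full(p, np.sqrt(rho2 / p))     # equal weights, independent predictors

def cross_r2(n, reps=500):
    """Mean squared correlation between predictions and outcomes in new data."""
    out = []
    for _ in range(reps):
        X = rng.normal(size=(n, p))
        y = X @ beta + rng.normal(scale=np.sqrt(1 - rho2), size=n)
        bhat, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
        Xn = rng.normal(size=(n, p))     # fresh validation sample
        yn = Xn @ beta + rng.normal(scale=np.sqrt(1 - rho2), size=n)
        yhat = np.column_stack([np.ones(n), Xn]) @ bhat
        out.append(np.corrcoef(yhat, yn)[0, 1] ** 2)
    return np.mean(out)

for n in (20, 50, 100, 200, 400):
    print(n, round(cross_r2(n), 3))      # approaches rho2 = .25 as n grows
```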
Ayers, Elizabeth; Junker, Brian – Educational and Psychological Measurement, 2008
Interest in end-of-year accountability exams has increased dramatically since the passing of the No Child Left Behind Act in 2001. With this increased interest comes a desire to use student data collected throughout the year to estimate student proficiency and predict how well they will perform on end-of-year exams. This article uses student…
Descriptors: Federal Legislation, Tests, Scores, Academic Achievement

Shields, W. S. – Educational and Psychological Measurement, 1974
A procedure for item analysis using distance clustering is described. Items are grouped according to the predominant factors measured, regardless of what they are. The procedure provides an efficient method of treating unanswered items. (Author/RC)
Descriptors: Algorithms, Cluster Grouping, Item Analysis, Models
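A minimal sketch of the general idea (my reading of the abstract, not Shields's specific procedure): convert inter-item correlations to distances and group items hierarchically. Pairwise-complete correlations are one way to tolerate unanswered items; the simulated data and two-cluster cut are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
n, half = 300, 5
f1, f2 = rng.normal(size=n), rng.normal(size=n)    # two latent factors
items = np.column_stack([f1[:, None] + rng.normal(size=(n, half)),
                         f2[:, None] + rng.normal(size=(n, half))])
items[rng.random(items.shape) < 0.05] = np.nan     # scattered omitted responses

# Pairwise-complete correlation matrix.
k = items.shape[1]
r = np.eye(k)
for i in range(k):
    for j in range(i + 1, k):
        mask = ~np.isnan(items[:, i]) & ~np.isnan(items[:, j])
        r[i, j] = r[j, i] = np.corrcoef(items[mask, i], items[mask, j])[0, 1]

dist = 1 - np.abs(r)                 # strong correlation = small distance
np.fill_diagonal(dist, 0.0)
clusters = fcluster(linkage(squareform(dist, checks=False), method="average"),
                    t=2, criterion="maxclust")
print(clusters)                      # items grouped by their dominant factor
```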

Sturman, Michael C. – Educational and Psychological Measurement, 1999
Compares eight models for analyzing count data through simulation in the context of prediction of absenteeism to indicate the extent to which each model produces false positives. Results suggest that ordinary least-squares regression does not produce more false positives than expected by chance. The Tobit and Poisson models do yield too many false…
Descriptors: Attendance, Individual Differences, Least Squares Statistics, Models
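The false-positive check can be sketched under illustrative assumptions (overdispersed counts standing in for absenteeism, and only two of the paper's eight models): simulate counts unrelated to a predictor, fit each model, and record how often the predictor tests significant at alpha = .05. Rates well above .05 indicate excess false positives, as the Poisson model's understated standard errors produce here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
alpha, reps, n = 0.05, 500, 200
hits = {"ols": 0, "poisson": 0}
for _ in range(reps):
    x = rng.normal(size=n)
    # Overdispersed counts (mean 3, variance 7.5), independent of x.
    y = rng.negative_binomial(2, 0.4, size=n).astype(float)
    X = sm.add_constant(x)
    if sm.OLS(y, X).fit().pvalues[1] < alpha:
        hits["ols"] += 1
    if sm.GLM(y, X, family=sm.families.Poisson()).fit().pvalues[1] < alpha:
        hits["poisson"] += 1
print({k: v / reps for k, v in hits.items()})   # compare each rate with alpha
```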

Whitely, Susan E.; Dawis, Rene V. – Educational and Psychological Measurement, 1975
Descriptors: Ability, Aptitude, Measurement Techniques, Models

Pryor, Norman M.; Gordon, Michael E. – Educational and Psychological Measurement, 1974
Descriptors: Analysis of Variance, Courses, Educational Policy, Grade Point Average

Tan, E. S.; And Others – Educational and Psychological Measurement, 1995
An optimal unbiased classification rule is proposed based on a longitudinal model for the measurement of change in ability. In general, the rule predicts future level of knowledge by using information about level of knowledge at entrance, its rate of growth, and the amount of within-individual variation. (SLD)
Descriptors: Ability, Change, Classification, Individual Differences
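A minimal sketch of the idea (an illustration, not the authors' optimal rule, and with hypothetical numbers): predict a student's future level from entry level plus estimated growth rate, then compare the prediction with a mastery cutoff while allowing for within-person measurement variation.

```python
import numpy as np
from scipy.stats import norm

def classify(entry, rate, t_future, cutoff, within_sd, min_prob=0.8):
    """Classify 'master' if P(future score >= cutoff) is at least min_prob."""
    pred = entry + rate * t_future                    # projected future level
    p_master = 1 - norm.cdf(cutoff, loc=pred, scale=within_sd)
    return ("master" if p_master >= min_prob else "nonmaster"), p_master

print(classify(entry=40, rate=2.5, t_future=10, cutoff=60, within_sd=4))
print(classify(entry=40, rate=1.0, t_future=10, cutoff=60, within_sd=4))
```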