Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 5
Since 2006 (last 20 years): 9
Descriptor
Prediction: 17
Models: 11
Measurement Techniques: 5
Regression (Statistics): 5
Scores: 5
Correlation: 4
Item Response Theory: 4
Sample Size: 4
Simulation: 4
Classification: 3
Comparative Analysis: 3
Source
Educational and Psychological Measurement: 17
Author
Ayers, Elizabeth: 1
Beauducel, André: 1
Bogaert, Jasper: 1
Cao, Pei: 1
Dagenais, Denyse L.: 1
Dawis, Rene V.: 1
Ferrando, Pere J.: 1
Frey, Andreas: 1
Gordon, Michael E.: 1
Hartig, Johannes: 1
Hauser, Carl: 1
Publication Type
Journal Articles: 14
Reports - Research: 9
Reports - Evaluative: 4
Reports - Descriptive: 1
Speeches/Meeting Papers: 1
Education Level
Elementary Secondary Education: 1
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2022
In the context of Bayesian factor analysis, it is possible to compute plausible values, which might be used as covariates or predictors or to provide individual scores for the Bayesian latent variables. Previous simulation studies ascertained the validity of mean plausible values by the mean squared difference of the mean plausible values and the…
Descriptors: Bayesian Statistics, Factor Analysis, Prediction, Simulation
Liu, Xiaoling; Cao, Pei; Lai, Xinzhen; Wen, Jianbing; Yang, Yanyun – Educational and Psychological Measurement, 2023
Percentage of uncontaminated correlations (PUC), explained common variance (ECV), and omega hierarchical ([omega]H) have been used to assess the degree to which a scale is essentially unidimensional and to predict structural coefficient bias when a unidimensional measurement model is fit to multidimensional data. The usefulness of these indices…
Descriptors: Correlation, Measurement Techniques, Prediction, Regression (Statistics)
Jang, Yoona; Hong, Sehee – Educational and Psychological Measurement, 2023
The purpose of this study was to evaluate the degree of classification quality in the basic latent class model when covariates are either included or are not included in the model. To accomplish this task, Monte Carlo simulations were conducted in which the results of models with and without a covariate were compared. Based on these simulations,…
Descriptors: Classification, Models, Prediction, Sample Size
Miyazaki, Yasuo; Kamata, Akihito; Uekawa, Kazuaki; Sun, Yizhi – Educational and Psychological Measurement, 2022
This paper investigated consequences of measurement error in the pretest on the estimate of the treatment effect in a pretest-posttest design with the analysis of covariance (ANCOVA) model, focusing on both the direction and magnitude of its bias. Some prior studies have examined the magnitude of the bias due to measurement error and suggested…
Descriptors: Error of Measurement, Pretesting, Pretests Posttests, Statistical Bias
Bogaert, Jasper; Loh, Wen Wei; Rosseel, Yves – Educational and Psychological Measurement, 2023
Factor score regression (FSR) is widely used as a convenient alternative to traditional structural equation modeling (SEM) for assessing structural relations between latent variables. But when latent variables are simply replaced by factor scores, biases in the structural parameter estimates often have to be corrected, due to the measurement error…
Descriptors: Factor Analysis, Regression (Statistics), Structural Equation Models, Error of Measurement
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
Hartig, Johannes; Frey, Andreas; Nold, Gunter; Klieme, Eckhard – Educational and Psychological Measurement, 2012
The article compares three different methods to estimate effects of task characteristics and to use these estimates for model-based proficiency scaling: prediction of item difficulties from the Rasch model, the linear logistic test model (LLTM), and an LLTM including random item effects (LLTM+e). The methods are applied to empirical data from a…
Descriptors: Item Response Theory, Models, Methods, Computation
Knofczynski, Gregory T.; Mundfrom, Daniel – Educational and Psychological Measurement, 2008
When using multiple regression for prediction purposes, the issue of minimum required sample size often needs to be addressed. Using a Monte Carlo simulation, models with varying numbers of independent variables were examined and minimum sample sizes were determined for multiple scenarios at each number of independent variables. The scenarios…
Descriptors: Sample Size, Monte Carlo Methods, Predictor Variables, Prediction
Ayers, Elizabeth; Junker, Brian – Educational and Psychological Measurement, 2008
Interest in end-of-year accountability exams has increased dramatically since the passing of the No Child Left Behind Act in 2001. With this increased interest comes a desire to use student data collected throughout the year to estimate student proficiency and predict how well they will perform on end-of-year exams. This article uses student…
Descriptors: Federal Legislation, Tests, Scores, Academic Achievement

Shields, W. S. – Educational and Psychological Measurement, 1974
A procedure for item analysis using distance clustering is described. Items are grouped according to the predominant factors measured, regardless of what they are. The procedure provides an efficient method of treating unanswered items. (Author/RC)
Descriptors: Algorithms, Cluster Grouping, Item Analysis, Models

Sturman, Michael C. – Educational and Psychological Measurement, 1999
Compares eight models for analyzing count data through simulation in the context of prediction of absenteeism to indicate the extent to which each model produces false positives. Results suggest that ordinary least-squares regression does not produce more false positives than expected by chance. The Tobit and Poisson models do yield too many false…
Descriptors: Attendance, Individual Differences, Least Squares Statistics, Models

Huberty, Carl J.; Lowman, Laureen L. – Educational and Psychological Measurement, 1997
Predictive discriminant analysis and descriptive discriminant analysis are described, and the use of three popular statistical packages to obtain computational results for each type of discriminant analysis is reviewed. Results from two Biomedical Computer Program (BMDP), four Statistical Analysis System, and two Statistical Package for the Social…
Descriptors: Computer Software, Discriminant Analysis, Mathematical Models, Prediction

Ferrando, Pere J.; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2001
Describes a Windows program for checking the suitability of unidimensional logistic item response models for binary and ordered polytomous responses with respect to a given set of data. The program is based on predicting the observed test score distributions from the item characteristic curves. (SLD)
Descriptors: Computer Software, Item Response Theory, Mathematical Models, Prediction

Whitely, Susan E.; Dawis, Rene V. – Educational and Psychological Measurement, 1975
Descriptors: Ability, Aptitude, Measurement Techniques, Models

Pryor, Norman M.; Gordon, Michael E. – Educational and Psychological Measurement, 1974
Descriptors: Analysis of Variance, Courses, Educational Policy, Grade Point Average