Showing 1 to 15 of 72 results
Peer reviewed
Direct link
Monica Casella; Pasquale Dolce; Michela Ponticorvo; Nicola Milano; Davide Marocco – Educational and Psychological Measurement, 2024
Short-form development is an important topic in psychometric research, one that confronts researchers with methodological choices at several steps. The statistical techniques traditionally used for shortening tests, which belong to the so-called exploratory model, make assumptions that are not always met by psychological data. This article proposes a…
Descriptors: Artificial Intelligence, Test Construction, Test Format, Psychometrics
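The truncated abstract does not say which technique the article proposes, so as background only, here is a minimal sketch of the traditional exploratory shortening it argues against: keeping the k items with the highest corrected item-total correlations. All data and values below are simulated, not from the article.

```python
# A minimal sketch (not the authors' method): traditional exploratory
# short-form construction by retaining the k items with the highest
# corrected item-total correlations. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items, k = 500, 20, 8

# Simulate unidimensional Likert-type responses (hypothetical data).
theta = rng.normal(size=(n_persons, 1))
loadings = rng.uniform(0.4, 0.9, size=(1, n_items))
responses = theta @ loadings + rng.normal(scale=0.7, size=(n_persons, n_items))

total = responses.sum(axis=1)
item_total_r = np.array([
    # corrected: the item is removed from the total before correlating
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(n_items)
])
short_form = np.argsort(item_total_r)[-k:]   # keep the k best items
print("retained items:", sorted(short_form.tolist()))
```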
Peer reviewed
Direct link
Liu, Xiaoling; Cao, Pei; Lai, Xinzhen; Wen, Jianbing; Yang, Yanyun – Educational and Psychological Measurement, 2023
Percentage of uncontaminated correlations (PUC), explained common variance (ECV), and omega hierarchical (ω_H) have been used to assess the degree to which a scale is essentially unidimensional and to predict structural coefficient bias when a unidimensional measurement model is fit to multidimensional data. The usefulness of these indices…
Descriptors: Correlation, Measurement Techniques, Prediction, Regression (Statistics)
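The three indices named above have standard closed forms. A minimal sketch, computed from a hypothetical standardized bifactor loading pattern (not values from the article):

```python
import numpy as np

# 9 standardized items, 3 group factors of 3 items each (hypothetical values)
general = np.array([.6, .6, .6, .5, .5, .5, .7, .7, .7])
groups = [np.array([.4, .4, .4]), np.array([.3, .3, .3]), np.array([.5, .5, .5])]
uniq = 1 - general**2 - np.concatenate([g**2 for g in groups])

# ECV: share of common variance attributable to the general factor
ecv = (general**2).sum() / ((general**2).sum() + sum((g**2).sum() for g in groups))

# omega_H: general-factor variance over total composite variance
num = general.sum() ** 2
omega_h = num / (num + sum(g.sum() ** 2 for g in groups) + uniq.sum())

# PUC: proportion of item pairs whose correlation is untouched by group factors
n_items = general.size
within = sum(g.size * (g.size - 1) / 2 for g in groups)
puc = 1 - within / (n_items * (n_items - 1) / 2)
print(f"ECV={ecv:.3f}  omega_H={omega_h:.3f}  PUC={puc:.3f}")
```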
Peer reviewed
Direct link
Sanaz Nazari; Walter L. Leite; A. Corinne Huggins-Manley – Educational and Psychological Measurement, 2024
Social desirability bias (SDB) is a common threat to the validity of conclusions from responses to a scale or survey. There is a wide range of person-fit statistics in the literature that can be employed to detect SDB. In addition, machine learning classifiers, such as logistic regression and random forest, have the potential to distinguish…
Descriptors: Social Desirability, Bias, Artificial Intelligence, Identification
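A minimal sketch of the classifier idea mentioned above, not the authors' pipeline: logistic regression and random forest trained to flag simulated SDB responders from hypothetical person-fit features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
sdb = rng.integers(0, 2, n)                  # 1 = simulated SDB responder
# Hypothetical person-fit statistics (e.g., lz-type indices); SDB shifts them.
X = rng.normal(size=(n, 3)) + sdb[:, None] * np.array([0.8, -0.5, 0.6])

X_tr, X_te, y_tr, y_te = train_test_split(X, sdb, random_state=0)
for clf in (LogisticRegression(), RandomForestClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, "accuracy:", round(clf.score(X_te, y_te), 3))
```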
Peer reviewed
Direct link
Mangino, Anthony A.; Finch, W. Holmes – Educational and Psychological Measurement, 2021
In many fields of the social and natural sciences, data are obtained within a nested structure (e.g., students within schools). To analyze data with such a structure effectively, multilevel models are frequently employed. The present study uses a Monte Carlo simulation to compare several novel multilevel classification algorithms…
Descriptors: Prediction, Hierarchical Linear Modeling, Classification, Bayesian Statistics
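As background for the nested setting described above, a minimal sketch of the kind of two-level binary data such a Monte Carlo study generates; parameter values are illustrative only, not the study's conditions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_schools, n_per_school = 50, 30
tau = 0.8                                   # SD of school random intercepts

school = np.repeat(np.arange(n_schools), n_per_school)
u = rng.normal(scale=tau, size=n_schools)   # random intercept per school
x = rng.normal(size=school.size)            # student-level predictor
logit = -0.5 + 1.0 * x + u[school]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Between-school variation in outcome rates reflects the nesting.
rates = np.array([y[school == s].mean() for s in range(n_schools)])
print("school-level rates: min %.2f, max %.2f" % (rates.min(), rates.max()))
```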
Peer reviewed
Direct link
Goretzko, David – Educational and Psychological Measurement, 2022
Determining the number of factors in exploratory factor analysis is arguably the most crucial decision a researcher faces when conducting the analysis. While several simulation studies exist that compare various so-called factor retention criteria under different data conditions, little is known about the impact of missing data on this process.…
Descriptors: Factor Analysis, Research Problems, Data, Prediction
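The article concerns factor retention under missing data; as context, a minimal sketch of one classical retention criterion, parallel analysis, under complete simulated data (not the article's method or conditions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, n_sims = 300, 10, 100

# Hypothetical two-factor data.
F = rng.normal(size=(n, 2))
L = np.vstack([np.repeat([[0.7, 0.0]], 5, axis=0),
               np.repeat([[0.0, 0.7]], 5, axis=0)])
X = F @ L.T + rng.normal(scale=0.6, size=(n, p))

# Retain factors whose observed eigenvalues exceed those of random data.
obs_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
rand_eig = np.mean([
    np.linalg.eigvalsh(np.corrcoef(rng.normal(size=(n, p)), rowvar=False))[::-1]
    for _ in range(n_sims)], axis=0)
print("factors retained:", int((obs_eig > rand_eig).cumprod().sum()))
```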
Peer reviewed
Direct link
Jang, Yoona; Hong, Sehee – Educational and Psychological Measurement, 2023
The purpose of this study was to evaluate classification quality in the basic latent class model when covariates are or are not included in the model. To accomplish this, Monte Carlo simulations were conducted in which the results of models with and without a covariate were compared. Based on these simulations,…
Descriptors: Classification, Models, Prediction, Sample Size
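One widely used index of classification quality in latent class models is relative entropy. A minimal sketch with illustrative posterior class probabilities of our own (not the study's models):

```python
import numpy as np

def relative_entropy(post):
    """1 - sum(-p*log p) / (N*log K); 1 = perfect class separation."""
    n, k = post.shape
    p = np.clip(post, 1e-12, 1)          # guard against log(0)
    return 1 - (-(p * np.log(p)).sum()) / (n * np.log(k))

sharp = np.array([[0.95, 0.05], [0.02, 0.98], [0.90, 0.10]])
fuzzy = np.array([[0.55, 0.45], [0.48, 0.52], [0.60, 0.40]])
print(relative_entropy(sharp), relative_entropy(fuzzy))
```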
Peer reviewed
Direct link
Mangino, Anthony A.; Bolin, Jocelyn H.; Finch, W. Holmes – Educational and Psychological Measurement, 2023
This study seeks to compare fixed and mixed effects models for the purposes of predictive classification in the presence of multilevel data. The first part of the study utilizes a Monte Carlo simulation to compare fixed and mixed effects logistic regression and random forests. An applied examination of the prediction of student retention in the…
Descriptors: Prediction, Classification, Monte Carlo Methods, Foreign Countries
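A minimal sketch in the spirit of the comparison above, not the article's design: a fixed-effects (school-dummy) logistic regression versus a random forest, evaluated with school-wise cross-validation on simulated data so that test schools are unseen during training.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(4)
school = np.repeat(np.arange(20), 50)
u = rng.normal(scale=0.7, size=20)[school]     # school effects
x = rng.normal(size=(school.size, 2))
y = rng.binomial(1, 1 / (1 + np.exp(-(x @ [1.0, -0.8] + u))))

# Fixed effects via dummy indicators; note these carry no information
# for schools absent from the training folds.
X = np.column_stack([x, np.eye(20)[school]])
cv = GroupKFold(n_splits=5)
for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=0)):
    acc = cross_val_score(clf, X, y, cv=cv, groups=school).mean()
    print(type(clf).__name__, round(acc, 3))
```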
Peer reviewed
Direct link
Miyazaki, Yasuo; Kamata, Akihito; Uekawa, Kazuaki; Sun, Yizhi – Educational and Psychological Measurement, 2022
This paper investigated consequences of measurement error in the pretest on the estimate of the treatment effect in a pretest-posttest design with the analysis of covariance (ANCOVA) model, focusing on both the direction and magnitude of its bias. Some prior studies have examined the magnitude of the bias due to measurement error and suggested…
Descriptors: Error of Measurement, Pretesting, Pretests Posttests, Statistical Bias
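A minimal sketch of the mechanism at issue, under simulation assumptions of our own rather than the paper's model: pretest measurement error attenuates the covariate slope, which biases the ANCOVA treatment-effect estimate when groups differ at pretest.

```python
import numpy as np

rng = np.random.default_rng(5)
n, rel = 5000, 0.7                            # pretest reliability = 0.7
treat = rng.integers(0, 2, n)
true_pre = rng.normal(size=n) + 0.5 * treat   # nonequivalent groups
post = 1.0 * true_pre + 0.4 * treat + rng.normal(scale=0.5, size=n)
# Error scaled so within-group reliability of obs_pre is about 0.7.
obs_pre = true_pre + rng.normal(scale=np.sqrt((1 - rel) / rel), size=n)

# ANCOVA via least squares: post ~ obs_pre + treat
X = np.column_stack([np.ones(n), obs_pre, treat])
b = np.linalg.lstsq(X, post, rcond=None)[0]
print("estimated treatment effect %.3f (true 0.4)" % b[2])
```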
Peer reviewed
Direct link
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2022
In the context of Bayesian factor analysis, it is possible to compute plausible values, which might be used as covariates or predictors or to provide individual scores for the Bayesian latent variables. Previous simulation studies ascertained the validity of mean plausible values by the mean squared difference of the mean plausible values and the…
Descriptors: Bayesian Statistics, Factor Analysis, Prediction, Simulation
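A minimal sketch of plausible values under a strong simplifying assumption (a known normal posterior per person, not the article's Bayesian factor model): plausible values are posterior draws, and mean plausible values approach the posterior mean as the number of draws grows.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 1000, 20                       # persons, plausible values per person
eta = rng.normal(size=n)              # true latent scores
shrink, post_sd = 0.8, 0.4            # hypothetical posterior parameters
post_mean = shrink * eta + rng.normal(scale=0.2, size=n)

pv = rng.normal(post_mean[:, None], post_sd, size=(n, m))  # plausible values
mean_pv = pv.mean(axis=1)
# Single draws preserve latent variance; averaging shrinks toward the mean.
print("var single PV: %.2f   var mean PV: %.2f" % (pv[:, 0].var(), mean_pv.var()))
```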
Peer reviewed
Direct link
Bogaert, Jasper; Loh, Wen Wei; Rosseel, Yves – Educational and Psychological Measurement, 2023
Factor score regression (FSR) is widely used as a convenient alternative to traditional structural equation modeling (SEM) for assessing structural relations between latent variables. But when latent variables are simply replaced by factor scores, biases in the structural parameter estimates often have to be corrected, due to the measurement error…
Descriptors: Factor Analysis, Regression (Statistics), Structural Equation Models, Error of Measurement
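A minimal sketch of the bias that FSR corrections target, not the article's estimator: regressing an outcome on an uncorrected factor score attenuates the structural slope. Data are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)
n, true_beta = 10000, 0.5
xi = rng.normal(size=n)                               # latent predictor
items = xi[:, None] * 0.7 + rng.normal(scale=0.6, size=(n, 4))
y = true_beta * xi + rng.normal(scale=0.8, size=n)

score = items.mean(axis=1) / 0.7                      # naive factor score
slope = np.cov(score, y)[0, 1] / score.var()          # attenuated estimate
print("naive FSR slope %.3f vs true %.3f" % (slope, true_beta))
```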
Peer reviewed
Direct link
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G. – Educational and Psychological Measurement, 2017
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Descriptors: Testing, Performance, Prediction, Error of Measurement
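A minimal sketch of the analytic idea, under a plain 2PL model with hypothetical item parameters (not the authors' MST design): the standard error of the ability estimate at theta is one over the square root of test information, so SEs can be tabulated across theta without simulation.

```python
import numpy as np

def p2pl(theta, a, b):
    return 1 / (1 + np.exp(-a * (theta - b)))

def test_info(theta, a, b):
    p = p2pl(theta, a, b)
    return np.sum(a**2 * p * (1 - p))     # sum of 2PL item informations

a = np.array([1.2, 0.9, 1.5, 1.1])        # hypothetical module items
b = np.array([-1.0, 0.0, 0.5, 1.2])
for theta in (-2, -1, 0, 1, 2):
    se = 1 / np.sqrt(test_info(theta, a, b))
    print(f"theta={theta:+d}  SE={se:.3f}")
```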
Peer reviewed
Direct link
Campitelli, Guillermo; Macbeth, Guillermo; Ospina, Raydonal; Marmolejo-Ramos, Fernando – Educational and Psychological Measurement, 2017
We present three strategies to replace the null hypothesis statistical significance testing approach in psychological research: (1) visual representation of cognitive processes and predictions, (2) visual representation of data distributions and choice of the appropriate distribution for analysis, and (3) model comparison. The three strategies…
Descriptors: Research Methodology, Hypothesis Testing, Psychology, Social Science Research
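As one concrete reading of strategies (2) and (3), a minimal sketch under assumptions of our own (the article prescribes no particular software or data): comparing two candidate distributions for skewed response-time-like data via AIC.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Skewed "response times": exponential component plus normal shift.
data = rng.exponential(0.3, 500) + rng.normal(0.5, 0.1, 500)

for name, dist in (("normal", stats.norm), ("ex-Gaussian", stats.exponnorm)):
    params = dist.fit(data)
    aic = 2 * len(params) - 2 * dist.logpdf(data, *params).sum()
    print(f"{name:12s} AIC = {aic:.1f}")
```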
Peer reviewed
Direct link
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
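A minimal sketch of the kind of item-fit evidence such reviews weigh, not the authors' system: observed versus 2PL-predicted proportions correct within ability groups for one simulated field-test item.

```python
import numpy as np

rng = np.random.default_rng(9)
theta = rng.normal(size=2000)
a, b = 1.1, 0.2                                   # hypothetical item params
p = 1 / (1 + np.exp(-a * (theta - b)))
resp = rng.binomial(1, p)

bins = np.quantile(theta, np.linspace(0, 1, 6))   # five ability groups
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (theta >= lo) & (theta < hi)
    mid = theta[mask].mean()
    expected = 1 / (1 + np.exp(-a * (mid - b)))
    print(f"group mean theta {mid:+.2f}: obs {resp[mask].mean():.2f}  exp {expected:.2f}")
```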
Peer reviewed
Direct link
Xu, Ting; Stone, Clement A. – Educational and Psychological Measurement, 2012
It has been argued that item response theory trait estimates should be used in analyses rather than number right (NR) or summated scale (SS) scores. Thissen and Orlando postulated that IRT scaling tends to produce trait estimates that are linearly related to the underlying trait being measured. Therefore, IRT trait estimates can be more useful…
Descriptors: Educational Research, Monte Carlo Methods, Measures (Individuals), Item Response Theory
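A minimal sketch of the comparison at issue, under a simulated 2PL model rather than the article's data: EAP trait estimates are a monotone but generally nonlinear function of number-right scores.

```python
import numpy as np

rng = np.random.default_rng(10)
n, a, b = 500, np.full(15, 1.0), np.linspace(-2, 2, 15)
theta = rng.normal(size=n)
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
resp = rng.binomial(1, p)

quad = np.linspace(-4, 4, 81)                     # quadrature grid
prior = np.exp(-quad**2 / 2)                      # standard normal prior
pq = 1 / (1 + np.exp(-a[None, :] * (quad[:, None] - b[None, :])))
# Likelihood of each response pattern at each quadrature point.
like = np.prod(np.where(resp[:, None, :] == 1, pq[None], 1 - pq[None]), axis=2)
post = like * prior
eap = (post * quad).sum(axis=1) / post.sum(axis=1)

nr = resp.sum(axis=1)                             # number-right scores
for s in (3, 7, 11):
    print(f"NR={s:2d}: mean EAP {eap[nr == s].mean():+.2f}")
```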
Peer reviewed
Direct link
Paulhus, Delroy L.; Dubois, Patrick J. – Educational and Psychological Measurement, 2014
The overclaiming technique is a novel assessment procedure that uses signal detection analysis to generate indices of knowledge accuracy (OC-accuracy) and self-enhancement (OC-bias). The technique has previously shown robustness over varied knowledge domains as well as low reactivity across administration contexts. Here we compared the OC-accuracy…
Descriptors: Educational Assessment, Knowledge Level, Accuracy, Cognitive Ability
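A minimal sketch of signal detection scoring as it is commonly applied to overclaiming data (our illustration, not necessarily the authors' exact indices): accuracy as d' and bias as criterion c, computed from rates of claiming real items versus nonexistent foils.

```python
from scipy.stats import norm

hits = 0.80          # hypothetical rate of claiming knowledge of real items
false_alarms = 0.30  # hypothetical rate of claiming nonexistent foils

z_h, z_f = norm.ppf(hits), norm.ppf(false_alarms)
oc_accuracy = z_h - z_f          # d': knowledge accuracy
oc_bias = -(z_h + z_f) / 2       # c: lower values = more liberal claiming
print(f"OC-accuracy (d') = {oc_accuracy:.2f}, OC-bias (c) = {oc_bias:.2f}")
```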