Showing 1 to 15 of 215 results
Peer reviewed
Yongtian Cheng; K. V. Petrides – Educational and Psychological Measurement, 2025
Psychologists are emphasizing the importance of predictive conclusions. Machine learning methods, such as supervised neural networks, have been used in psychological studies as they naturally fit prediction tasks. However, we are concerned about whether neural networks fitted with random datasets (i.e., datasets where there is no relationship…
Descriptors: Psychological Studies, Artificial Intelligence, Cognitive Processes, Predictive Validity
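As an illustration of the concern raised in this abstract (not the authors' own analysis), the minimal sketch below fits a small supervised neural network to a purely random dataset in which the outcome is unrelated to the predictors; the in-sample fit can look respectable even though cross-validated performance shows there is nothing to predict. The sample size, architecture, and variable names are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Random dataset: the outcome is independent of all predictors by construction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.normal(size=200)

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)

# In-sample R^2 is often well above zero despite the absence of any relationship...
in_sample_r2 = net.fit(X, y).score(X, y)

# ...while cross-validated R^2 hovers around or below zero.
cv_r2 = cross_val_score(net, X, y, cv=5, scoring="r2").mean()

print(f"in-sample R^2: {in_sample_r2:.2f}, cross-validated R^2: {cv_r2:.2f}")
```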
Peer reviewed
W. Holmes Finch – Educational and Psychological Measurement, 2024
Dominance analysis (DA) is a very useful tool for ordering independent variables in a regression model based on their relative importance in explaining variance in the dependent variable. This approach, which was originally described by Budescu, has recently been extended for use with structural equation models examining relationships among latent…
Descriptors: Models, Regression (Statistics), Structural Equation Models, Predictor Variables
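For readers unfamiliar with dominance analysis in its original regression setting, the sketch below computes general dominance weights by averaging each predictor's incremental R² across all subsets of the remaining predictors (within each subset size, then across sizes); the weights sum to the full-model R². This is a minimal illustration on made-up data, not the structural equation modeling extension examined in the article.

```python
import numpy as np
from itertools import combinations

def r2(X, y, cols):
    """R^2 from an OLS regression of y on the listed columns of X (plus intercept)."""
    if not cols:
        return 0.0
    Z = np.column_stack([np.ones(len(y)), X[:, cols]])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return 1.0 - resid.var() / y.var()

def general_dominance(X, y):
    p = X.shape[1]
    weights = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        by_size = []
        for size in range(p):  # average incremental R^2 within each subset size
            incs = [r2(X, y, list(S) + [j]) - r2(X, y, list(S))
                    for S in combinations(others, size)]
            by_size.append(np.mean(incs))
        weights[j] = np.mean(by_size)  # then average across sizes
    return weights

# Made-up example: three correlated predictors with different true effects.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0, 0],
                            [[1, .5, .3], [.5, 1, .4], [.3, .4, 1]], 500)
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=500)

w = general_dominance(X, y)
print("general dominance weights:", np.round(w, 3), "sum:", round(w.sum(), 3))
```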
Peer reviewed
André Beauducel; Norbert Hilger; Tobias Kuhl – Educational and Psychological Measurement, 2024
Regression factor score predictors have the maximum factor score determinacy, that is, the maximum correlation with the corresponding factor, but they do not have the same inter-correlations as the factors. As it might be useful to compute factor score predictors that have the same inter-correlations as the factors, correlation-preserving factor…
Descriptors: Scores, Factor Analysis, Correlation, Predictor Variables
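As background for this entry (a textbook formulation, not the correlation-preserving predictor proposed in the article), the sketch below computes regression factor score predictors for a hypothetical two-factor model with loadings Λ, factor correlations Φ, and unique variances Ψ; the resulting scores correlate maximally with the factors, but their inter-correlations generally differ from Φ. All numerical values are assumptions.

```python
import numpy as np

# Hypothetical two-factor model: loadings, factor correlations, unique variances.
L   = np.array([[.7, 0], [.6, 0], [.8, 0],
                [0, .7], [0, .6], [0, .8]])      # Lambda
Phi = np.array([[1.0, 0.3], [0.3, 1.0]])
Psi = np.diag(1 - np.diag(L @ Phi @ L.T))        # unique variances (standardized items)

Sigma = L @ Phi @ L.T + Psi                      # model-implied covariance matrix

# Regression (Thurstone) factor score weights: W = Sigma^{-1} Lambda Phi,
# so predicted factor scores for observed data X are X @ W.
W = np.linalg.solve(Sigma, L @ Phi)

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(6), Sigma, size=2000)
scores = X @ W

# The score predictors' inter-correlations typically differ from Phi.
print(np.round(np.corrcoef(scores.T), 2))
```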
Peer reviewed
Christine E. DeMars; Paulius Satkus – Educational and Psychological Measurement, 2024
Marginal maximum likelihood, a common estimation method for item response theory models, is not inherently a Bayesian procedure. However, due to estimation difficulties, Bayesian priors are often applied to the likelihood when estimating 3PL models, especially with small samples. Little focus has been placed on choosing the priors for marginal…
Descriptors: Item Response Theory, Statistical Distributions, Error of Measurement, Bayesian Statistics
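To make the idea of a prior on the likelihood concrete (an illustrative sketch, not the specific priors evaluated in the article), the code below writes the 3PL item response function and adds a Beta prior on the lower-asymptote (guessing) parameter to an item's log-likelihood, the kind of penalty often used to stabilize 3PL estimation in small samples. The Beta(5, 17) values and the simulated data are assumptions.

```python
import numpy as np
from scipy.stats import beta

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def item_loglik(params, theta, x, prior=(5, 17)):
    """Item log-likelihood with a Beta prior on the guessing parameter c."""
    a, b, c = params
    p = p_3pl(theta, a, b, c)
    ll = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    return ll + beta.logpdf(c, *prior)           # prior pulls c toward roughly .2

# Assumed example: simulated abilities and responses to a single item.
rng = np.random.default_rng(0)
theta = rng.normal(size=500)
x = rng.binomial(1, p_3pl(theta, a=1.2, b=0.0, c=0.2))

print(item_loglik((1.2, 0.0, 0.2), theta, x))
```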
Peer reviewed
Sim, Mikyung; Kim, Su-Young; Suh, Youngsuk – Educational and Psychological Measurement, 2022
Mediation models have been widely used in many disciplines to better understand the underlying processes between independent and dependent variables. Despite their popularity and importance, the appropriate sample sizes for estimating those models are not well known. Although several approaches (such as Monte Carlo methods) exist, applied…
Descriptors: Sample Size, Statistical Analysis, Predictor Variables, Path Analysis
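To make the Monte Carlo approach concrete (an illustrative sketch, not the article's procedure), the code below estimates empirical power for the indirect effect a·b in a simple X → M → Y mediation model at a few candidate sample sizes, using the Sobel test for brevity; the path values, replication count, and choice of test are assumptions, and a bootstrap test could be substituted.

```python
import numpy as np
from scipy.stats import norm

def ols(Z, y):
    """OLS coefficients and their standard errors."""
    coef = np.linalg.lstsq(Z, y, rcond=None)[0]
    resid = y - Z @ coef
    sigma2 = resid @ resid / (len(y) - Z.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Z.T @ Z)))
    return coef, se

def sobel_power(n, a=0.3, b=0.3, reps=1000, alpha=0.05, seed=0):
    """Empirical power for the indirect effect a*b in a simple mediation model."""
    rng = np.random.default_rng(seed)
    crit, hits = norm.ppf(1 - alpha / 2), 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        (_, a_hat), (_, se_a) = ols(np.column_stack([np.ones(n), x]), m)
        (_, _, b_hat), (_, _, se_b) = ols(np.column_stack([np.ones(n), x, m]), y)
        z = a_hat * b_hat / np.sqrt(b_hat**2 * se_a**2 + a_hat**2 * se_b**2)
        hits += abs(z) > crit
    return hits / reps

# Scan candidate sample sizes for one that reaches, say, 80% power.
for n in (50, 100, 200):
    print(n, sobel_power(n))
```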
Peer reviewed
Magnus, Brooke E.; Liu, Yang – Educational and Psychological Measurement, 2022
Questionnaires inquiring about psychopathology symptoms often produce data with excess zeros or the equivalent (e.g., none, never, and not at all). This type of zero inflation is especially common in nonclinical samples in which many people do not exhibit psychopathology, and if unaccounted for, can result in biased parameter estimates when…
Descriptors: Symptoms (Individual Disorders), Psychopathology, Research Methodology, Probability
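For readers unfamiliar with zero-inflated modeling, the sketch below writes out a zero-inflated Poisson log-likelihood, in which an observed zero can come either from a structural-zero class (probability π) or from the count process, and fits it by maximum likelihood. It is a generic illustration with assumed data, not the particular model studied in the article.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson model.

    params = (logit of pi, log of lambda); pi is the structural-zero probability.
    """
    pi, lam = expit(params[0]), np.exp(params[1])
    pois_logpmf = -lam + y * np.log(lam) - gammaln(y + 1)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))   # zeros: either source
    ll_pos = np.log(1 - pi) + pois_logpmf            # positive counts
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

# Assumed example: 60% structural zeros, Poisson mean 2 for the remainder.
rng = np.random.default_rng(0)
n = 1000
structural = rng.binomial(1, 0.6, n)
y = np.where(structural == 1, 0, rng.poisson(2.0, n))

fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,))
print("estimated pi:", round(expit(fit.x[0]), 2),
      "lambda:", round(np.exp(fit.x[1]), 2))
```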
Peer reviewed
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
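As a concrete picture of the decomposition idea (an illustrative coding, not the authors' specific model), the sketch below maps 5-point ratings onto the three binary pseudo-items of a common IRTree specification: a midpoint decision, a direction decision, and an extremity decision. The response-scale labels are assumptions.

```python
import numpy as np

def irtree_pseudoitems(response):
    """Decompose a 1-5 rating into three sub-decisions (np.nan = node not reached).

    node 1: midpoint (1) vs. non-midpoint (0)
    node 2: agree (1) vs. disagree (0), defined only for non-midpoint responses
    node 3: extreme (1) vs. moderate (0), defined only for non-midpoint responses
    """
    mid = 1 if response == 3 else 0
    direction = np.nan if mid else int(response > 3)
    extreme = np.nan if mid else int(response in (1, 5))
    return mid, direction, extreme

for r in range(1, 6):
    print(r, irtree_pseudoitems(r))

# Each pseudo-item is then given its own IRT model, loading on either the target
# trait (direction node) or a response-style factor (midpoint and extremity nodes).
```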
Peer reviewed
Jana Welling; Timo Gnambs; Claus H. Carstensen – Educational and Psychological Measurement, 2024
Disengaged responding poses a severe threat to the validity of educational large-scale assessments, because item responses from unmotivated test-takers do not reflect their actual ability. Existing identification approaches rely primarily on item response times, which bears the risk of misclassifying fast engaged or slow disengaged responses…
Descriptors: Foreign Countries, College Students, Guessing (Tests), Multiple Choice Tests
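To illustrate the kind of response-time rule this article treats as a limited baseline (a common normative-threshold heuristic, not the method proposed here), the sketch below flags a response as a rapid guess when its time falls below a fixed fraction of the item's median response time. The 10% threshold and the simulated data are assumptions.

```python
import numpy as np

def flag_rapid_guesses(rt, fraction=0.10):
    """Flag responses faster than `fraction` of each item's median response time.

    rt: persons x items matrix of response times in seconds.
    """
    thresholds = fraction * np.median(rt, axis=0)    # one threshold per item
    return rt < thresholds

# Assumed example: log-normal response times with a few injected rapid guesses.
rng = np.random.default_rng(0)
rt = rng.lognormal(mean=3.0, sigma=0.5, size=(200, 20))
mask = rng.random(rt.shape) < 0.05
rt[mask] = rng.uniform(0.5, 2.0, size=mask.sum())

flags = flag_rapid_guesses(rt)
print("flagged proportion:", round(float(flags.mean()), 3))
```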
Peer reviewed
Murrah, William M. – Educational and Psychological Measurement, 2020
Multiple regression is often used to compare the importance of two or more predictors. When the predictors being compared are measured with error, the estimated coefficients can be biased and Type I error rates can be inflated. This study explores the impact of measurement error on comparing predictors when one is measured with error, followed by…
Descriptors: Error of Measurement, Statistical Bias, Multiple Regression Analysis, Predictor Variables
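As a minimal demonstration of the problem described here (a generic simulation, not the article's design), the sketch below compares two predictors with equal true effects when one is observed with measurement error: the error-laden predictor's coefficient is attenuated, which can distort conclusions about relative importance. The reliability of .70 and the effect sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two true predictors with equal effects on the outcome.
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.4 * x1 + 0.4 * x2 + rng.normal(size=n)

# x2 is observed with error corresponding to a reliability of about .70.
x2_obs = x2 + rng.normal(scale=np.sqrt(1 / 0.7 - 1), size=n)

def slopes(a, b, y):
    """Slopes from regressing y on two predictors (with intercept)."""
    Z = np.column_stack([np.ones(len(y)), a, b])
    return np.linalg.lstsq(Z, y, rcond=None)[0][1:]

print("both measured without error:", slopes(x1, x2, y).round(2))
print("x2 measured with error:     ", slopes(x1, x2_obs, y).round(2))
```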
Peer reviewed
Kim, Eunsook; von der Embse, Nathaniel – Educational and Psychological Measurement, 2021
Although collecting data from multiple informants is highly recommended, methods to model the congruence and incongruence between informants are limited. Bauer and colleagues suggested the trifactor model, which decomposes the variances into a common factor, informant perspective factors, and item-specific factors. This study extends their work to the…
Descriptors: Probability, Models, Statistical Analysis, Congruence (Psychology)
Peer reviewed
Nagy, Gabriel; Ulitzsch, Esther – Educational and Psychological Measurement, 2022
Disengaged item responses pose a threat to the validity of the results provided by large-scale assessments. Several procedures for identifying disengaged responses on the basis of observed response times have been suggested, and item response theory (IRT) models for response engagement have been proposed. We outline that response time-based…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Predictor Variables, Classification
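One simple model-based alternative to fixed thresholds, shown here only as an illustrative sketch and not as the model-based approach discussed in the article, is to fit a two-component mixture to log response times and treat the faster component as disengaged responding. The data and settings are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Assumed data: a fast "disengaged" component and a slower "engaged" component.
rng = np.random.default_rng(0)
log_rt = np.concatenate([rng.normal(0.5, 0.3, 150),    # disengaged (~1.6 s)
                         rng.normal(3.0, 0.5, 850)])   # engaged (~20 s)

gm = GaussianMixture(n_components=2, random_state=0).fit(log_rt.reshape(-1, 1))
fast = np.argmin(gm.means_.ravel())                    # component with the smaller mean
p_disengaged = gm.predict_proba(log_rt.reshape(-1, 1))[:, fast]

print("estimated disengaged proportion:", round(float((p_disengaged > 0.5).mean()), 3))
```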
Peer reviewed
Sorjonen, Kimmo; Melin, Bo; Ingre, Michael – Educational and Psychological Measurement, 2019
The present simulation study indicates that a method where the regression effect of a predictor (X) on an outcome at follow-up (Y1) is calculated while adjusting for the outcome at baseline (Y0) can give spurious findings, especially when there is a strong correlation between X and Y0 and when the test-retest correlation between Y0 and Y1 is…
Descriptors: Predictor Variables, Regression (Statistics), Correlation, Error of Measurement
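The mechanism can be reproduced in a few lines (an illustrative simulation under assumed parameter values, not the authors' exact design): when X correlates with the baseline Y0 and Y0 is measured with less-than-perfect reliability, regressing the follow-up Y1 on X while adjusting for Y0 produces a nonzero X coefficient even though X has no effect on change.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_y = rng.normal(size=n)                       # stable true score
x = 0.6 * true_y + rng.normal(scale=0.8, size=n)  # X correlates with the baseline
y0 = true_y + rng.normal(scale=0.7, size=n)       # baseline measured with error
y1 = true_y + rng.normal(scale=0.7, size=n)       # follow-up; X has no causal effect

Z = np.column_stack([np.ones(n), x, y0])
coef = np.linalg.lstsq(Z, y1, rcond=None)[0]
print("spurious effect of X on Y1, adjusting for Y0:", round(coef[1], 3))
```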
Peer reviewed
Robie, Chet; Meade, Adam W.; Risavy, Stephen D.; Rasheed, Sabah – Educational and Psychological Measurement, 2022
The effects of different response option orders on survey responses have been studied extensively. The typical research design involves examining the differences in response characteristics between conditions with the same item stems and response option orders that differ in valence--either incrementally arranged (e.g., strongly disagree to…
Descriptors: Likert Scales, Psychometrics, Surveys, Responses
Peer reviewed
Ames, Allison J.; Myers, Aaron J. – Educational and Psychological Measurement, 2021
Contamination of responses due to extreme and midpoint response style can confound the interpretation of scores, threatening the validity of inferences made from survey responses. This study incorporated person-level covariates in the multidimensional item response tree model to explain heterogeneity in response style. We include an empirical…
Descriptors: Response Style (Tests), Item Response Theory, Longitudinal Studies, Adolescents
Peer reviewed
No, Unkyung; Hong, Sehee – Educational and Psychological Measurement, 2018
The purpose of the present study is to compare performances of mixture modeling approaches (i.e., one-step approach, three-step maximum-likelihood approach, three-step BCH approach, and LTB approach) based on diverse sample size conditions. To carry out this research, two simulation studies were conducted with two different models, a latent class…
Descriptors: Sample Size, Classification, Comparative Analysis, Statistical Analysis
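To fix ideas about what the compared approaches do (an illustrative sketch using a Gaussian mixture as a stand-in, not the latent class models or the three-step ML/BCH/LTB corrections evaluated in the article), the code below runs the naive version of the three-step logic: enumerate the mixture, assign each case to its modal class, then relate the assignment to an external variable. The corrected three-step approaches adjust that last step for classification error.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical two-class data: indicators shift in mean across latent classes.
rng = np.random.default_rng(1)
n = 1000
cls = rng.binomial(1, 0.4, n)                      # true latent class membership
X = rng.normal(loc=cls[:, None] * 1.5, scale=1.0, size=(n, 4))
covariate = 0.8 * cls + rng.normal(size=n)         # external variable related to class

# Step 1: fit the unconditional mixture (enumeration step).
gm = GaussianMixture(n_components=2, n_init=5, random_state=1).fit(X)

# Step 2: assign each case to its modal class.
modal = gm.predict(X)

# Step 3 (naive): relate the assigned class to the external variable.
print("mean covariate by assigned class:",
      [round(float(covariate[modal == k].mean()), 2) for k in range(2)])
```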