Publication Date
In 2025 | 0 |
Since 2024 | 2 |
Since 2021 (last 5 years) | 4 |
Since 2016 (last 10 years) | 8 |
Since 2006 (last 20 years) | 21 |
Source
Educational and Psychological Measurement | 56 |
Author
Algina, James | 3 |
Ames, Allison J. | 1 |
Beauducel, André | 1 |
Aydin, Burak | 1 |
Bagger, Jessica | 1 |
Baird, Leonard L. | 1 |
Batlis, Nick C. | 1 |
Berry, Kenneth J. | 1 |
Bobko, Philip | 1 |
Borrello, Gloria M. | 1 |
Budescu, David V. | 1 |
Publication Type
Journal Articles | 48 |
Reports - Research | 37 |
Reports - Evaluative | 9 |
Guides - Non-Classroom | 1 |
Numerical/Quantitative Data | 1 |
Reports - Descriptive | 1 |
Education Level
Higher Education | 6 |
Postsecondary Education | 3 |
Location
Canada | 1 |
New Zealand | 1 |
André Beauducel; Norbert Hilger; Tobias Kuhl – Educational and Psychological Measurement, 2024
Regression factor score predictors have the maximum factor score determinacy, that is, the maximum correlation with the corresponding factor, but they do not have the same inter-correlations as the factors. As it might be useful to compute factor score predictors that have the same inter-correlations as the factors, correlation-preserving factor…
Descriptors: Scores, Factor Analysis, Correlation, Predictor Variables
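A minimal numpy sketch of the quantities this abstract refers to, using a toy two-factor model (the loadings, factor correlations, and uniquenesses are invented for illustration): it computes the regression factor score weights, their determinacies, and the inter-correlations of the score predictors, which generally differ from the factor correlations.

```python
import numpy as np

# Toy two-factor model: loadings, factor correlations, uniquenesses.
L = np.array([[0.8, 0.0],
              [0.7, 0.0],
              [0.6, 0.0],
              [0.0, 0.8],
              [0.0, 0.7],
              [0.0, 0.6]])
Phi = np.array([[1.0, 0.4],
                [0.4, 1.0]])
Psi = np.diag(1.0 - np.diag(L @ Phi @ L.T))    # uniquenesses so that diag(R) = 1
R = L @ Phi @ L.T + Psi                         # model-implied correlation matrix

# Regression (Thurstone) factor score weights: W = R^{-1} L Phi
W = np.linalg.solve(R, L @ Phi)

# Factor score determinacy: corr(score_j, factor_j) = sqrt of diag(Phi L' R^{-1} L Phi)
C = Phi @ L.T @ np.linalg.solve(R, L) @ Phi
determinacy = np.sqrt(np.diag(C))

# Inter-correlations of the regression score predictors (generally != Phi)
score_cov = W.T @ R @ W
d = np.sqrt(np.diag(score_cov))
score_corr = score_cov / np.outer(d, d)

print("determinacy:", determinacy)
print("factor correlations:\n", Phi)
print("score predictor correlations:\n", score_corr)
```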
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
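As a concrete illustration of the decomposition idea (not the authors' specific model), one common IRTree coding of a 5-point rating splits each response into three binary pseudo-items: midpoint use, direction, and extremity. A minimal sketch:

```python
import numpy as np

def irtree_pseudoitems(y):
    """Map 5-point responses (1..5) to three binary pseudo-items of a common
    IRTree decomposition: midpoint, direction, extremity.
    np.nan marks sub-decisions that are not reached for a given response."""
    y = np.asarray(y, dtype=float)
    mid = (y == 3).astype(float)                                   # midpoint chosen?
    direction = np.where(y == 3, np.nan, (y > 3).astype(float))    # agree vs. disagree
    extreme = np.where(y == 3, np.nan,
                       np.isin(y, (1, 5)).astype(float))           # extreme category?
    return np.column_stack([mid, direction, extreme])

print(irtree_pseudoitems([1, 2, 3, 4, 5]))
```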
Sorjonen, Kimmo; Melin, Bo; Ingre, Michael – Educational and Psychological Measurement, 2019
The present simulation study indicates that a method where the regression effect of a predictor (X) on an outcome at follow-up (Y1) is calculated while adjusting for the outcome at baseline (Y0) can give spurious findings, especially when there is a strong correlation between X and Y0 and when the test-retest correlation between Y0 and Y1 is…
Descriptors: Predictor Variables, Regression (Statistics), Correlation, Error of Measurement
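A minimal simulation sketch of the general phenomenon (parameter values invented, not the authors' design): when the baseline Y0 measures the shared true score with error, adjusting for Y0 leaves a spurious coefficient on X even though X has no effect on the follow-up beyond that true score.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True score T drives both baseline Y0 and follow-up Y1; X has no effect on Y1
# beyond T, but correlates strongly with T (and hence with Y0).
T = rng.standard_normal(n)
X = 0.8 * T + 0.6 * rng.standard_normal(n)
Y0 = T + 0.7 * rng.standard_normal(n)      # baseline, measured with error
Y1 = T + 0.7 * rng.standard_normal(n)      # follow-up, measured with error

# OLS of Y1 on [1, X, Y0]: the coefficient on X is spuriously nonzero
# because Y0 only partially adjusts for T.
Z = np.column_stack([np.ones(n), X, Y0])
beta = np.linalg.lstsq(Z, Y1, rcond=None)[0]
print("intercept, b_X, b_Y0:", beta)
```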
Robie, Chet; Meade, Adam W.; Risavy, Stephen D.; Rasheed, Sabah – Educational and Psychological Measurement, 2022
The effects of different response option orders on survey responses have been studied extensively. The typical research design involves examining the differences in response characteristics between conditions with the same item stems and response option orders that differ in valence--either incrementally arranged (e.g., strongly disagree to…
Descriptors: Likert Scales, Psychometrics, Surveys, Responses
Ames, Allison J.; Myers, Aaron J. – Educational and Psychological Measurement, 2021
Contamination of responses due to extreme and midpoint response style can confound the interpretation of scores, threatening the validity of inferences made from survey responses. This study incorporated person-level covariates in the multidimensional item response tree model to explain heterogeneity in response style. We include an empirical…
Descriptors: Response Style (Tests), Item Response Theory, Longitudinal Studies, Adolescents
Trafimow, David – Educational and Psychological Measurement, 2018
Because error variance can alternatively be considered the sum of systematic variance associated with unknown variables and randomness, a tripartite assumption is proposed that total variance in the dependent variable can be partitioned into three variance components. These are variance in the dependent variable that is explained by the…
Descriptors: Statistical Analysis, Correlation, Experiments, Effect Size
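Restated compactly (notation mine, following the partition the abstract describes):

```latex
\sigma^2_{Y} \;=\;
\underbrace{\sigma^2_{\hat{Y}}}_{\text{explained by the predictors}}
\;+\;
\underbrace{\sigma^2_{U}}_{\text{systematic, unknown variables}}
\;+\;
\underbrace{\sigma^2_{E}}_{\text{randomness}}
```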
Hamby, Tyler; Taylor, Wyn – Educational and Psychological Measurement, 2016
This study examined the predictors and psychometric outcomes of survey satisficing, wherein respondents provide quick, "good enough" answers (satisficing) rather than carefully considered answers (optimizing). We administered surveys to university students and respondents--half of whom held college degrees--from a for-pay survey website,…
Descriptors: Surveys, Test Reliability, Test Validity, Comparative Analysis
Raykov, Tenko; Lee, Chun-Lung; Marcoulides, George A.; Chang, Chi – Educational and Psychological Measurement, 2013
The relationship between saturated path-analysis models and their fit to data is revisited. It is demonstrated that a saturated model need not fit a given data set perfectly, or even well, when fit to the raw data is examined, a criterion frequently overlooked by researchers using path analysis modeling techniques. The potential of…
Descriptors: Structural Equation Models, Goodness of Fit, Path Analysis, Correlation
Shear, Benjamin R.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Descriptors: Error of Measurement, Multiple Regression Analysis, Data Analysis, Computer Simulation
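A minimal Monte Carlo sketch of the mechanism (scenario and parameter values invented for illustration): a covariate measured with error fails to fully adjust for a confound, so the nominal 5% test of a truly null predictor rejects far more often than 5%.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, crit = 200, 2000, 1.96   # ~5% two-sided cutoff for large n
rejections = 0

for _ in range(reps):
    W = rng.standard_normal(n)                     # true covariate
    X = 0.7 * W + 0.71 * rng.standard_normal(n)    # correlates with W, no effect on Y
    Y = 1.0 * W + rng.standard_normal(n)           # Y depends only on W
    W_obs = W + rng.standard_normal(n)             # W measured with error

    Z = np.column_stack([np.ones(n), X, W_obs])
    beta, _, _, _ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Y - Z @ beta
    s2 = resid @ resid / (n - Z.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(Z.T @ Z)[1, 1])
    if abs(beta[1] / se) > crit:                   # nominal 5% test of the null effect of X
        rejections += 1

print("empirical Type I error rate for X:", rejections / reps)
```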
Aydin, Burak; Leite, Walter L.; Algina, James – Educational and Psychological Measurement, 2016
We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…
Descriptors: Error of Measurement, Predictor Variables, Randomized Controlled Trials, Experimental Groups
Paulhus, Delroy L.; Dubois, Patrick J. – Educational and Psychological Measurement, 2014
The overclaiming technique is a novel assessment procedure that uses signal detection analysis to generate indices of knowledge accuracy (OC-accuracy) and self-enhancement (OC-bias). The technique has previously shown robustness over varied knowledge domains as well as low reactivity across administration contexts. Here we compared the OC-accuracy…
Descriptors: Educational Assessment, Knowledge Level, Accuracy, Cognitive Ability
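As one common signal-detection parameterization of such indices (the authors' exact definitions may differ), accuracy can be taken as d' and bias as the criterion c, computed from claim rates on real items versus foils:

```python
from statistics import NormalDist

def overclaiming_indices(claims_real, claims_foil, n_real, n_foil):
    """Signal-detection indices from claimed knowledge of real items vs. foils.
    Returns (accuracy d', bias c) under the equal-variance Gaussian model;
    rates are nudged away from 0/1 so the z-transform stays finite."""
    z = NormalDist().inv_cdf
    hit_rate = min(max(claims_real / n_real, 0.5 / n_real), 1 - 0.5 / n_real)
    fa_rate = min(max(claims_foil / n_foil, 0.5 / n_foil), 1 - 0.5 / n_foil)
    d_prime = z(hit_rate) - z(fa_rate)               # accuracy: real vs. foil discrimination
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))    # bias: overall tendency to claim
    return d_prime, criterion

# A respondent claiming 12 of 15 real items and 4 of 5 foils
print(overclaiming_indices(12, 4, 15, 5))
```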
Kobrin, Jennifer L.; Kim, YoungKoung; Sackett, Paul R. – Educational and Psychological Measurement, 2012
There is much debate on the merits and pitfalls of standardized tests for college admission, with questions regarding the format (multiple-choice vs. constructed response), cognitive complexity, and content of these assessments (achievement vs. aptitude) at the forefront of the discussion. This study addressed these questions by investigating the…
Descriptors: Grade Point Average, Standardized Tests, Predictive Validity, Predictor Variables
Chan, Wai – Educational and Psychological Measurement, 2009
A typical question in multiple regression analysis is to determine if a set of predictors gives the same degree of predictive power in two different populations. Olkin and Finn (1995) proposed two asymptotic-based methods for testing the equality of two population squared multiple correlations, ρ₁² and…
Descriptors: Multiple Regression Analysis, Intervals, Correlation, Computation
Algina, James; Keselman, Harvey J.; Penfield, Randall J. – Educational and Psychological Measurement, 2008
A squared semipartial correlation coefficient (ΔR²) is the increase in the squared multiple correlation coefficient that occurs when a predictor is added to a multiple regression model. Prior research has shown that coverage probability for a confidence interval constructed by using a modified percentile bootstrap method with…
Descriptors: Intervals, Correlation, Probability, Multiple Regression Analysis
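A generic sketch of the quantity involved, using a plain percentile bootstrap rather than the modified percentile bootstrap the authors study: ΔR² is estimated as the change in R² when the new predictor enters the model.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (intercept added)."""
    Z = np.column_stack([np.ones(len(y)), X])
    yhat = Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def delta_r2_bootstrap_ci(X_reduced, x_new, y, n_boot=2000, seed=0):
    """Plain percentile bootstrap CI for the squared semipartial correlation,
    i.e. the increase in R^2 when x_new joins the model (a generic sketch,
    not the modified percentile bootstrap examined in the article)."""
    rng = np.random.default_rng(seed)
    X_full = np.column_stack([X_reduced, x_new])
    point = r_squared(X_full, y) - r_squared(X_reduced, y)
    n = len(y)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        boots.append(r_squared(X_full[idx], y[idx]) - r_squared(X_reduced[idx], y[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)
```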
Algina, James; Keselman, H. J. – Educational and Psychological Measurement, 2008
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)
Descriptors: Intervals, Sample Size, Validity, Hypothesis Testing