Morse, Brendan J.; Johanson, George A.; Griffeth, Rodger W. – Applied Psychological Measurement, 2012
Recent simulation research has demonstrated that using simple raw scores to operationalize a latent construct can result in inflated Type I error rates for the interaction term of a moderated statistical model when the interaction (or lack thereof) is proposed at the latent variable level. Rescaling the scores using an appropriate item response…
Descriptors: Item Response Theory, Multiple Regression Analysis, Error of Measurement, Models
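A minimal sketch of this kind of simulation, assuming a moderated regression of y on x, z, and their product, with the focal predictor observed only as a coarse sum of dichotomized items; the sample size, replication count, and effect values are hypothetical, not the article's design:

```python
# Sketch of a moderated-regression Type I error simulation: no true
# interaction exists at the latent level, yet the product term is tested
# on a raw-score proxy for the latent predictor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 200, 1000, 0.05
rejections = 0

for _ in range(reps):
    xi = rng.normal(size=n)          # latent focal predictor
    z = rng.normal(size=n)           # observed moderator
    y = 0.5 * xi + 0.3 * z + rng.normal(size=n)   # NO latent interaction

    # Coarse "raw score" proxy for xi: sum of 10 dichotomized items.
    thresholds = rng.normal(size=10)
    x_raw = (xi[:, None] + rng.normal(size=(n, 10)) > thresholds).sum(axis=1)

    # Moderated multiple regression: y ~ 1 + x + z + x*z.
    X = np.column_stack([np.ones(n), x_raw, z, x_raw * z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
    p = 2 * stats.t.sf(abs(beta[3] / se[3]), df=n - X.shape[1])
    rejections += p < alpha

print(f"Empirical Type I error rate for the interaction: {rejections / reps:.3f}")
```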
Woods, Carol M. – Applied Psychological Measurement, 2011
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people versus another, irrespective of true group-mean differences on the constructs being measured. This article focuses on item response theory-based likelihood ratio testing for DIF (IRT-LR or…
Descriptors: Simulation, Item Response Theory, Testing, Questionnaires
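The core of an IRT-LR DIF test is a likelihood-ratio comparison between a model that constrains the studied item's parameters to be equal across groups and one that frees them. A skeleton of that final step, with hypothetical log-likelihood values standing in for fitted IRT models:

```python
# Likelihood-ratio DIF test: 2 * (logL_free - logL_constrained) is referred
# to a chi-square with df equal to the number of freed item parameters.
from scipy.stats import chi2

loglik_constrained = -5123.4   # hypothetical: item parameters equal across groups
loglik_free = -5118.9          # hypothetical: studied item's a and b freed

lr = 2 * (loglik_free - loglik_constrained)
df = 2                          # one discrimination + one difficulty freed
p_value = chi2.sf(lr, df)
print(f"LR = {lr:.2f}, df = {df}, p = {p_value:.4f}")
```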
Woods, Carol M. – Applied Psychological Measurement, 2009
Differential item functioning (DIF) occurs when items on a test or questionnaire have different measurement properties for one group of people versus another, irrespective of group-mean differences on the construct. Methods for testing DIF require matching members of different groups on an estimate of the construct. Preferably, the estimate is…
Descriptors: Test Results, Testing, Item Response Theory, Test Bias
Aguinis, Herman; Pierce, Charles A. – Applied Psychological Measurement, 2006
The computation and reporting of effect size estimates are becoming the norm in many journals in psychology and related disciplines. Despite the increased importance of effect sizes, researchers may not report them or may report inaccurate values because of a lack of appropriate computational tools. For instance, Pierce, Block, and Aguinis (2004)…
Descriptors: Effect Size, Multiple Regression Analysis, Predictor Variables, Error of Measurement
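A standard effect size for a moderating effect in multiple regression is Cohen's f-squared, computed from the R-squared values of the models with and without the product term. A minimal sketch on simulated data (the article concerns tools for computing such estimates accurately):

```python
# Cohen's f^2 for the interaction term in moderated multiple regression.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 0.4 * x + 0.3 * z + 0.2 * x * z + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_reduced = r_squared(np.column_stack([x, z]), y)
r2_full = r_squared(np.column_stack([x, z, x * z]), y)
f2 = (r2_full - r2_reduced) / (1 - r2_full)   # Cohen's f^2 for the interaction
print(f"f^2 = {f2:.3f}")
```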

Campbell, John B.; Chun, Ki-Taek – Applied Psychological Measurement, 1977
A multiple regression approach is used to assess the feasibility of reciprocal prediction between the Sixteen Personality Factor Questionnaire scales and the California Psychological Inventory scales (i.e., the prediction of each 16PF scale from the CPI scales and of each CPI scale from the 16PF scales). (RC)
Descriptors: Correlation, Multiple Regression Analysis, Personality Measures, Prediction
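The reciprocal-prediction design amounts to regressing each scale of one battery on all scales of the other, in both directions, and comparing the resulting squared multiple correlations. A sketch with simulated stand-ins for the 16PF and CPI scale scores:

```python
# Reciprocal prediction between two scale batteries (simulated data).
import numpy as np

rng = np.random.default_rng(2)
n = 500
pf16 = rng.normal(size=(n, 16))                   # stand-in for 16PF scales
W = rng.normal(scale=0.3, size=(16, 18))
cpi = pf16 @ W + rng.normal(size=(n, 18))         # stand-in for CPI scales

def multiple_r2(X, y):
    """Squared multiple correlation from OLS of y on X plus intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

cpi_from_pf = [multiple_r2(pf16, cpi[:, j]) for j in range(cpi.shape[1])]
pf_from_cpi = [multiple_r2(cpi, pf16[:, j]) for j in range(pf16.shape[1])]
print(f"median R^2, CPI from 16PF: {np.median(cpi_from_pf):.3f}")
print(f"median R^2, 16PF from CPI: {np.median(pf_from_cpi):.3f}")
```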

Kennedy, Eugene – Applied Psychological Measurement, 1988
A Monte Carlo study was conducted to examine the performance of several strategies for estimating the squared cross-validity coefficient of a sample regression equation in the context of best subset regression. Results concerning sample size effects and the validity of estimates are discussed. (TJH)
Descriptors: Estimation (Mathematics), Monte Carlo Methods, Multiple Regression Analysis, Predictive Validity
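The quantity being estimated here, the squared cross-validity of a sample equation, is the squared correlation between that equation's predictions and the criterion in new data. A bare-bones Monte Carlo check of that quantity (the best-subset selection step studied in the article is omitted):

```python
# Population squared cross-validity of a fitted regression equation,
# approximated by scoring it in a very large fresh sample.
import numpy as np

rng = np.random.default_rng(3)
p, n_cal, n_new = 5, 60, 100_000
beta_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])

# Derivation sample: estimate the equation.
X = rng.normal(size=(n_cal, p))
y = X @ beta_true + rng.normal(size=n_cal)
b, *_ = np.linalg.lstsq(np.column_stack([np.ones(n_cal), X]), y, rcond=None)

# Huge "population" sample: squared cross-validity of the fitted equation.
Xn = rng.normal(size=(n_new, p))
yn = Xn @ beta_true + rng.normal(size=n_new)
pred = np.column_stack([np.ones(n_new), Xn]) @ b
rho2_c = np.corrcoef(pred, yn)[0, 1] ** 2
print(f"squared cross-validity: {rho2_c:.3f}")
```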

Claudy, John G. – Applied Psychological Measurement, 1979
Equations for estimating the value of the multiple correlation coefficient in the population underlying a sample and the value of the population validity coefficient of a sample regression equation were investigated. Results indicated that cross-validation may no longer be necessary for certain purposes. (Author/MH)
Descriptors: Correlation, Mathematical Formulas, Multiple Regression Analysis, Predictor Variables
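The best known of the equations in this literature is Wherry's correction for the squared population multiple correlation; the cross-validity estimators the article evaluates shrink R-squared further, since they also account for error in the sample weights. As a reference point (notation assumed: n cases, k predictors):

```latex
% Wherry's correction: estimate of the squared population multiple
% correlation underlying a sample R^2 based on n cases and k predictors.
\hat{\rho}^{2} = 1 - \left(1 - R^{2}\right)\frac{n-1}{n-k-1}
```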

MacCallum, Robert C.; And Others – Applied Psychological Measurement, 1979
Questions are raised concerning differences between traditional metric multiple regression, which assumes all variables to be measured on interval scales, and nonmetric multiple regression. The ordinal model is generally superior in fitting derivation samples but the metric technique fits better than the nonmetric in cross-validation samples.…
Descriptors: Comparative Analysis, Multiple Regression Analysis, Nonparametric Statistics, Personnel Evaluation

Overall, John E. – Applied Psychological Measurement, 1980
The use of general linear regression methods for the analysis of categorical data is recommended. The general linear model analysis of a 0,1 coded response variable produces estimates of the same response probabilities that might otherwise be estimated from frequencies in a multiway contingency table. (Author/CTM)
Descriptors: Adults, Alcoholism, Analysis of Variance, Employment Level
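The point of the recommendation is that with a saturated dummy-variable model, OLS on a 0,1 coded response reproduces the cell proportions of the corresponding contingency table exactly. A small check on simulated data:

```python
# OLS on a 0/1 response with a saturated one-way dummy model recovers
# the cell proportions of the contingency table.
import numpy as np

rng = np.random.default_rng(4)
n = 300
g = rng.integers(0, 3, size=n)                                     # 3-level factor
y = (rng.random(n) < np.array([0.2, 0.5, 0.8])[g]).astype(float)   # 0/1 response

D = np.eye(3)[g]                               # one dummy per level (saturated)
beta, *_ = np.linalg.lstsq(D, y, rcond=None)   # no intercept needed

for level in range(3):
    print(f"level {level}: OLS estimate {beta[level]:.3f}, "
          f"cell proportion {y[g == level].mean():.3f}")   # identical
```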

Drasgow, Fritz; And Others – Applied Psychological Measurement, 1979
A Monte Carlo experiment was used to evaluate four procedures for estimating the population squared cross-validity of a sample least squares regression equation. One estimator was particularly recommended. (Author/BH)
Descriptors: Correlation, Least Squares Statistics, Mathematical Formulas, Multiple Regression Analysis

McFatter, Robert M. – Applied Psychological Measurement, 1979
The usual interpretation of suppressor effects in a multiple regression equation assumes that the correlations among variables have been generated by a particular structural model. How such a regression equation is interpreted is shown to be dependent on the structural model deemed appropriate. (Author/JKS)
Descriptors: Correlation, Critical Path Method, Data Analysis, Models
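A classical suppressor pattern makes the interpretive problem concrete: a variable uncorrelated with the criterion can still raise R-squared when added to the equation, and which structural story fits that pattern is exactly the article's point. Illustrative (hypothetical) correlations:

```python
# Suppression: x2 has zero correlation with y but correlates with x1,
# yet adding it to the equation increases R^2.
import numpy as np

r_y1, r_y2, r_12 = 0.5, 0.0, 0.6   # hypothetical correlations

r2_x1_only = r_y1 ** 2
Rxx = np.array([[1.0, r_12], [r_12, 1.0]])
rxy = np.array([r_y1, r_y2])
r2_both = rxy @ np.linalg.inv(Rxx) @ rxy   # R^2 = r' Rxx^{-1} r

print(f"R^2 with x1 alone:   {r2_x1_only:.3f}")   # 0.250
print(f"R^2 with x1 and x2:  {r2_both:.3f}")      # 0.391 -- suppression
```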
Kang, Sun-Mee; Waller, Niels G. – Applied Psychological Measurement, 2005
Two Monte Carlo studies were conducted to explore the Type I error rates in moderated multiple regression (MMR) of observed scores and estimated latent trait scores from a two-parameter logistic item response theory (IRT) model. The results of both studies showed that MMR Type I error rates were substantially higher than the nominal alpha levels…
Descriptors: Multiple Regression Analysis, Interaction, Monte Carlo Methods, Item Response Theory
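The "estimated latent trait scores" contrasted with observed scores in these simulations can be obtained by EAP scoring under a 2PL model. A self-contained sketch with hypothetical item parameters and a normal prior evaluated on a quadrature grid:

```python
# EAP (expected a posteriori) trait estimate under a 2PL IRT model.
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations (assumed known)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties (assumed known)
responses = np.array([1, 1, 0, 1, 0])      # one examinee's 0/1 item responses

# Quadrature over a standard-normal prior on theta.
theta = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * theta ** 2)

# 2PL response probabilities at each quadrature point: P = 1/(1+exp(-a(theta-b))).
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))        # shape (81, 5)
like = np.prod(np.where(responses == 1, p, 1 - p), axis=1)

posterior = like * prior
eap = (theta * posterior).sum() / posterior.sum()      # posterior mean of theta
print(f"EAP trait estimate: {eap:.3f}")
```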
DeSarbo, Wayne S.; Lehmann, Donald R.; Hollman, Frances Galliano – Applied Psychological Measurement, 2004
Preference structures that underlie survey or experimental responses may vary systematically during the administration of such measurements. Maturation, learning, fatigue, and response-strategy shifts may all affect the sequential elicitation of respondent preferences at different points in the survey or experiment. The consequence of this…
Descriptors: Maximum Likelihood Statistics, Response Style (Tests), Evaluation Methods, Responses

Schmidt, Frank L.; And Others – Applied Psychological Measurement, 1978
The present study examined and evaluated the application of linear policy-capturing models to the real-world decision task of graduate admissions. Utility of the policy-capturing models was great enough to be of practical significance, and least-squares weights showed no predictive advantage over equal weights. (Author/CTM)
Descriptors: Admission Criteria, College Admission, Grade Point Average, Graduate Study
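The equal-weights finding can be reproduced in miniature: with a modest derivation sample and positively correlated predictors, a unit-weighted composite often cross-validates as well as least-squares weights. A sketch with simulated stand-ins for the admissions predictors:

```python
# Unit weights versus least-squares weights, compared on cross-validity.
import numpy as np

rng = np.random.default_rng(5)
p, n_cal, n_val = 4, 50, 10_000
L = np.linalg.cholesky(np.full((p, p), 0.3) + 0.7 * np.eye(p))

def sample(n):
    X = rng.normal(size=(n, p)) @ L.T                 # correlated predictors
    y = X.sum(axis=1) * 0.25 + rng.normal(size=n)     # criterion
    return X, y

Xc, yc = sample(n_cal)
b, *_ = np.linalg.lstsq(np.column_stack([np.ones(n_cal), Xc]), yc, rcond=None)

Xv, yv = sample(n_val)
ols_pred = np.column_stack([np.ones(n_val), Xv]) @ b
unit_pred = Xv.sum(axis=1)                            # equal-weight composite

print(f"cross-validity, OLS weights:  {np.corrcoef(ols_pred, yv)[0, 1]:.3f}")
print(f"cross-validity, unit weights: {np.corrcoef(unit_pred, yv)[0, 1]:.3f}")
```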

DeBoeck, Paul – Applied Psychological Measurement, 1978
A comparison was made of responses to two types of personality inventory items, either sentence-type items or adjective-type items. Three methods of analysis were applied. It was concluded that the two item types give approximately equal results. (CTM)
Descriptors: Cluster Analysis, High Schools, Males, Multidimensional Scaling