Showing all 8 results
Peer reviewed
Romig, John Elwood; Therrien, William J.; Lloyd, John W. – Journal of Special Education, 2017
We used meta-analysis to examine the criterion validity of four scoring procedures used in curriculum-based measurement of written language. A total of 22 articles representing 21 studies (N = 21) met the inclusion criteria. Results indicated that two scoring procedures, correct word sequences and correct minus incorrect sequences, have acceptable…
Descriptors: Meta Analysis, Curriculum Based Assessment, Written Language, Scoring Formulas
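The two scoring procedures the abstract singles out can be illustrated with a minimal sketch. This is an assumption-laden simplification: in practice, correct word sequences (CWS) are judged by trained scorers using spelling *and* grammatical correctness, whereas here "correct" is approximated by membership in a supplied lexicon.

```python
def score_writing_sample(words, lexicon):
    """Score a writing sample with two CBM written-language metrics.

    A correct word sequence (CWS) is counted for each adjacent pair of
    words that are both "correct" (approximated here as appearing in
    the lexicon). CIWS is correct minus incorrect sequences.
    """
    correct = [w.lower() in lexicon for w in words]
    n_pairs = max(len(words) - 1, 0)
    cws = sum(1 for a, b in zip(correct, correct[1:]) if a and b)
    incorrect = n_pairs - cws  # every remaining adjacent pair
    return {"CWS": cws, "CIWS": cws - incorrect}
```

For example, a four-word sample with one misspelling yields three adjacent pairs, of which only one joins two correct words, so CIWS can go negative.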
Peer reviewed
Patnaik, Durgadas; Traub, Ross E. – Journal of Educational Measurement, 1973
Two conventional scores and a weighted score on a group test of general intelligence were compared for reliability and predictive validity. (Editor)
Descriptors: Correlation, Intelligence Tests, Measurement, Predictive Validity
Morris, John D. – 1978
Several advantages to the use of factor scores as independent variables in a multiple regression equation were found. To help select the most desirable type of factor score on which to calculate a regression equation, computer-based Monte Carlo methods were used to compare the predictive accuracy upon replication of regression of five…
Descriptors: Comparative Analysis, Correlation, Factor Analysis, Multiple Regression Analysis
Wilson, Kenneth M. – 1970
Evidence regarding the contribution of the various elements in a standard admissions battery to forecasts of freshman-year performance in eight College Research Center (CRC)-member colleges is presented. Particular note is made of evidence that the CEEB Achievement Average contributes substantially more than do the SAT scores to prediction of…
Descriptors: College Entrance Examinations, College Freshmen, Correlation, Educational Research
Peer reviewed
Morris, John D. – American Educational Research Journal, 1979
Computer-based Monte Carlo methods compared the predictive accuracy upon replication of regression of five complete and four incomplete factor score estimation methods. Prediction on incomplete factor scores showed better double cross-validated prediction accuracy than on complete scores. The unique unit-weighted factor score was superior among…
Descriptors: Correlation, Factor Analysis, Monte Carlo Methods, Multiple Regression Analysis
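The "unit-weighted factor score" that Morris found superior can be sketched as follows. This is a hypothetical illustration, not Morris's code: an incomplete factor score keeps only variables with salient loadings, and unit weighting replaces the loading values with ±1. The 0.3 salience threshold is an assumed convention.

```python
import numpy as np

def unit_weighted_factor_scores(X, loadings, threshold=0.3):
    """Incomplete, unit-weighted factor scores.

    Each factor score is the sum of standardized variables whose
    absolute loading on that factor meets the salience threshold,
    weighted +1 or -1 by the loading's sign.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize columns
    W = (np.abs(loadings) >= threshold) * np.sign(loadings)
    return Z @ W                                       # (n, n_factors)
```

Because the weights are just signs, such scores tend to be less sample-dependent than regression-based estimates, which is one plausible reason they cross-validate well.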
Bayuk, Robert J. – 1973
An investigation was conducted to determine the effects of response-category weighting and item weighting on reliability and predictive validity. Response-category weighting refers to scoring in which, for each category (including omit and "not read"), a weight is assigned that is proportional to the mean criterion score of examinees selecting…
Descriptors: Aptitude Tests, Correlation, Predictive Validity, Research Reports
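The response-category weighting the abstract defines has a direct implementation: each category's weight is proportional to the mean criterion score of the examinees who chose it. A minimal sketch, assuming the simplest case (the weight *is* that mean, with "omit" treated as just another category):

```python
from collections import defaultdict

def category_weights(responses, criterion):
    """Response-category weights for a single item.

    Each category (including 'omit' or 'not read') is weighted by the
    mean criterion score of the examinees who selected it.
    """
    by_category = defaultdict(list)
    for choice, y in zip(responses, criterion):
        by_category[choice].append(y)
    return {c: sum(ys) / len(ys) for c, ys in by_category.items()}
```

Scoring an examinee then means summing, over items, the weight of the category they selected.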
Downey, Ronald G.
Previous research has studied the effects of different methods of item option weighting on the reliability and concurrent and predictive validity of achievement tests. Increases in reliability are generally found, but with mixed results for validity. Several methods of producing option weights, (i.e., Guttman internal and external weights and…
Descriptors: Achievement Tests, Comparative Analysis, Correlation, Grade Point Average
Wilson, Kenneth M. – 1971
To determine whether it is necessary for eight College Research Center (CRC) member colleges to use 16 distinct sets of weights in combining four admissions test scores into predictive composites, a standard set of weights was developed and tested for effectiveness in predicting Freshman Average Grade (FAG). The basic sample in which weights…
Descriptors: Academic Aptitude, College Entrance Examinations, College Freshmen, Correlation
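Developing one standard set of composite weights, as in the Wilson study above, amounts to fitting a single least-squares regression of the criterion on the test scores across the pooled sample. A minimal sketch with assumed variable names (the actual CRC weighting procedure is not specified in the abstract):

```python
import numpy as np

def standard_weights(scores, fag):
    """Fit one common set of least-squares weights for combining
    admissions test scores (columns of `scores`) into a composite
    predicting the criterion (e.g., Freshman Average Grade)."""
    A = np.column_stack([scores, np.ones(len(scores))])  # add intercept
    coef, *_ = np.linalg.lstsq(A, fag, rcond=None)
    return coef[:-1], coef[-1]                           # weights, intercept
```

The empirical question in the study is then whether this one set of weights predicts about as well in each college as 16 locally derived sets would.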