Showing 1 to 15 of 33 results
Peer reviewed
Zwick, Rebecca; Himelfarb, Igor – Journal of Educational Measurement, 2011
Research has often found that, when high school grades and SAT scores are used to predict first-year college grade-point average (FGPA) via regression analysis, African-American and Latino students are, on average, predicted to earn higher FGPAs than they actually do. Under various plausible models, this phenomenon can be explained in terms of…
Descriptors: Socioeconomic Status, Grades (Scholastic), Error of Measurement, White Students
Peer reviewed
Raffeld, Paul – Journal of Educational Measurement, 1975
Results support the contention that a Guttman-weighted objective test can have psychometric properties that are superior to those of its unweighted counterpart, as long as omissions do not exist or are assigned a value equal to the mean of the k item alternative weights. (Author/BJG)
Descriptors: Multiple Choice Tests, Predictive Validity, Test Reliability, Test Validity
Peer reviewed
Millman, Jason; Popham, W. James – Journal of Educational Measurement, 1974
The use of the regression equation derived from the Anglo-American sample to predict grades of Mexican-American students resulted in overprediction. An examination of the standardized regression weights revealed a significant difference in the weight given to the Scholastic Aptitude Test Mathematics Score. (Author/BB)
Descriptors: Criterion Referenced Tests, Item Analysis, Predictive Validity, Scores
Peer reviewed
Allalouf, Avi; Ben-Shakhar, Gershon – Journal of Educational Measurement, 1998
Examined how coaching affects the predictive validity and fairness of scholastic aptitude tests. Coached (n=271) and uncoached (n=95) groups were compared. Comparison revealed that although coaching enhanced scores on the Israeli Psychometric Entrance Test by about 25% of a standard deviation, it did not create a prediction bias or affect…
Descriptors: College Entrance Examinations, High School Students, High Schools, Higher Education
Peer reviewed
Weitzman, R. A. – Journal of Educational Measurement, 1982
In a nonadversarial approach, the predictive validities of the Scholastic Aptitude Test (SAT) and the high school record, the effects of the selection process on validities, and the effects if colleges used a common standard of achievement were examined. Results indicate that the SAT may be a highly valid selection instrument. (Author/CM)
Descriptors: Academic Aspiration, College Admission, College Entrance Examinations, Grade Point Average
Peer reviewed
Levin, Henry M. – Journal of Educational Measurement, 1991
This book reports the results of a National Research Council study and examines the validity of generalizing from its studies of about 500 occupations to over 12,000 jobs. The use of the General Aptitude Test Battery for prediction is discussed, and its impact and recommendations for future use are considered. (SLD)
Descriptors: Aptitude Tests, Competitive Selection, Generalization, Minority Groups
Peer reviewed
Linn, Robert L.; Hastings, C. Nicholas – Journal of Educational Measurement, 1984
Using predictive validity studies of the Law School Admissions Test (LSAT) and the undergraduate grade-point average (UGPA), this study examined the large variation in the magnitude of the validity coefficients across schools. LSAT standard deviation and correlation between LSAT and UGPA accounted for 58.5 percent of the variability. (Author/EGS)
Descriptors: Academic Achievement, College Applicants, College Entrance Examinations, Grade Point Average
Peer reviewed
Sawyer, Richard; Maxey, James – Journal of Educational Measurement, 1979
College freshmen's grade point averages at 260 colleges were predicted on the basis of multiple regression equations using the four previous classes separately to compute the equations. Predictor variables were four American College Test (ACT) scores and high school grades. Predictions remained accurate over the four-year period. (Author/CTM)
Descriptors: College Entrance Examinations, College Freshmen, Grade Prediction, Grades (Scholastic)
Peer reviewed
Reschly, Daniel J.; Sabers, Darrell L. – Journal of Educational Measurement, 1979
Test bias, assessed as a departure from equal regression lines across populations, was investigated in the prediction of Metropolitan Achievement Tests scores from the Wechsler Intelligence Scale for Children--Revised. Subjects were 1,040 children in grades 1, 3, 5, 7, and 9: Anglo-American, Black, Mexican-American, and Native American Papago. (JKS)
Descriptors: Academic Achievement, Elementary Education, Intelligence Tests, Minority Group Children
Peer reviewed
Patnaik, Durgadas; Traub, Ross E. – Journal of Educational Measurement, 1973
Two conventional scores and a weighted score on a group test of general intelligence were compared for reliability and predictive validity. (Editor)
Descriptors: Correlation, Intelligence Tests, Measurement, Predictive Validity
Peer reviewed
Linn, Robert L. – Journal of Educational Measurement, 1983
When the precise basis of selection effect on correlation and regression equations is unknown but can be modeled by selection on a variable that is highly but not perfectly related to observed scores, the selection effects can lead to the commonly observed "overprediction" results in studies of predictive bias. (Author/PN)
Descriptors: Bias, Correlation, Higher Education, Prediction
Peer reviewed
Linn, Robert L. – Journal of Educational Measurement, 1984
The common approach to studies of predictive bias is analyzed within the context of a conceptual model in which predictors and criterion measures are viewed as fallible indicators of idealized qualifications. (Author/PN)
Descriptors: Certification, Models, Predictive Measurement, Predictive Validity
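As a rough illustration of the "fallible indicators" idea in the entry above, and of the overprediction pattern described in several other abstracts in this list, the following hypothetical simulation (all group means, sample sizes, and noise levels are invented for illustration, not drawn from any of the cited studies) fits a single common regression to two groups whose predictor and criterion both measure a latent qualification with error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # observations per hypothetical group

# Latent "true qualification" T differs in mean across two invented groups.
t_a = rng.normal(0.0, 1.0, n)
t_b = rng.normal(-1.0, 1.0, n)
t = np.concatenate([t_a, t_b])

# Predictor X and criterion Y are both fallible indicators of T.
x = t + rng.normal(0.0, 0.5, 2 * n)
y = t + rng.normal(0.0, 0.5, 2 * n)

# One pooled regression of Y on X: a common prediction equation for both groups.
b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)  # actual minus predicted

# The lower-scoring group's mean residual is negative (overpredicted),
# the higher-scoring group's is positive (underpredicted).
print(resid[:n].mean(), resid[n:].mean())
```

Because the pooled slope is attenuated by measurement error, predictions regress toward the grand mean, which under these assumed parameters reproduces the commonly observed over-/underprediction pattern without any selection effect at all.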
Peer reviewed
Kolen, Michael J.; Whitney, Douglas R. – Journal of Educational Measurement, 1978
Nine methods of smoothing double-entry expectancy tables (tables that relate two predictor variables to probability of attaining success on a criterion) were compared using data for entering students at 85 colleges and universities. The smoothed tables were more accurate than those based on observed relative frequencies. (Author/CTM)
Descriptors: College Entrance Examinations, Expectancy Tables, Grade Prediction, High Schools
Peer reviewed
Tallmadge, G. Kasten – Journal of Educational Measurement, 1985
Support for the validity of the equipercentile assumption is presented in contrast with the conclusion of Powers, Slaughter, and Helmick (EJ 289 091). Observed "gains" from pre- to posttests are better attributed to stakeholder bias, posttests that match curriculum content too closely, or a combination of these factors. (Author/DWH)
Descriptors: Data Interpretation, Evaluation Methods, Norm Referenced Tests, Predictive Measurement
Peer reviewed
Temp, George – Journal of Educational Measurement, 1971
Results indicate that a common prediction system is not practical and that a separate prediction system should be developed for each subgroup. (AG)
Descriptors: Black Students, College Desegregation, Grade Point Average, Predictive Measurement