Showing 1 to 15 of 36 results
Peer reviewed
Mislevy, Robert J. – Journal of Educational Measurement, 2016
Validity is the sine qua non of properties of educational assessment. While a theory of validity and a practical framework for validation have emerged over the past decades, most of the discussion has addressed familiar forms of assessment and psychological framings. Advances in digital technologies and in cognitive and social psychology have…
Descriptors: Test Validity, Technology, Cognitive Psychology, Social Psychology
Peer reviewed
Allalouf, Avi; Ben-Shakhar, Gershon – Journal of Educational Measurement, 1998
Examined how coaching affects the predictive validity and fairness of scholastic aptitude tests. Coached (n=271) and uncoached (n=95) groups were compared. Comparison revealed that although coaching enhanced scores on the Israeli Psychometric Entrance Test by about 25% of a standard deviation, it did not create a prediction bias or affect…
Descriptors: College Entrance Examinations, High School Students, High Schools, Higher Education
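A differential-prediction check of the kind described here typically regresses the criterion on test score, group membership, and their interaction; group terms near zero (relative to their sampling error) indicate no prediction bias. The sketch below uses simulated data whose only resemblance to the study is the group sizes.

# Illustrative sketch (not the authors' analysis): a regression-based check
# for prediction bias between coached and uncoached examinees. All data are
# simulated; only the group sizes are taken from the abstract.
import numpy as np

rng = np.random.default_rng(0)

n_coached, n_uncoached = 271, 95
coached = np.concatenate([np.ones(n_coached), np.zeros(n_uncoached)])
n = coached.size

# Coaching raises test scores by roughly 0.25 SD, but criterion performance
# here depends only on the score itself (i.e., no built-in bias).
score = rng.normal(500 + 25 * coached, 100)
gpa = 2.0 + 0.002 * score + rng.normal(0, 0.4, n)

# Differential-prediction check: criterion on score, group, and interaction.
X = np.column_stack([np.ones(n), score, coached, score * coached])
beta, *_ = np.linalg.lstsq(X, gpa, rcond=None)
print("intercept, score slope, group intercept shift, group slope shift:")
print(np.round(beta, 4))
# Group terms close to zero indicate no prediction bias, matching the pattern
# the abstract reports despite the coached group's higher mean score.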
Peer reviewed
Linn, Robert L.; Hastings, C. Nicholas – Journal of Educational Measurement, 1984
Using predictive validity studies of the Law School Admissions Test (LSAT) and the undergraduate grade-point average (UGPA), this study examined the large variation in the magnitude of the validity coefficients across schools. LSAT standard deviation and correlation between LSAT and UGPA accounted for 58.5 percent of the variability. (Author/EGS)
Descriptors: Academic Achievement, College Applicants, College Entrance Examinations, Grade Point Average
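A minimal sketch of why the within-school LSAT standard deviation is tied to the size of the validity coefficient: under range restriction, a school that enrolls a narrow slice of the applicant pool observes a smaller correlation. The code uses the standard direct range-restriction relationship with hypothetical numbers; it is offered as context, not as the study's model.

# Illustrative only: the standard direct range-restriction relationship
# (Thorndike Case 2), showing how a smaller within-school predictor SD
# attenuates the observed validity coefficient. Numbers are hypothetical.
import numpy as np

rho_unrestricted = 0.50                       # hypothetical applicant-pool validity
sd_ratio = np.array([1.0, 0.8, 0.6, 0.4])     # within-school SD / applicant-pool SD

# Restricted correlation as a function of the SD ratio u:
#   r = rho * u / sqrt(1 - rho**2 + rho**2 * u**2)
u = sd_ratio
r_restricted = rho_unrestricted * u / np.sqrt(1 - rho_unrestricted**2 + rho_unrestricted**2 * u**2)
for ratio, r in zip(u, r_restricted):
    print(f"SD ratio {ratio:.1f}: observed validity {r:.3f}")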
Peer reviewed
Sawyer, Richard; Maxey, James – Journal of Educational Measurement, 1979
College freshmen's grade point averages at 260 colleges were predicted from multiple regression equations computed separately for each of the four previous entering classes. Predictor variables were four American College Test (ACT) scores and high school grades. Predictions remained accurate over the four-year period. (Author/CTM)
Descriptors: College Entrance Examinations, College Freshmen, Grade Prediction, Grades (Scholastic)
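A minimal sketch of the cross-year design described above: a least-squares equation is computed for one entering class from four ACT scores and high school grades, then applied to a later class. All data and coefficients below are simulated and purely illustrative.

# Sketch of cross-year prediction of freshman GPA; simulated data only.
import numpy as np

rng = np.random.default_rng(1)

def simulate_class(n):
    acts = rng.normal(20, 5, (n, 4))              # four ACT subtest scores
    hsg = rng.normal(3.0, 0.5, n)                 # high school grades
    gpa = 0.4 + 0.03 * acts.sum(axis=1) + 0.5 * hsg + rng.normal(0, 0.4, n)
    X = np.column_stack([np.ones(n), acts, hsg])
    return X, gpa

X_old, y_old = simulate_class(500)                # earlier entering class
beta, *_ = np.linalg.lstsq(X_old, y_old, rcond=None)

X_new, y_new = simulate_class(500)                # later entering class
pred = X_new @ beta
print("mean absolute error of cross-year prediction:", round(np.mean(np.abs(pred - y_new)), 3))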
Peer reviewed
Hanna, Gerald S. – Journal of Educational Measurement, 1975
An alternative to the conventional right-wrong scoring method used on multiple-choice tests was presented. In the experiment, the examinee continued to respond to a multiple-choice item until feedback signified a correct answer. Findings showed that experimental scores were more reliable but less valid than inferred conventional scores.…
Descriptors: Feedback, Higher Education, Multiple Choice Tests, Scoring
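For context, a sketch of one common answer-until-correct scoring rule, in which credit decreases with each additional attempt. The abstract does not specify the exact rule used in the experiment, so this is only illustrative.

# One common answer-until-correct scoring rule: the examinee keeps responding
# until the correct option is reached, and credit drops with each attempt.
def auc_item_score(attempts_needed: int, n_options: int) -> int:
    """Return item credit: full credit for a first-try answer, down to zero
    when every option had to be tried."""
    if not 1 <= attempts_needed <= n_options:
        raise ValueError("attempts must be between 1 and the number of options")
    return n_options - attempts_needed

# A 4-option item answered on the second attempt earns 2 of a possible 3 points.
print(auc_item_score(attempts_needed=2, n_options=4))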
Peer reviewed
Bejar, Isaac I.; Doyle, Kenneth O. – Journal of Educational Measurement, 1976
The relationship between naturally occurring student expectations about the instructor and later student evaluations of that instructor was studied. It was found that students were capable of rating their instructors independently of expectations held prior to the course. (Author/BW)
Descriptors: Expectation, Higher Education, Rating Scales, Student Evaluation of Teacher Performance
Peer reviewed
Hartke, Alan R. – Journal of Educational Measurement, 1978
Latent partition analysis is shown to be useful in determining the conceptual homogeneity of an item population. Such item populations are useful for mastery testing. Applications of latent partition analysis in assessing content validity are suggested. (Author/JKS)
Descriptors: Higher Education, Item Analysis, Item Sampling, Mastery Tests
Peer reviewed
Linn, Robert L. – Journal of Educational Measurement, 1983
When the precise basis of selection effect on correlation and regression equations is unknown but can be modeled by selection on a variable that is highly but not perfectly related to observed scores, the selection effects can lead to the commonly observed "overprediction" results in studies of predictive bias. (Author/PN)
Descriptors: Bias, Correlation, Higher Education, Prediction
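A small simulation of the mechanism described above, assuming selection on a variable correlated about 0.8 with the observed predictor. The setup and numbers are hypothetical; the only point is to show how incidental selection alters the correlation and regression in the observed group.

# Illustrative simulation (not the paper's analysis): selection on a variable Z
# that is highly but not perfectly related to the observed predictor X changes
# the correlation and regression of criterion Y on X in the selected sample.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

t = rng.normal(0, 1, n)                      # latent standing
x = t + rng.normal(0, 0.5, n)                # observed predictor score
z = t + rng.normal(0, 0.5, n)                # variable selection is actually based on, r(X, Z) ~ 0.8
y = t + rng.normal(0, 0.5, n)                # criterion

selected = z > 1.0                           # only applicants above the cut are observed

def line(xv, yv):
    cov = np.cov(xv, yv)
    slope = cov[0, 1] / cov[0, 0]
    return slope, yv.mean() - slope * xv.mean()

print("applicant pool: r = %.3f, slope = %.3f, intercept = %.3f"
      % ((np.corrcoef(x, y)[0, 1],) + line(x, y)))
print("selected group: r = %.3f, slope = %.3f, intercept = %.3f"
      % ((np.corrcoef(x[selected], y[selected])[0, 1],) + line(x[selected], y[selected])))
# The correlation and regression shift in the selected group; the article's
# argument is that modeling selection this way can reproduce the familiar
# "overprediction" pattern in predictive-bias studies.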
Peer reviewed
Howard, George S.; And Others – Journal of Educational Measurement, 1979
Evaluations of experimental interventions which employ self-report measures are subject to contamination known as response-shift bias. Response-shift effects may be attenuated by substituting retrospective pretest ratings for the traditional self-report pretest ratings. This study indicated that the retrospective rating more accurately reflected…
Descriptors: Higher Education, Rating Scales, Response Style (Tests), Self Evaluation
Peer reviewed
Gross, Alan L.; Shulman, Vivian – Journal of Educational Measurement, 1980
The suitability of the beta binomial test model for criterion referenced testing was investigated, first by considering whether underlying assumptions are realistic, and second, by examining the robustness of the model. Results suggest that the model may have practical value. (Author/RD)
Descriptors: Criterion Referenced Tests, Goodness of Fit, Higher Education, Item Sampling
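A minimal sketch of the beta binomial model for number-correct scores, fit by the method of moments to simulated data. This illustrates the model itself, not the paper's robustness analysis.

# Beta-binomial sketch: true proportion-correct is Beta(a, b); the observed
# score on n items is Binomial(n, proportion). Data are simulated so the
# fitted parameters can be compared with the generating values.
import numpy as np

rng = np.random.default_rng(3)
n_items, n_examinees = 20, 400

true_a, true_b = 6.0, 3.0
p = rng.beta(true_a, true_b, n_examinees)
scores = rng.binomial(n_items, p)

mean, var = scores.mean(), scores.var(ddof=1)
p_hat = mean / n_items
# Var(X) = n p (1 - p) * (1 + (n - 1) * rho), where rho = 1 / (a + b + 1).
rho = (var / (n_items * p_hat * (1 - p_hat)) - 1) / (n_items - 1)
ab = 1 / rho - 1
a_hat, b_hat = p_hat * ab, (1 - p_hat) * ab
print(f"recovered a = {a_hat:.2f}, b = {b_hat:.2f} (simulated from a = {true_a}, b = {true_b})")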
Peer reviewed
Sawyer, Richard – Journal of Educational Measurement, 1996
Decision theory is a useful method for assessing the effectiveness of the components of a course placement system. The effectiveness of placement tests or other variables in identifying underprepared students is described by the conditional probability of success in a standard course. Estimating the conditional probability of success is discussed.…
Descriptors: College Students, Estimation (Mathematics), Higher Education, Mathematical Models
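One common way to estimate the conditional probability of success from a placement score is a logistic regression; the sketch below fits one by Newton-Raphson to simulated data. The score scale, success definition, and cutoffs are hypothetical and are not taken from the article.

# Estimating P(success in the standard course | placement score) with a
# simple logistic regression; simulated data, illustrative only.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
score = rng.normal(50, 10, n)                          # placement test score
p_true = 1 / (1 + np.exp(-(0.15 * (score - 50))))      # true success probability
success = rng.binomial(1, p_true)                      # 1 = success (e.g., B or better)

X = np.column_stack([np.ones(n), score])
beta = np.zeros(2)
for _ in range(25):                                     # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (success - p)
    hess = -(X * (p * (1 - p))[:, None]).T @ X
    beta -= np.linalg.solve(hess, grad)

for s in (40, 50, 60):
    prob = 1 / (1 + np.exp(-(beta[0] + beta[1] * s)))
    print(f"estimated P(success | score = {s}) = {prob:.2f}")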
Peer reviewed
Ward, William C.; And Others – Journal of Educational Measurement, 1980
Free response and machine-scorable versions of a test called Formulating Hypotheses were compared with respect to construct validity. Results indicate that the different forms involve different cognitive processes and measure different qualities. (Author/JKS)
Descriptors: Cognitive Processes, Cognitive Tests, Higher Education, Personality Traits
Peer reviewed
Kolen, Michael J.; Whitney, Douglas R. – Journal of Educational Measurement, 1978
Nine methods of smoothing double-entry expectancy tables (tables that relate two predictor variables to probability of attaining success on a criterion) were compared using data for entering students at 85 colleges and universities. The smoothed tables were more accurate than those based on observed relative frequencies. (Author/CTM)
Descriptors: College Entrance Examinations, Expectancy Tables, Grade Prediction, High Schools
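For illustration, one simple way to smooth a double-entry expectancy table is to shrink each cell's observed success rate toward the overall rate, with more shrinkage for cells built on few students. This is offered only as an example of smoothing and is not necessarily among the nine methods compared in the study; the table values are invented.

# Smoothing a hypothetical double-entry expectancy table by shrinking cell
# success rates toward the overall rate (more shrinkage for small cells).
import numpy as np

# Rows: high-school-grade bands; columns: test-score bands.
successes = np.array([[ 2,  5, 11], [ 6, 14, 25], [ 9, 30, 57]])
counts    = np.array([[ 6, 10, 15], [12, 22, 32], [13, 38, 64]])

observed = successes / counts
overall = successes.sum() / counts.sum()

k = 10.0                                    # prior weight (pseudo-count); a tuning choice
smoothed = (successes + k * overall) / (counts + k)

np.set_printoptions(precision=2)
print("observed cell success rates:\n", observed)
print("smoothed cell success rates:\n", smoothed)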
Peer reviewed
Chalifour, Clark L.; Powers, Donald E. – Journal of Educational Measurement, 1989
Content characteristics of 1,400 Graduate Record Examination (GRE) analytical reasoning items were coded for item difficulty and discrimination. The results provide content characteristics for consideration in extending specifications for analytical reasoning items and a better understanding of the construct validity of these items. (TJH)
Descriptors: College Entrance Examinations, Construct Validity, Content Analysis, Difficulty Level
Peer reviewed
Prediger, Dale; Hanson, Gary – Journal of Educational Measurement, 1977
Raw-score reports of vocational interest, personality traits, and other psychological constructs are coming into common use. Using college seniors' scores on the American College Test Interest Inventory, criterion-related validity of standard scores based on same-sex and combined-sex norms was equal to or greater than that of raw scores.…
Descriptors: Higher Education, Interest Inventories, Majors (Students), Norms
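A sketch of the scoring contrast being compared: raw scores converted to standard scores using same-sex norms versus a single combined-sex norm group. The norm means and standard deviations below are invented for illustration and are not from the inventory.

# Converting raw interest-scale scores to standard scores under two norming
# choices; all norm values and raw scores are hypothetical.
import numpy as np

raw = np.array([34, 41, 27, 50], dtype=float)          # raw scale scores for four examinees
sex = np.array(["F", "M", "F", "M"])

norms = {                                              # hypothetical norm-group means and SDs
    "F": (30.0, 8.0),
    "M": (38.0, 9.0),
    "combined": (34.0, 9.5),
}

def standardize(raw_scores, groups, norm_key=None):
    """Standard scores from same-sex norms (norm_key=None) or one shared norm group."""
    if norm_key is not None:
        mean, sd = norms[norm_key]
        return (raw_scores - mean) / sd
    means = np.array([norms[g][0] for g in groups])
    sds = np.array([norms[g][1] for g in groups])
    return (raw_scores - means) / sds

print("same-sex standard scores:    ", np.round(standardize(raw, sex), 2))
print("combined-sex standard scores:", np.round(standardize(raw, sex, "combined"), 2))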