Showing all 15 results
Peer reviewed
Bolt, Daniel M.; Liao, Xiangyi – Journal of Educational Measurement, 2021
We revisit the empirically observed positive correlation between DIF and difficulty studied by Freedle and commonly seen in tests of verbal proficiency when comparing populations of different mean latent proficiency levels. It is shown that a positive correlation between DIF and difficulty estimates is actually an expected result (absent any true…
Descriptors: Test Bias, Difficulty Level, Correlation, Verbal Tests
Peer reviewed
Sheehan, Kathleen M. – Journal of Educational Measurement, 1997
A new procedure is proposed for generating instructionally relevant diagnostic feedback. The approach involves constructing a strong model of student proficiency and then testing whether individual students' observed item-response vectors are consistent with that model. The approach is applied to the Scholastic Assessment Test's verbal reasoning…
Descriptors: Academic Achievement, Educational Assessment, Educational Diagnosis, Feedback
Peer reviewed
Smith, Richard M. – Journal of Educational Measurement, 1987
Partial knowledge was assessed in a multiple-choice vocabulary test. Test reliability and concurrent validity were compared using Rasch-based dichotomous and polychotomous scoring models. Results supported the polychotomous scoring model and moderately supported J. O'Connor's theory of vocabulary acquisition. (Author/GDC)
Descriptors: Adults, Higher Education, Knowledge Level, Latent Trait Theory
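The distinction Smith tests can be illustrated with a minimal sketch (not Smith's actual Rasch models): dichotomous scoring credits only the keyed answer, while a polytomous scheme awards partial credit for distractors that reflect partial knowledge of the word. The item, key, and category weights below are invented for illustration.

```python
# Hypothetical category weights for one vocabulary item: the key ("A")
# earns 2, a near-synonym distractor ("B") earns 1, and unrelated
# distractors earn 0. Weights are assumed, for illustration only.
weights = {"A": 2, "B": 1, "C": 0, "D": 0}

def score_dichotomous(response, key="A"):
    """1 if the keyed answer was chosen, else 0."""
    return 1 if response == key else 0

def score_polytomous(response, weights=weights):
    """Weighted score reflecting partial knowledge."""
    return weights[response]

responses = ["A", "B", "C", "A", "D"]
print([score_dichotomous(r) for r in responses])  # [1, 0, 0, 1, 0]
print([score_polytomous(r) for r in responses])   # [2, 1, 0, 2, 0]
```

The polytomous scores separate the examinee who chose the near-synonym from one who chose an unrelated distractor, information the dichotomous scores discard.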
Peer reviewed
Green, Bert F.; And Others – Journal of Educational Measurement, 1989
A method of analyzing test item responses is advocated to examine differential item functioning through distractor choices of those answering an item incorrectly. The analysis uses log-linear models of a three-way contingency table, and is illustrated in an analysis of the verbal portion of the Scholastic Aptitude Test. (TJH)
Descriptors: College Entrance Examinations, Distractors (Tests), Evaluation Methods, High School Students
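Green et al. fit log-linear models to a three-way (group x item x distractor) contingency table; a minimal related check, sketched below with invented counts, is a two-way chi-square test of independence between group membership and distractor choice among examinees who answered one item incorrectly.

```python
# Sketch of a distractor analysis for one item, restricted to examinees
# who answered it incorrectly. Counts are invented for illustration; the
# full log-linear analysis of the three-way table is not shown.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: reference group, focal group; columns: distractors B, C, D.
table = np.array([
    [120, 60, 20],   # reference-group wrong answers by distractor
    [ 60, 90, 50],   # focal-group wrong answers by distractor
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
# dof = (2-1) * (3-1) = 2; a small p suggests the two groups are drawn
# to different distractors, a signature of differential item
# functioning visible in the wrong answers alone.
```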
Peer reviewed
Angoff, William H. – Journal of Educational Measurement, 1989
Data from 10,000 college applicants, who had taken the Scholastic Aptitude Test-Verbal, were used to assess claims that guessing improves formula scores. Number of items attempted, a guessing index, the formula score, and an approximation of an ability score were obtained. Success in guessing is proportional to examinee ability. (TJH)
Descriptors: Academic Ability, College Applicants, College Entrance Examinations, Guessing (Tests)
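The formula score Angoff examines is the classic rights-minus-a-fraction-of-wrongs correction used on the SAT of that era: for k-option items the penalty is 1/(k-1), so purely random guessing contributes zero in expectation, and omits cost nothing. A short sketch:

```python
def formula_score(rights, wrongs, options=5):
    """R - W/(k-1): the guessing-corrected formula score."""
    return rights - wrongs / (options - 1)

# An examinee with 40 right, 12 wrong, 8 omitted on 5-option items:
print(formula_score(40, 12))  # 37.0
# Pure random guessing on 20 five-option items (4 right, 16 wrong in
# expectation) nets zero:
print(formula_score(4, 16))   # 0.0
```

Angoff's finding that success in guessing is proportional to ability is exactly what this correction cannot capture: informed partial-knowledge guesses beat the 1/k chance rate the penalty assumes.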
Peer reviewed
Zeidner, Moshe – Journal of Educational Measurement, 1988
A feedback inventory was administered to 259 seventh graders in Israel immediately following standardized group scholastic ability testing procedures to determine the subjects' attitudes toward the procedures. Few meaningful differences appeared across sociocultural or gender groups. Test attitudes did correlate with scores. (TJH)
Descriptors: Academic Ability, Cultural Differences, Feedback, Grade 7
Peer reviewed
Wen, Shih-Sung – Journal of Educational Measurement, 1975
The relationship between students' scores on a verbal meaning test and their degrees of confidence in item responses was investigated. Subjects were Black undergraduate students, who were administered a verbal meaning test following a confidence testing procedure. (Author/BJG)
Descriptors: Blacks, Confidence Testing, Higher Education, Language Skills
Peer reviewed
Dorans, Neil J.; Kingston, Neal M. – Journal of Educational Measurement, 1985
Since the Graduate Record Examination-Verbal measures two factors (reading comprehension and discrete verbal ability), the unidimensionality assumption of item response theory is violated. The impact of this violation was examined by comparing three ability estimates: reading, discrete, and all verbal. The two dimensions were highly correlated; the impact was…
Descriptors: College Entrance Examinations, Factor Structure, Graduate Study, Higher Education
Peer reviewed
Dorans, Neil J.; Livingston, Samuel A. – Journal of Educational Measurement, 1987
This study investigated the hypothesis that females who score high on the Mathematical portion of the Scholastic Aptitude Test do so because they have high verbal skills, whereas some males score high on the mathematics portion despite relatively low verbal skills. Evidence both for and against the hypothesis was observed. (Author/JAZ)
Descriptors: College Entrance Examinations, Females, High Schools, Hypothesis Testing
Peer reviewed
Embretson, Susan; And Others – Journal of Educational Measurement, 1986
This study examined the influence of processing strategies, and the metacomponents that determine when to apply them, on the construct validity of a verbal reasoning test. A rule-oriented strategy, an association strategy, and a partial rule strategy were examined. All three strategies contributed to individual differences in verbal reasoning.…
Descriptors: Cognitive Processes, Elementary Secondary Education, Error of Measurement, Latent Trait Theory
Peer reviewed
Bennett, Randy Elliot; And Others – Journal of Educational Measurement, 1987
To identify broad classes of items on the Scholastic Aptitude Test that behave differentially for handicapped examinees taking special, extended-time administrations, the performance of nine handicapped groups and one nonhandicapped group on each of two forms of the SAT was investigated through a two-stage procedure. (Author/LMO)
Descriptors: College Entrance Examinations, Disabilities, Hearing Impairments, High Schools
Peer reviewed
Breland, Hunter M.; Gaynor, Judith L. – Journal of Educational Measurement, 1979
Over 2,000 writing samples were collected from four undergraduate institutions and compared, where possible, with scores on a multiple-choice test. High correlations between ratings of the writing samples and multiple-choice test scores were obtained. Samples contributed substantially to the prediction of both college grades and writing…
Descriptors: Achievement Tests, Comparative Testing, Correlation, Essay Tests
Peer reviewed
Freedle, Roy; Kostin, Irene – Journal of Educational Measurement, 1990
The importance of item difficulty (equated delta) was explored as a predictor of differential item functioning of Black versus White examinees for 4 verbal item types using 13 Graduate Record Examination forms and 11 Scholastic Aptitude Test forms. Several significant racial differences were found. (TJH)
Descriptors: Black Students, College Bound Students, College Entrance Examinations, Comparative Testing
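The "equated delta" in Freedle and Kostin's study builds on the ETS delta index of item difficulty: an inverse-normal transform of the proportion answering incorrectly, rescaled to mean 13 and standard deviation 4, so harder items get higher deltas. A sketch of the base index (the cross-form equating step is not shown):

```python
# Sketch of the ETS delta difficulty index; equating deltas across test
# forms is a further step omitted here.
from scipy.stats import norm

def delta(p_correct):
    """ETS delta: 13 + 4 * z(proportion incorrect). Higher = harder."""
    return 13.0 + 4.0 * norm.ppf(1.0 - p_correct)

print(round(delta(0.5), 1))   # 13.0 (an item half the group answers right)
print(round(delta(0.84), 1))  # ~9.0 (easy item)
print(round(delta(0.16), 1))  # ~17.0 (hard item)
```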
Peer reviewed
Sireci, Stephen G.; And Others – Journal of Educational Measurement, 1991
Calculating the reliability of a testlet-based test is demonstrated using data from 1,812 males and 2,216 females taking the Scholastic Aptitude Test verbal section and 3,866 examinees taking another reading test. Traditional reliabilities calculated on reading comprehension tests constructed of four testlets provided substantial overestimates.…
Descriptors: College Entrance Examinations, Equations (Mathematics), Estimation (Mathematics), High School Students
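The overestimation Sireci et al. report arises because Cronbach's alpha assumes locally independent items, while items nested in a common passage share variance. A toy sketch with invented data (items made perfectly dependent within testlets to exaggerate the effect) shows item-level alpha exceeding alpha computed on testlet scores:

```python
# Toy demonstration, not the authors' method: compare Cronbach's alpha
# computed over individual items vs over testlet (passage) totals.
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of part variances / total variance)."""
    k = scores.shape[1]
    part_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - part_vars / total_var)

# Six examinees, two 3-item testlets; items within a testlet duplicate
# each other (extreme local dependence, invented data).
a = np.array([0, 1, 2, 3, 4, 5], dtype=float)  # testlet 1 item score
b = np.array([1, 0, 2, 3, 5, 4], dtype=float)  # testlet 2 item score
items = np.column_stack([a, a, a, b, b, b])    # six "items"
testlets = np.column_stack([3 * a, 3 * b])     # two testlet scores

print(round(cronbach_alpha(items), 3))     # inflated item-level alpha
print(round(cronbach_alpha(testlets), 3))  # lower testlet-level alpha
```

Treating the testlet as the unit of analysis removes the shared passage variance from the numerator, which is why the item-level figure overstates reliability.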
Peer reviewed
Madaus, George F.; Rippey, Robert M. – Journal of Educational Measurement, 1966
The validity of the multiple-choice Sequential Tests of Educational Progress (STEP) Writing Test (1957) was tested by the University of Chicago Center for the Cooperative Study of Instruction. Seven criteria developed by the center to score essay assignments were used to determine the relationship between STEP and actual writing behavior. Of the…
Descriptors: Communication (Thought Transfer), Educational Testing, English Instruction, Evaluation Criteria