Showing 1 to 15 of 23 results
Peer reviewed
Bolt, Daniel M.; Liao, Xiangyi – Journal of Educational Measurement, 2021
We revisit the empirically observed positive correlation between DIF and difficulty studied by Freedle and commonly seen in tests of verbal proficiency when comparing populations of different mean latent proficiency levels. It is shown that a positive correlation between DIF and difficulty estimates is actually an expected result (absent any true…
Descriptors: Test Bias, Difficulty Level, Correlation, Verbal Tests
Peer reviewed
PDF on ERIC
Alpayar, Cagla; Gulleroglu, H. Deniz – Educational Research and Reviews, 2017
The aim of this research is to determine whether students' test performance and approaches to test questions change based on the type of mathematics questions (visual or verbal) administered to them. This research is based on a mixed-design model. The quantitative data are gathered from 297 seventh grade students, attending seven different middle…
Descriptors: Foreign Countries, Middle School Students, Grade 7, Student Evaluation
Peer reviewed
Alexander, Patricia A.; Singer, Lauren M.; Jablansky, Sophie; Hattan, Courtney – Journal of Educational Psychology, 2016
This study investigated the relational reasoning capabilities of older adolescents and young adults when the focal assessment was a verbal and more schooled measure than one that was figural and more novel in its configuration. To achieve this end, the verbal test of relational reasoning (vTORR) was constructed to parallel the test of relational…
Descriptors: Thinking Skills, Adolescents, Young Adults, Cognitive Ability
Dorans, Neil J. – Educational Testing Service, 2010
Santelices and Wilson (2010) claimed to have addressed technical criticisms of Freedle (2003) presented in Dorans (2004a) and elsewhere. Santelices and Wilson's abstract claimed that their study confirmed that SAT® verbal items do function differently for African American and White subgroups. In this commentary, I demonstrate that the…
Descriptors: College Entrance Examinations, Verbal Tests, Test Bias, Test Items
Peer reviewed
Santelices, Maria Veronica; Wilson, Mark – Harvard Educational Review, 2010
In 2003, the "Harvard Educational Review" published a controversial article by Roy Freedle that claimed bias against African American students in the SAT college admissions test. Freedle's work stimulated national media attention and faced an onslaught of criticism from experts at the Educational Testing Service (ETS), the agency…
Descriptors: College Entrance Examinations, Test Bias, Test Items, Difficulty Level
Peer reviewed
Allalouf, Avi; Rapp, Joel; Stoller, Reuven – International Journal of Testing, 2009
When a test is adapted from a source language (SL) into a target language (TL), the two forms are usually not psychometrically equivalent. If linking between test forms is necessary, those items that have had their psychometric characteristics altered by the translation (differential item functioning [DIF] items) should be eliminated from the…
Descriptors: Test Items, Test Format, Verbal Tests, Psychometrics
Peer reviewed
Rocklin, Thomas; O'Donnell, Angela M. – Journal of Educational Psychology, 1987
An experiment was conducted that contrasted a variant of computerized adaptive testing, self-adapted testing, with two traditional tests. Participants completed a self-report of test anxiety and were randomly assigned to take one of the three tests of verbal ability. Subjects generally chose more difficult items as the test progressed. (Author/LMO)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
Peer reviewed
Van der Ven, Ad H. G. S. – Educational and Psychological Measurement, 1992
The dichotomous Rasch model was applied to verbal subtest scores on the Intelligence Structure Test Battery for 905 12- to 15-year-old secondary school students in the Netherlands. Results suggest that, if any factor is used to increase difficulty of items, that factor should be used on all items. (SLD)
Descriptors: Difficulty Level, Foreign Countries, Intelligence Tests, Secondary Education
Peer reviewed
Gitomer, Drew H.; And Others – Journal of Educational Psychology, 1987
Processing of verbal analogies was evaluated by recording eye fixation patterns during solution of problems that represented a broad range of difficulty. Findings on easier problems replicated previous work. On difficult items, high verbal ability individuals adapted processing strategies to a greater extent than did low ability students.…
Descriptors: Analogy, Difficulty Level, Eye Fixations, Higher Education
Adams, Richard; And Others – 1993
The purpose of this study was to determine whether it is both possible and cost-effective to revise middle-difficulty Graduate Record Examinations (GRE) discrete items in order to produce items of higher or lower difficulty. The basic procedure was to select items of a given difficulty and, by revising the distractors, make them easier or more…
Descriptors: Analogy, College Entrance Examinations, Cost Effectiveness, Difficulty Level
Freedle, Roy; Kostin, Irene – 1988
The first of two studies reported examined the factors that predict differences in item responses for black and white matched examinees to analogies on the Graduate Record Examinations (GRE). Data were taken from 13 forms of the GRE Verbal Test, with a median sample size of 21,000 whites and a median sample size of 1,400 blacks for the purpose of…
Descriptors: Black Students, Difficulty Level, Ethnic Groups, Item Bias
Scheuneman, Janice; And Others – 1991
To help increase the understanding of sources of difficulty in test items, a study was undertaken to evaluate the effects of various aspects of prose complexity on the difficulty of achievement test items. The items of interest were those that presented a verbal stimulus followed by a question about the stimulus and a standard set of…
Descriptors: Achievement Tests, Difficulty Level, Goodness of Fit, Knowledge Level
O'Neill, Kathleen A.; And Others – 1993
The purpose of this study was to identify differentially functioning items on operational administrations of the Graduate Management Admission Test (GMAT) through the use of the Mantel-Haenszel statistic. Retrospective analyses of data collected over 3 years are reported for black/white and female/male comparisons for the Verbal and Quantitative…
Descriptors: Black Students, Classification, College Entrance Examinations, Difficulty Level
Pine, Steven M. – 1976
Latent trait theory is used as a basis for a definition of item bias, and an experiment is described in which a test of vocabulary items was not found to be biased when administered to two groups of high school students, one group of 58 black students and the other 168 white students. The experiment for the detection of bias began with separate…
Descriptors: Culture Fair Tests, Difficulty Level, Factor Analysis, Item Analysis
Schrader, William B. – 1984
Each of the three studies in this report examines a different aspect of the basic question of what the four item types (analogies, antonyms, reading comprehension, and sentence completion) on the Scholastic Aptitude Test (SAT) verbal section are measuring and of whether a change in the relative emphasis on the various item types would enhance the…
Descriptors: Achievement Tests, Analogy, Class Rank, College Entrance Examinations