Showing 1 to 15 of 16 results
Peer reviewed
PDF on ERIC: Download full text
Liu, Jinghua; Zu, Jiyun; Curley, Edward; Carey, Jill – ETS Research Report Series, 2014
The purpose of this study is to investigate the impact of discrete anchor items versus passage-based anchor items on observed score equating using empirical data. The study compares an "SAT"® critical reading anchor that contains proportionally more discrete items, relative to the total tests to be equated, with another anchor that…
Descriptors: Equated Scores, Test Items, College Entrance Examinations, Comparative Analysis
Huang, Xiaoting – ProQuest LLC, 2010
In recent decades, the use of large-scale standardized international assessments has increased drastically as a way to evaluate and compare the quality of education across countries. In order to make valid international comparisons, the primary requirement is to ensure the measurement equivalence between the different language versions of these…
Descriptors: Test Bias, Comparative Testing, Foreign Countries, Measurement
Peer reviewed
Direct link
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
Peer reviewed
Direct link
Kato, Kentaro; Moen, Ross E.; Thurlow, Martha L. – Educational Measurement: Issues and Practice, 2009
Large data sets from a state reading assessment for third and fifth graders were analyzed to examine differential item functioning (DIF), differential distractor functioning (DDF), and differential omission frequency (DOF) between students with particular categories of disabilities (speech/language impairments, learning disabilities, and emotional…
Descriptors: Learning Disabilities, Language Impairments, Behavior Disorders, Affective Behavior
Peer reviewed
Direct link
Coe, Robert – Oxford Review of Education, 2008
The comparability of examinations in different subjects has been a controversial topic for many years and a number of criticisms have been made of statistical approaches to estimating the "difficulties" of achieving particular grades in different subjects. This paper argues that if comparability is understood in terms of a linking…
Descriptors: Test Items, Grades (Scholastic), Foreign Countries, Test Bias
Peer reviewed
Ilai, Doron; Willerman, Lee – Intelligence, 1989
Items showing sex differences on the revised Wechsler Adult Intelligence Scale (WAIS-R) were studied. In a sample of 206 young adults (110 males and 96 females), 15 items demonstrated significant sex differences, but there was no relationship of item-specific gender content to sex differences in item performance. (SLD)
Descriptors: Comparative Testing, Females, Intelligence Tests, Item Analysis
Peer reviewed
PDF on ERIC: Download full text
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to differential item functioning (DIF), observed here as differences in item difficulty between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Peer reviewed
Chipman, Susan F.; And Others – American Educational Research Journal, 1991
The effects of problem content on mathematics word problem performance were explored for 128 male and 128 female college students solving problems with masculine, feminine, and neutral (familiar and unfamiliar) cover stories. No effect of sex typing was found, and a small, but highly significant, effect was found for familiarity. (SLD)
Descriptors: College Students, Comparative Testing, Familiarity, Females
Pine, Steven M.; Weiss, David J. – 1978
This report examines how selection fairness is influenced by the characteristics of a selection instrument: its distribution of item difficulties, level of item discrimination, degree of item bias, and testing strategy. Computer simulation was used to administer either a conventional or a Bayesian adaptive ability test to a…
Descriptors: Adaptive Testing, Bayesian Statistics, Comparative Testing, Computer Assisted Testing
Peer reviewed
Engelhard, George, Jr. – Contemporary Educational Psychology, 1990
The relationship between gender and performance on mathematics items varying in level of cognitive complexity and content was assessed, using 1,789 female and 1,951 male Thai adolescents and 2,040 female and 1,884 male American adolescents. Data suggest that performance relative to both cognitive complexity and content is related to gender. (TJH)
Descriptors: Adolescents, Cognitive Ability, Comparative Testing, Cross Cultural Studies
Peer reviewed
PDF on ERIC: Download full text
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question-answering strategies. In this project, 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
Owen, K. – 1989
Sources of item bias located in characteristics of the test item were studied in a reasoning test developed in South Africa. Subjects were 1,056 White, 1,063 Indian, and 1,093 Black students from standard 7 in Afrikaans and English schools. Format and content of the 85-item Reasoning Test were manipulated to obtain information about bias or…
Descriptors: Afrikaans, Black Students, Cognitive Tests, Comparative Testing
Peer reviewed
Armstrong, Anne-Marie – Educational Measurement: Issues and Practice, 1993
The effects on test performance of differentially written multiple-choice tests and of test takers' cognitive style were studied for 47 graduate students and 35 public school and college teachers. Adhering to item-writing guidelines resulted in essentially the same mean scores for two groups of differing cognitive style. (SLD)
Descriptors: Cognitive Style, College Faculty, Comparative Testing, Graduate Students
Kulick, Edward; Hu, P. Gillian – 1989
The relationship of differential item functioning (DIF) to item difficulty on the Scholastic Aptitude Test (SAT) was examined, based on data from nine recent administrations of the test from June 1986 through December 1987. This pool of information includes item statistics on 765 verbal and 540 mathematical items computed for subgroups of White,…
Descriptors: Asian Americans, Black Students, College Bound Students, College Entrance Examinations
Coffman, William E. – 1978
The Iowa Tests of Basic Skills were administered to over 600 Black and White students in grades six through nine to determine whether the tests showed bias against minorities. Outliers were identified from test results. Outliers are items that differ from the central core of test items because they fall outside the range expected from a random…
Descriptors: Achievement Tests, Basic Skills, Black Students, Comparative Testing