Akour, Mutasem; Sabah, Saed; Hammouri, Hind – Journal of Psychoeducational Assessment, 2015
The purpose of this study was to apply two types of Differential Item Functioning (DIF), net and global DIF, as well as the framework of Differential Step Functioning (DSF) to real testing data to investigate measurement invariance related to test language. Data from the Program for International Student Assessment (PISA)-2006 polytomously scored…
Descriptors: Test Bias, Science Tests, Test Items, Scoring
Benítez, Isabel; Padilla, José-Luis – Journal of Mixed Methods Research, 2014
Differential item functioning (DIF) can undermine the validity of cross-lingual comparisons. Although many efficient statistics for detecting DIF are available, few general findings explain why DIF arises. The objective of the article was to study DIF sources by using a mixed method design. The design involves a quantitative phase…
Descriptors: Foreign Countries, Mixed Methods Research, Test Bias, Cross Cultural Studies
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
Paek, Insu; Guo, Hongwen – Applied Psychological Measurement, 2011
This study examined how much improvement was attainable in the accuracy of differential item functioning (DIF) measures and in DIF detection rates for the Mantel-Haenszel procedure when the focal and reference groups have notably unbalanced sample sizes, with the focal group fixed at a small sample that does not satisfy the minimum…
Descriptors: Test Bias, Accuracy, Reference Groups, Investigations
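The Mantel-Haenszel procedure that Paek and Guo examine compares correct/incorrect counts for the reference and focal groups within matched-score strata. A minimal sketch of the core computation follows; the function and the counts are invented for illustration and are not taken from their study:

```python
from math import log

def mantel_haenszel_dif(strata):
    """Mantel-Haenszel common odds ratio and the ETS MH D-DIF statistic
    from per-stratum 2x2 tables of correct/incorrect by group.
    Keys: rc/rw = reference correct/wrong, fc/fw = focal correct/wrong.
    (Illustrative helper; names and data are our own.)"""
    num = den = 0.0
    for s in strata:
        t = s["rc"] + s["rw"] + s["fc"] + s["fw"]  # stratum total
        num += s["rc"] * s["fw"] / t
        den += s["rw"] * s["fc"] / t
    alpha = num / den              # common odds ratio across strata
    mh_d_dif = -2.35 * log(alpha)  # ETS delta scale; negative favors reference
    return alpha, mh_d_dif

# Two matched-score strata; the item favors the reference group in both.
strata = [
    {"rc": 40, "rw": 10, "fc": 30, "fw": 20},
    {"rc": 45, "rw": 5,  "fc": 35, "fw": 15},
]
alpha, d = mantel_haenszel_dif(strata)
```

With a small focal group, the per-stratum focal counts that feed `num` and `den` become unstable, which is exactly the regime the study investigates.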
French, Brian F.; Finch, W. Holmes – Journal of Educational Measurement, 2010
The purpose of this study was to examine the performance of differential item functioning (DIF) assessment in the presence of a multilevel structure that often underlies data from large-scale testing programs. Analyses were conducted using logistic regression (LR), a popular, flexible, and effective tool for DIF detection. Data were simulated…
Descriptors: Test Bias, Testing Programs, Evaluation, Measurement
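The logistic regression DIF detector that French and Finch use can be sketched as a likelihood-ratio comparison between a model predicting item success from the matching score alone and one that adds group membership. Everything below (the bare-bones gradient-ascent fitter and the toy data) is an illustrative assumption, not their simulation design:

```python
import math

def fit_logistic(X, y, steps=2000, lr=0.5):
    """Full-batch gradient ascent for logistic regression.
    Returns (coefficients, maximized log-likelihood).
    A bare-bones fit for illustration -- no convergence checks."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(steps):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            pi = 1.0 / (1.0 + math.exp(-z))
            for j in range(p):
                grad[j] += (yi - pi) * xi[j]
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi))
        pi = 1.0 / (1.0 + math.exp(-z))
        ll += yi * math.log(pi) + (1 - yi) * math.log(1.0 - pi)
    return w, ll

# Toy data: at every centered matching-score level, 80% of the reference
# group (g=0) but only 50% of the focal group (g=1) answer correctly.
X, y = [], []
for s in (-2, -1, 0, 1, 2):
    for g, n_correct in ((0, 8), (1, 5)):
        for i in range(10):
            X.append([1.0, float(s), float(g)])
            y.append(1 if i < n_correct else 0)

# Uniform DIF test: compact (intercept + score) vs augmented (+ group).
w0, ll0 = fit_logistic([xi[:2] for xi in X], y)
w1, ll1 = fit_logistic(X, y)
g2 = 2.0 * (ll1 - ll0)  # likelihood-ratio chi-square, 1 df
```

A `g2` above the 3.84 critical value (chi-square, 1 df, alpha = .05) flags the item; the multilevel complication the study addresses is that clustered examinees inflate this statistic.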
Dorans, Neil J.; Liu, Jinghua – Educational Testing Service, 2009
The equating process links scores from different editions of the same test. For testing programs that build nearly parallel forms to the same explicit content and statistical specifications, and administer the forms under the same conditions, the linkings between the forms are expected to be equatings. Score equity assessment (SEA) provides a useful…
Descriptors: Testing Programs, Mathematics Tests, Quality Control, Psychometrics
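Equating, the subject of the Dorans and Liu report, maps scores from one form onto the scale of another. A deliberately simple instance is linear equating, which matches means and standard deviations; the SEA machinery in the report is more elaborate, so treat the following only as a sketch with made-up scores:

```python
import statistics

def linear_equate(x_scores, y_scores):
    """Return a function mapping Form X scores onto the Form Y scale by
    matching means and standard deviations (linear equating -- one of the
    simplest linking methods, not the SEA procedure in the report)."""
    mx, my = statistics.fmean(x_scores), statistics.fmean(y_scores)
    sx, sy = statistics.pstdev(x_scores), statistics.pstdev(y_scores)
    return lambda x: my + (sy / sx) * (x - mx)

form_x = [10, 12, 14, 16, 18]  # scores on the new form (harder)
form_y = [12, 14, 16, 18, 20]  # scores on the reference form
eq = linear_equate(form_x, form_y)
```

Here the forms differ only in mean, so the equating function shifts every Form X score up by two points.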
Green, Donald Ross; Yen, Wendy M. – 1983
The Comprehensive Tests of Basic Skills, Form U, is scored in two ways: number-correct and pattern. The latter makes use of the information about which particular items are answered correctly, giving more weight to the more discriminating items and making allowances for guessing. Critics have suggested that black students are penalized by pattern…
Descriptors: Basic Skills, Black Students, Elementary Education, Guessing (Tests)
Allina, Amy; And Others – 1987
Seven schools that have re-evaluated their needs for standardized college admissions examinations were studied to explore their admissions and innovative testing policies. The schools include: (1) Bates College in Lewiston, Maine; (2) Bowdoin College in Brunswick, Maine; (3) Harvard Graduate School of Business Administration in Cambridge,…
Descriptors: Admission Criteria, College Admission, College Entrance Examinations, Colleges
Hall, Carroll L. – 1984
Based on 18 months of extensive research and study, the New Mexico State Department of Education developed the Staff Accountability Plan to address the issue of teacher accountability and certification. One of the provisions of the plan (a written assessment of general and professional knowledge for initial certification) will be fulfilled by…
Descriptors: Beginning Teachers, Cutting Scores, Elementary Secondary Education, State Programs
Green, Donald Ross – 1987
Differential functioning of males and females on achievement test items was studied in a sample of 110,000 students in kindergarten through grade 12. First, item parameters and ability estimates for these parameters were obtained from LOGIST computer program runs for the entire group. Predicted performance for each item for each group was…
Descriptors: Achievement Tests, Educational Testing, Elementary Secondary Education, Item Analysis
van der Linden, Wim J.; Ariel, Adelaide; Veldkamp, Bernard P. – Journal of Educational and Behavioral Statistics, 2006
Test-item writing efforts typically result in item pools with an undesirable correlational structure between the content attributes of the items and their statistical information. If such pools are used in computerized adaptive testing (CAT), the algorithm may be forced to select items with less-than-optimal information that violate the content…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Item Banks
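The van der Linden, Ariel, and Veldkamp entry concerns CAT item selection under content constraints. Its unconstrained core, greedy maximum-information selection, can be sketched as follows; the 2PL information function is standard, but the pool values are made up and the content-constraint machinery from the article is omitted:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_item(theta, pool, administered):
    """Greedy maximum-information CAT selection (illustration only;
    ignores the content constraints the article is about)."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: info_2pl(theta, *pool[i]))

pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.1), (1.0, 2.0)]  # (a, b) pairs
first = pick_item(0.0, pool, set())  # item nearest in difficulty, high a
```

Because information peaks where difficulty matches ability and grows with discrimination squared, the greedy rule keeps drawing the high-`a` items, which is how the correlational structure in the pool ends up forcing constraint violations.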
Wendler, Cathy; Feigenbaum, Miriam; Escandón, Mérida – College Entrance Examination Board, 2001
The SAT Program undertook two studies aimed at evaluating the impact of allowing students to indicate more than one ethnic/racial category. Results of this study indicated that there is little impact on DIF [differential item functioning] analyses when different definitions of ethnic/racial classifications are used compared to traditionally…
Descriptors: Group Membership, Definitions, Cluster Grouping, Racial Differences
Hill, Kennedy T. – 1980
This program of research has three general thrusts. First, the relations between motivation and achievement performance were studied across children of various sociocultural backgrounds, including lower- and middle-class white, black, and Hispanic children. Motivational test bias was found to be strong for students of all sociocultural…
Descriptors: Achievement Tests, Elementary Education, Motivation, Racial Differences
Ferrara, Steven F. – 1987
The necessity of controlling the order in which trained essay raters for a statewide writing assessment program receive student essays was studied. The underlying theoretical question concerns possible rater bias caused by raters reading long strings of essays of homogeneous quality; this problem is usually referred to as context effect or…
Descriptors: Context Effect, Essay Tests, Evaluators, Graduation Requirements
Massachusetts State Dept. of Education, Boston. Bureau of Research and Assessment. – 1982
Since the approval of the Basic Skills Improvement Policy in 1978, the Massachusetts Department of Education has been developing tests and alternative forms for the assessment of student achievement in five basic skills content areas: reading, writing, mathematics, listening, and speaking. Because of the lack of previous research on which to draw…
Descriptors: Achievement Tests, Basic Skills, Elementary Secondary Education, Policy Formation