Showing all 15 results
Peer reviewed
Akour, Mutasem; Sabah, Saed; Hammouri, Hind – Journal of Psychoeducational Assessment, 2015
The purpose of this study was to apply two types of Differential Item Functioning (DIF), net and global DIF, as well as the framework of Differential Step Functioning (DSF) to real testing data to investigate measurement invariance related to test language. Data from the Program for International Student Assessment (PISA)-2006 polytomously scored…
Descriptors: Test Bias, Science Tests, Test Items, Scoring
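To make the step-level idea concrete: differential step functioning examines each score transition of a polytomous item separately rather than the item as a whole. Below is a minimal Python sketch of an adjacent-category step contrast between two groups, with invented data; a real DSF analysis would first match examinees on ability, which this omits.

```python
# Illustrative sketch only: a crude differential step functioning (DSF)
# contrast for one polytomous item, not the net/global DIF estimators
# used in the study. All data below are invented.
import numpy as np

def step_log_odds(scores, k):
    """Log-odds of scoring exactly k versus k-1 (adjacent categories)."""
    n_k = np.sum(scores == k)
    n_km1 = np.sum(scores == k - 1)
    return np.log(n_k / n_km1)

rng = np.random.default_rng(0)
# Hypothetical item scores (0, 1, 2) for reference and focal groups.
ref = rng.choice([0, 1, 2], size=500, p=[0.2, 0.5, 0.3])
foc = rng.choice([0, 1, 2], size=500, p=[0.3, 0.5, 0.2])

for k in (1, 2):
    dsf = step_log_odds(ref, k) - step_log_odds(foc, k)
    print(f"step {k}: log-odds contrast = {dsf:+.3f}")
```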
Peer reviewed
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
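A quick simulation illustrates the concern (an assumed setup, not the study's design): if the same Rasch item effectively drifts harder by a small amount at each later position, proportion correct falls with position, and a calibration that ignores position absorbs the drift into the difficulty estimate.

```python
# Minimal position-effect simulation; the drift size is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n_examinees, drift_per_position = 2000, 0.02
theta = rng.normal(0, 1, n_examinees)
b_true = 0.0  # same item administered at different positions

for position in (1, 20, 40):
    b_effective = b_true + drift_per_position * (position - 1)
    p = 1 / (1 + np.exp(-(theta - b_effective)))
    responses = rng.binomial(1, p)
    print(f"position {position:2d}: proportion correct = {responses.mean():.3f}")
```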
Peer reviewed
Sinharay, Sandip; Dorans, Neil J.; Liang, Longjuan – Educational Measurement: Issues and Practice, 2011
Over the past few decades, those who take tests in the United States have exhibited increasing diversity with respect to native language. Standard psychometric procedures for ensuring item and test fairness that have existed for some time were developed when test-taking groups were predominantly native English speakers. A better understanding of…
Descriptors: Test Bias, Testing Programs, Psychometrics, Language Proficiency
Bilir, Mustafa Kuzey – ProQuest LLC, 2009
This study uses a new psychometric model, a mixture item response theory (IRT) MIMIC model, that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender or ethnicity) or across latent classes…
Descriptors: Test Items, Testing Programs, Markov Processes, Psychometrics
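The mixture MIMIC model itself is beyond a short sketch, but the manifest-group "side" it generalizes can be shown with a standard logistic-regression DIF check in the Swaminathan-Rogers style; everything below is simulated.

```python
# Not the mixture IRT-MIMIC model: a standard logistic-regression DIF
# check across a single manifest group, shown for contrast with the
# simultaneous manifest/latent approach described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
total = rng.normal(0, 1, n)      # matching criterion (e.g., rest score)
group = rng.binomial(1, 0.5, n)  # manifest group indicator
# Simulate uniform DIF: the item is 0.5 logits harder for group 1.
p = 1 / (1 + np.exp(-(total - 0.5 * group)))
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([total, group]))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.params)  # coefficient on group near -0.5 signals uniform DIF
```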
Peer reviewed
Puhan, Gautam; Moses, Timothy P.; Yu, Lei; Dorans, Neil J. – Journal of Educational Measurement, 2009
This study examined the extent to which log-linear smoothing could improve the accuracy of differential item functioning (DIF) estimates in small samples of examinees. Examinee responses from a certification test were analyzed using White examinees in the reference group and African American examinees in the focal group. Using a simulation…
Descriptors: Test Items, Reference Groups, Testing Programs, Raw Scores
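For readers unfamiliar with the technique: log-linear presmoothing fits a low-order polynomial Poisson model to the raw-score frequencies, stabilizing small-sample distributions while preserving their leading moments. A minimal sketch using statsmodels follows; the degree-3 polynomial and the simulated counts are assumptions, not the study's specification.

```python
# Log-linear presmoothing of a raw-score distribution (general technique,
# not the study's exact model). A degree-3 polynomial preserves the first
# three moments of the observed score distribution.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
scores = rng.binomial(30, 0.6, size=150)   # small sample, 0-30 test
counts = np.bincount(scores, minlength=31)
x = np.arange(31)
xs = (x - x.mean()) / x.std()              # standardize for stable fitting

X = sm.add_constant(np.column_stack([xs, xs**2, xs**3]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
smoothed = fit.fittedvalues                # smoothed score frequencies
print(np.round(smoothed, 2))
```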
Peer reviewed
Wyse, Adam E.; Mapuranga, Raymond – International Journal of Testing, 2009
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when data fits the Rasch model. Through simulations and an international…
Descriptors: Test Bias, Evaluation Methods, Test Items, Educational Assessment
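Under the Rasch model the item information function has the closed form I(theta) = P(theta)(1 - P(theta)). The sketch below compares two calibrations of an item on that basis; the overlap ratio at the end is an illustrative stand-in, not the paper's ISI formula.

```python
# Rasch item information and a simple overlap comparison between two
# calibrations of the same item. The index is illustrative, not the ISI.
import numpy as np

def rasch_info(theta, b):
    p = 1 / (1 + np.exp(-(theta - b)))
    return p * (1 - p)

theta = np.linspace(-4, 4, 401)
dtheta = theta[1] - theta[0]
info_ref = rasch_info(theta, b=0.0)   # reference-group calibration
info_foc = rasch_info(theta, b=0.4)   # focal-group calibration

overlap = np.minimum(info_ref, info_foc).sum() * dtheta
union = np.maximum(info_ref, info_foc).sum() * dtheta
print(f"information overlap index: {overlap / union:.3f}")  # 1.0 = identical
```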
Peer reviewed
Hills, John R. – Educational Measurement: Issues and Practice, 1989
Test bias detection methods based on item response theory (IRT) are reviewed. Five such methods are commonly used: (1) equality of item parameters; (2) area between item characteristic curves; (3) sums of squares; (4) pseudo-IRT; and (5) one-parameter-IRT. A table compares these and six newer or less tested methods. (SLD)
Descriptors: Item Analysis, Test Bias, Test Items, Testing Programs
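Method (2) in that list is easy to sketch numerically: calibrate the item separately in each group and integrate the absolute difference between the two item characteristic curves. The parameter values below are invented, and closed-form area measures from the literature (e.g., Raju's) would replace the grid sum in practice.

```python
# Unsigned area between two groups' item characteristic curves for a
# 2PL item, computed on a theta grid. Parameter values are invented.
import numpy as np

def icc_2pl(theta, a, b):
    return 1 / (1 + np.exp(-a * (theta - b)))

theta = np.linspace(-4, 4, 401)
dtheta = theta[1] - theta[0]
ref = icc_2pl(theta, a=1.2, b=0.0)   # reference-group calibration
foc = icc_2pl(theta, a=1.2, b=0.3)   # focal-group calibration

area = np.abs(ref - foc).sum() * dtheta
print(f"unsigned area between ICCs: {area:.3f}")
```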
Peer reviewed
Kamata, Akihito; Vaughn, Brandon K. – Learning Disabilities: A Contemporary Journal, 2004
This article provides a brief primer overview of Differential Item Functioning (DIF) analysis. DIF analysis investigates a differential characteristic of a test item between subpopulations of examinees and is useful in detecting possibly biased items toward a particular subpopulation. As demonstration, a dataset from a 40-item math test in a…
Descriptors: Test Bias, Testing Accommodations, Test Items, Testing Programs
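A typical primer of this kind centers on the Mantel-Haenszel statistic, which pools the reference/focal odds ratio of answering an item correctly across matched score strata. A self-contained sketch with invented counts (not the article's dataset):

```python
# Mantel-Haenszel DIF statistic for one dichotomous item. Each stratum
# groups examinees with similar total scores; counts are invented.
import numpy as np

# Per stratum: [right_ref, wrong_ref, right_foc, wrong_foc]
strata = np.array([
    [30, 20, 22, 28],
    [45, 15, 35, 25],
    [55, 10, 48, 17],
])
A, B, C, D = strata.T
N = strata.sum(axis=1)

# Common odds ratio pooled across strata, then the ETS delta scale.
alpha_mh = np.sum(A * D / N) / np.sum(B * C / N)
delta_mh = -2.35 * np.log(alpha_mh)
print(f"alpha_MH = {alpha_mh:.3f}, MH D-DIF = {delta_mh:+.3f}")
```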
Peer reviewed
van der Linden, Wim J.; Ariel, Adelaide; Veldkamp, Bernard P. – Journal of Educational and Behavioral Statistics, 2006
Test-item writing efforts typically result in item pools with an undesirable correlational structure between the content attributes of the items and their statistical information. If such pools are used in computerized adaptive testing (CAT), the algorithm may be forced to select items with less than optimal information that violate the content…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Item Banks
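The tension the abstract describes is easy to reproduce in miniature: if the most informative items near an examinee's ability all share one content code, an unconstrained maximum-information rule keeps choosing them, and imposing the content constraint forces a less informative pick. A toy sketch with an invented pool:

```python
# Greedy maximum-information CAT selection, with and without a content
# constraint. The pool and content codes are invented for illustration.
import numpy as np

def rasch_info(theta, b):
    p = 1 / (1 + np.exp(-(theta - b)))
    return p * (1 - p)

# Hypothetical pool: the most informative items cluster in one content area.
difficulty = np.array([0.0, 0.1, -0.1, 1.5, -1.5, 2.0])
content = np.array(["algebra", "algebra", "algebra",
                    "geometry", "geometry", "geometry"])
theta_hat = 0.0
info = rasch_info(theta_hat, difficulty)

best_overall = np.argmax(info)
geometry = np.where(content == "geometry")[0]
best_geometry = geometry[np.argmax(info[geometry])]
print(f"unconstrained pick: item {best_overall} (info {info[best_overall]:.3f})")
print(f"geometry-constrained pick: item {best_geometry} (info {info[best_geometry]:.3f})")
```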
Gelin, Michaela N.; Zumbo, Bruno D. – 2003
This study introduced and demonstrated a new methodology for item and test bias studies: moderated differential item functioning (DIF). This technique expands the DIF methodology to incorporate contextual and sociological variables as moderating effects of the DIF. The study explored differential domain functioning (DDF), so that the focus of…
Descriptors: Community Influence, Context Effect, Ecological Factors, Educational Sociology
Crovo, Mary L.; Phillips, Gary W. – 1983
This paper presents the dual approach to item bias detection employed in the Maryland Functional Testing Program (MFTP). Using instructional objectives mandated by the Maryland State Board of Education, the MFTP develops two levels of the Maryland Functional Reading Test (MFRT) and the Maryland Functional Mathematics Tests (MFMT). These…
Descriptors: Advisory Committees, Criterion Referenced Tests, Culture Fair Tests, Item Analysis
Hill, Richard K. – 1979
Four problems faced by the staff of the California Assessment Program (CAP) were solved by applying Rasch scaling techniques: (1) item cultural bias in the Entry Level Test (ELT) given to all first grade pupils; (2) nonlinear regression analysis of the third grade Reading Test scores; (3) comparison of school growth from grades two to three, using…
Descriptors: Black Students, Cultural Differences, Data Analysis, Difficulty Level
Shafer, Robert E. – 1986
In Arizona, beginning teachers applying for certification must take the Arizona Teacher Proficiency Examination which tests professional knowledge, reading, mathematics, and grammar. The high failure rate on the grammar test has caused a great deal of concern; 40 percent of the examinees, and a higher percentage of minority groups, failed it in…
Descriptors: Elementary School Teachers, Elementary Secondary Education, Grammar, Higher Education
Ohio State Dept. of Education, Columbus. – 1995
The Ohio Sixth-grade Proficiency Tests are designed to measure a sixth-grade level of literacy and basic competence. Beginning in March 1996, Ohio sixth graders will take proficiency tests in writing, reading, mathematics, citizenship, and science. Both teachers and administrators have been involved in the test development process, establishing…
Descriptors: Academic Achievement, Achievement Tests, Citizenship, Competence
Noble, Christopher S.; And Others – 1986
The relationship between item omission and item position on criterion-referenced tests in the Texas state assessment program is examined. Item statistics from the Texas Educational Assessment of Minimum Skills (TEAMS) and Texas Assessment of Basic Skills (TABS) mathematics and reading tests from 1983 through 1985 are examined for three ethnic…
Descriptors: Basic Skills, Blacks, Criterion Referenced Tests, Ethnic Groups