Showing 706 to 720 of 3,711 results
Peer reviewed
Abedalaziz, Nabeel; Leng, Chin Hai; Alahmadi, Ahlam – Malaysian Online Journal of Educational Sciences, 2014
The purpose of the study was to examine gender differences in performance on a multiple-choice mathematical ability test, administered within the context of a high school graduation test designed to match the eleventh-grade curriculum. The transformed item difficulty (TID) method was used to detect gender-related differential item functioning (DIF). A random sample of 1400 eleventh…
Descriptors: Test Bias, Test Items, Difficulty Level, Gender Differences
Peer reviewed
Atalay Kabasakal, Kübra; Arsan, Nihan; Gök, Bilge; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2014
This simulation study compared the performances (Type I error and power) of Mantel-Haenszel (MH), SIBTEST, and item response theory-likelihood ratio (IRT-LR) methods under certain conditions. Manipulated factors were sample size, ability differences between groups, test length, the percentage of differential item functioning (DIF), and underlying…
Descriptors: Comparative Analysis, Item Response Theory, Statistical Analysis, Test Bias
Peer reviewed
Bennink, Margot; Croon, Marcel A.; Keuning, Jos; Vermunt, Jeroen K. – Journal of Educational and Behavioral Statistics, 2014
In educational measurement, responses of students on items are used not only to measure the ability of students, but also to evaluate and compare the performance of schools. Analysis should ideally account for the multilevel structure of the data, and school-level processes not related to ability, such as working climate and administration…
Descriptors: Academic Ability, Educational Assessment, Educational Testing, Test Bias
Peer reviewed
Kennet-Cohen, Tamar; Turvall, Elliot; Oren, Carmel – Assessment in Education: Principles, Policy & Practice, 2014
This study examined selection bias in Israeli university admissions with respect to test language and gender, using three approaches for detecting such bias: Cleary's model of differential prediction, boundary conditions for differential prediction, and the difference between d's (the Constant Ratio Model). The university admissions…
Descriptors: Foreign Countries, College Admission, Test Bias, Gender Bias
Peer reviewed
Benítez, Isabel; Padilla, José-Luis – Journal of Mixed Methods Research, 2014
Differential item functioning (DIF) can undermine the validity of cross-lingual comparisons. While many efficient statistics for detecting DIF are available, few general findings explain DIF results. The objective of the article was to study DIF sources using a mixed methods design. The design involves a quantitative phase…
Descriptors: Foreign Countries, Mixed Methods Research, Test Bias, Cross Cultural Studies
Raddatz, Mikaela M.; Royal, Kenneth D.; Pennington, Jessica – Online Submission, 2012
The purpose of this study is to determine whether the construct of a medical subspecialty examination, as defined by the hierarchy of item difficulties, is stable across physicians who completed a fellowship and recertifiers, as compared with non-fellows. Three group comparisons are made: 1) Practice pathway board candidates compared to members of all…
Descriptors: Evidence, Fellowships, Board Candidates, Test Bias
Peer reviewed
International Journal of Testing, 2019
These guidelines describe considerations relevant to the assessment of test takers in or across countries or regions that are linguistically or culturally diverse. The guidelines were developed by a committee of experts to help inform test developers, psychometricians, test users, and test administrators about fairness issues in support of the…
Descriptors: Test Bias, Student Diversity, Cultural Differences, Language Usage
Partnership for Assessment of Readiness for College and Careers, 2019
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium designed to create next-generation assessments that, compared to traditional K-12 assessments, more accurately measure student progress toward college and career readiness. The PARCC assessments are aligned to the Common Core State Standards…
Descriptors: College Readiness, Career Readiness, Common Core State Standards, Language Arts
Peer reviewed
Regional Educational Laboratory Midwest, 2019
These are the appendixes for the report "Children's Knowledge and Skills at Kindergarten Entry in Illinois: Results from the First Statewide Administration of the Kindergarten Individual Development Survey." At least half of states administer or are developing kindergarten entry assessments. In fall 2017 the Illinois State Board of…
Descriptors: Kindergarten, School Readiness, Public Schools, Test Validity
Laurie, Robert; Sloat, Elizabeth – Canadian Journal of Education, 2016
This research investigates key psychometric properties of the French Early Years Evaluation-Teacher Assessment measure designed to systematically assess kindergarten children across five social and academic developmental domains: awareness of self and environment, social skills and behaviour, cognitive abilities, language and communication, and…
Descriptors: Psychometrics, Teacher Evaluation, French, Foreign Countries
Peer reviewed
Donovan, Courtney; Green, Kathy E.; Seidel, Kent – Leadership and Research in Education, 2017
Core competencies essential for effective teaching were identified via a literature review and a review of standards for teacher education, and vetted by state groups with interests in teacher education. Survey items based on these competencies asked teacher candidates, graduates, and teacher education program faculty how well the program prepared…
Descriptors: Teacher Effectiveness, Item Response Theory, Item Analysis, Test Items
Peer reviewed
Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent – International Journal of Testing, 2017
This study uses the feature sets employed by two automated scoring engines to determine whether a "linguistic profile" could be formulated that would help identify items likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1200 students for whom demographic information…
Descriptors: Computer Assisted Testing, Scoring, Hypothesis Testing, Essays
Peer reviewed
Dorans, Neil J. – ETS Research Report Series, 2013
Quantitative fairness procedures have been developed and modified by ETS staff over the past several decades. ETS has been a leader in fairness assessment, and its efforts are reviewed in this report. The first section deals with differential prediction and differential validity procedures that examine whether test scores predict a criterion, such…
Descriptors: Test Bias, Statistical Analysis, Test Validity, Scores
Peer reviewed
French, Brian F.; Gotch, Chad M. – Journal of Psychoeducational Assessment, 2013
The Brigance Comprehensive Inventory of Basic Skills-II (CIBS-II) is a diagnostic battery intended for children in grades 1 through 6. The aim of this study was to test for item invariance, or differential item functioning (DIF), of the CIBS-II across sex in the standardization sample through the use of item response theory DIF detection…
Descriptors: Gender Differences, Elementary School Students, Test Bias, Item Response Theory
Peer reviewed
Woods, Carol M.; Cai, Li; Wang, Mian – Educational and Psychological Measurement, 2013
Differential item functioning (DIF) occurs when the probability of responding in a particular category to an item differs for members of different groups who are matched on the construct being measured. The identification of DIF is important for valid measurement. This research evaluates an improved version of Lord's X² Wald test for…
Descriptors: Test Bias, Item Response Theory, Computation, Comparative Analysis