Showing 526 to 540 of 3,711 results
Peer reviewed
Direct link
Benítez, Isabel; Padilla, José-Luis; Hidalgo Montesinos, María Dolores; Sireci, Stephen G. – Applied Measurement in Education, 2016
Analysis of differential item functioning (DIF) is often used to determine if cross-lingual assessments are equivalent across languages. However, evidence on the causes of cross-lingual DIF is still elusive. Expert appraisal is a qualitative method useful for obtaining detailed information about problematic elements in the different linguistic…
Descriptors: Test Bias, Mixed Methods Research, Questionnaires, International Assessment
Peer reviewed
Direct link
Jones, Ian; Wheadon, Chris; Humphries, Sara; Inglis, Matthew – British Educational Research Journal, 2016
Advanced-level (A-level) mathematics is a high-profile qualification taken by many school leavers in England, Wales, Northern Ireland and around the world as preparation for university study. Concern has been expressed in these countries that standards in A-level mathematics have declined over time, and that school leavers enter university or the…
Descriptors: Foreign Countries, College Mathematics, Secondary School Mathematics, Academic Standards
Peer reviewed
Direct link
Huang, Becky H.; Flores, Belinda Bustos – Language Assessment Quarterly, 2018
This test review aims to provide a critical evaluation of the English Language Proficiency Assessment for the 21st Century (ELPA 21), a large-scale, standards-based English language proficiency test currently used in eight participating member states. The ELPA 21 assessment system was developed by a consortium to help English language learner (ELL)…
Descriptors: English, Language Proficiency, Language Tests, English Language Learners
Peer reviewed
Direct link
Huggins, Anne Corinne – Educational and Psychological Measurement, 2014
Invariant relationships in the internal mechanisms of estimating achievement scores on educational tests serve as the basis for concluding that a particular test is fair with respect to statistical bias concerns. Equating invariance and differential item functioning are both concerned with invariant relationships yet are treated separately in the…
Descriptors: Test Bias, Test Items, Equated Scores, Achievement Tests
Peer reviewed
Direct link
Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
In this study, the regression procedure of the simultaneous item bias test (SIBTEST) method and the differential item functioning (DIF)-free-then-DIF strategy are applied simultaneously to the logistic regression (LR) method. These procedures are used to adjust for the effect of matching on the observed score rather than the true score, and to better control the Type I error…
Descriptors: Test Bias, Regression (Statistics), Test Items, True Scores
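The logistic-regression DIF approach referenced in this abstract can be sketched as follows. This is a minimal illustration only, not the SIBTEST-corrected procedure the authors propose: all data are simulated, the matching variable is a simulated ability proxy rather than a purified total score, and the hand-rolled gradient-descent fit stands in for a proper maximum-likelihood routine.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-descent logistic regression (no regularization)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            e = sigmoid(sum(wj * xj for wj, xj in zip(w, xi))) - yi
            for j in range(p):
                grad[j] += e * xi[j]
        w = [wj - lr * gj / n for wj, gj in zip(w, grad)]
    return w

def lr_dif(scores, group, correct):
    """Uniform-DIF check: coefficient on group membership after
    conditioning on the matching score."""
    # standardize the matching score for stable optimization
    m = sum(scores) / len(scores)
    sd = (sum((s - m) ** 2 for s in scores) / len(scores)) ** 0.5 or 1.0
    X = [[1.0, (s - m) / sd, float(g)] for s, g in zip(scores, group)]
    return fit_logistic(X, correct)[2]  # coefficient on group

# Simulate 400 examinees; the item is injected with uniform DIF
# (1 logit harder for the focal group at equal ability).
random.seed(7)
scores, group, resp = [], [], []
for i in range(400):
    g = i % 2
    theta = random.gauss(0, 1)
    scores.append(theta)  # stands in for the matching total score
    group.append(g)
    resp.append(1 if random.random() < sigmoid(theta - 1.0 * g) else 0)

b_group = lr_dif(scores, group, resp)
print(round(b_group, 2))  # clearly negative: the item is flagged for uniform DIF
```

In practice the group coefficient would be tested against zero (e.g., a likelihood-ratio or Wald test), and the abstract's DIF-free-then-DIF strategy would additionally purify the matching score by excluding items already flagged.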
Peer reviewed
Direct link
Lundqvist, Lars-Olov; Lindner, Helen – Journal of Autism and Developmental Disorders, 2017
The Autism-Spectrum Quotient (AQ) is among the most widely used scales assessing autistic traits in the general population. However, some aspects of the AQ are questionable. To test its scale properties, the AQ was translated into Swedish, and data were collected from 349 adults, 130 with autism spectrum disorder (ASD) and 219 without ASD, and…
Descriptors: Autism, Pervasive Developmental Disorders, Adults, Comparative Analysis
Peer reviewed
Direct link
Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate – Journal of Research on Educational Effectiveness, 2017
This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We look at the performance of three common covariate-types in observational studies where the outcome is a standardized reading or math test. They are: pretest measures, local geographic matching, and rich covariate sets with a strong…
Descriptors: Observation, Educational Research, Standardized Tests, Reading Tests
Peer reviewed
PDF on ERIC: Download full text
Domingue, Benjamin W.; Lang, David; Cuevas, Martha; Castellanos, Melisa; Lopera, Carolina; Mariño, Julián P.; Molina, Adriana; Shavelson, Richard J. – AERA Open, 2017
Technical schools are an integral part of the education system, and yet, little is known about student learning at such institutions. We consider whether assessments of student learning can be jointly administered to both university and technical school students. We examine whether differential test functioning may bias inferences regarding the…
Descriptors: Academic Achievement, Foreign Countries, Vocational Schools, Test Bias
Peer reviewed
Direct link
Sari, Halil Ibrahim; Huggins, Anne Corinne – Educational and Psychological Measurement, 2015
This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF…
Descriptors: Test Bias, Comparative Analysis, Statistical Analysis, College Entrance Examinations
Peer reviewed
Direct link
DeMars, Christine E.; Jurich, Daniel P. – Educational and Psychological Measurement, 2015
In educational testing, differential item functioning (DIF) statistics must be accurately estimated to ensure the appropriate items are flagged for inspection or removal. This study showed how using the Rasch model to estimate DIF may introduce considerable bias in the results when there are large group differences in ability (impact) and the data…
Descriptors: Test Bias, Guessing (Tests), Ability, Differences
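The phenomenon DeMars and Jurich describe — spurious DIF produced by large group ability differences (impact) combined with guessing — can be illustrated with a standard baseline statistic, the Mantel-Haenszel common odds ratio, rather than the Rasch-based estimates the article itself examines. Everything below is a simulated sketch: item parameters, group sizes, and the choice of a 15-item test are made up. The studied item has identical parameters in both groups (no true DIF), yet matching on an observed short-test rest score under a guessing (3PL) model leaves an odds ratio above 1.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p3pl(theta, a, b, c):
    """3PL item response probability with guessing floor c."""
    return c + (1.0 - c) * sigmoid(a * (theta - b))

random.seed(11)
n_items, n_per_group = 15, 2000
# identical item parameters for both groups: no true DIF anywhere
a_par = [1.0] * n_items
b_par = [-1.5 + 0.2 * j for j in range(n_items)]  # spread of difficulties
c_par = [0.2] * n_items  # nontrivial guessing, violating the Rasch model
studied = 7              # mid-difficulty studied item

# generate responses; the focal group (1) has lower mean ability (impact)
data = []  # (group, response vector)
for g, mu in ((0, 0.0), (1, -1.0)):
    for _ in range(n_per_group):
        theta = random.gauss(mu, 1.0)
        resp = [1 if random.random() < p3pl(theta, a_par[j], b_par[j], c_par[j])
                else 0 for j in range(n_items)]
        data.append((g, resp))

# Mantel-Haenszel: stratify on the rest score (total minus studied item)
tables = {}
for g, resp in data:
    k = sum(resp) - resp[studied]
    t = tables.setdefault(k, [0, 0, 0, 0])  # [A, B, C, D]
    if g == 0:
        t[0 if resp[studied] else 1] += 1   # reference: correct / incorrect
    else:
        t[2 if resp[studied] else 3] += 1   # focal: correct / incorrect

num = den = 0.0
for A, B, C, D in tables.values():
    N = A + B + C + D
    num += A * D / N
    den += B * C / N
alpha_mh = num / den
print(round(alpha_mh, 2))  # above 1: apparent DIF despite identical item parameters
```

The inflation arises because the short observed score is an unreliable matching variable: within each rest-score stratum the focal group still has lower latent ability, so the item looks harder for them. This is the same mechanism by which a mis-specified (Rasch) DIF model can flag clean items when impact and guessing are both present.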
Peer reviewed
Direct link
Tay, Louis; Huang, Qiming; Vermunt, Jeroen K. – Educational and Psychological Measurement, 2016
In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…
Descriptors: Item Response Theory, Test Bias, Simulation, College Entrance Examinations
Peer reviewed
PDF on ERIC: Download full text
Jin, Ying; Eason, Hershel – Journal of Educational Issues, 2016
The effects of mean ability difference (MAD) and short tests on the performance of various DIF methods have been studied extensively in previous simulation studies. Their effects, however, have not been studied under multilevel data structure. MAD was frequently observed in large-scale cross-country comparison studies where the primary sampling…
Descriptors: Test Bias, Simulation, Hierarchical Linear Modeling, Comparative Analysis
Peer reviewed
Direct link
Elder, Catherine – Language Testing, 2018
There is a strong international demand for official certification of French competence, and a range of tests is available to meet it. Among the recognized tests for this purpose is the "DELF" ("Diplôme d'études en langue française"). The "DELF" has been awarded the Association of Language Testers in…
Descriptors: French, Second Language Learning, Language Proficiency, Competence
Buckley, Jack, Ed.; Letukas, Lynn, Ed.; Wildavsky, Ben, Ed. – Johns Hopkins University Press, 2018
For more than seventy-five years, standardized tests have been considered a vital tool for gauging students' readiness for college. However, few people--including students, parents, teachers, and policy makers--understand how tests like the SAT or ACT are used in admissions decisions. Once touted as the best way to compare students from diverse…
Descriptors: Student Evaluation, Standardized Tests, College Entrance Examinations, Admission Criteria
Smarter Balanced Assessment Consortium, 2019
The Smarter Balanced Assessment Consortium (Smarter Balanced) strives to provide every student with a positive and productive assessment experience, generating results that are a fair and accurate estimate of each student's achievement. Further, Smarter Balanced is building on a framework of accessibility for all students, including English…
Descriptors: Student Evaluation, Evaluation Methods, English Language Learners, Students with Disabilities