Showing 1 to 15 of 25 results
Peer reviewed
Beheshti, Shima; Safa, Mohammad Ahmadi – Iranian Journal of Language Teaching Research, 2023
The indefinite nature of test fairness and the differing interpretations and definitions of the concept have stirred considerable controversy over the years, necessitating its reconceptualization. On this basis, this study aimed to explore the empirical validity of Kunnan's (2008) Test Fairness Framework (TFF) and revisit the established…
Descriptors: Test Bias, Equal Education, Grounded Theory, Test Construction
Peer reviewed
Tim Jacobbe; Bob delMas; Brad Hartlaub; Jeff Haberstroh; Catherine Case; Steven Foti; Douglas Whitaker – Numeracy, 2023
The development of assessments as part of the funded LOCUS project is described. The assessments measure students' conceptual understanding of statistics as outlined in the GAISE PreK-12 Framework. Results are reported from a large-scale administration to 3,430 students in grades 6 through 12 in the United States. Items were designed to assess…
Descriptors: Statistics Education, Common Core State Standards, Student Evaluation, Elementary School Students
Peer reviewed
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
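As background for the DIF terminology in the entry above, a common formulation (not part of the abstract, and in illustrative notation) expresses group-dependent item functioning under a two-parameter logistic (2PL) model:

P(X_i = 1 \mid \theta, g) = \frac{1}{1 + \exp\{-a_{ig}(\theta - b_{ig})\}}

DIF is present when the discrimination a_{ig} or difficulty b_{ig} differs across groups g once the latent trait \theta has been placed on a common scale; the abstract's concern is that such apparent group differences can also arise when the assumed IRT model itself is misspecified.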
Peer reviewed
El Masri, Yasmine H.; Andrich, David – Applied Measurement in Education, 2020
In large-scale educational assessments, it is generally required that tests are composed of items that function invariantly across the groups to be compared. Despite efforts to ensure invariance in the item construction phase, for a range of reasons (including the security of items) it is often necessary to account for differential item…
Descriptors: Models, Goodness of Fit, Test Validity, Achievement Tests
Peer reviewed
Baghaei, Purya; Kubinger, Klaus D. – Practical Assessment, Research & Evaluation, 2015
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Maier, 2014) functions to estimate the model and interpret its parameters. The…
Descriptors: Item Response Theory, Models, Test Validity, Hypothesis Testing
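For readers unfamiliar with the model named in this entry, the linear logistic test model constrains each Rasch item difficulty to be a weighted sum of basic (for example, cognitive-operation) parameters; in a standard textbook formulation (not quoted from the article, with illustrative symbols):

\beta_i = \sum_{j=1}^{p} w_{ij}\,\eta_j + c

where w_{ij} is the known weight of basic parameter \eta_j on item i and c is a normalization constant, while the Rasch response probability P(X_{vi} = 1 \mid \theta_v) = \exp(\theta_v - \beta_i) / \{1 + \exp(\theta_v - \beta_i)\} is otherwise unchanged.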
Hou, Likun – ProQuest LLC, 2013
Analyzing examinees' responses using cognitive diagnostic models (CDMs) has the advantage of providing richer diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this dissertation, a model-based DIF detection method, the Wald-CDM procedure, is…
Descriptors: Test Bias, Models, Cognitive Processes, Diagnostic Tests
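The Wald-based DIF procedure mentioned in this abstract tests, item by item, whether the CDM item parameters are equal across reference (R) and focal (F) groups. One common form of such a Wald statistic (a generic sketch in illustrative notation; the specific parameter vector depends on the CDM used) is:

W_i = (\hat{\beta}_{iR} - \hat{\beta}_{iF})^{\top}\left[\widehat{\mathrm{Var}}(\hat{\beta}_{iR}) + \widehat{\mathrm{Var}}(\hat{\beta}_{iF})\right]^{-1}(\hat{\beta}_{iR} - \hat{\beta}_{iF})

which is referred to a chi-square distribution with degrees of freedom equal to the number of item parameters under the null hypothesis of no DIF for item i.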
Peer reviewed
Bornstein, Robert F. – Psychological Assessment, 2011
Although definitions of validity have evolved considerably since L. J. Cronbach and P. E. Meehl's classic (1955) review, contemporary validity research continues to emphasize correlational analyses assessing predictor-criterion relationships, with most outcome criteria being self-reports. The present article describes an alternative way of…
Descriptors: Test Validity, Scores, Models, Psychological Evaluation
Peer reviewed
Kane, Michael – Language Testing, 2010
This paper presents the author's critique of Xiaoming Xi's article, "How do we go about investigating test fairness?," which lays out a broad framework for studying fairness as comparable validity across groups within the population of interest. Xi proposes to develop a fairness argument that would identify and evaluate potential fairness-based…
Descriptors: Test Bias, Test Validity, Language Tests, Testing
Goldhaber, Dan; Chaplin, Duncan – Center for Education Data & Research, 2012
In a provocative and influential paper, Jesse Rothstein (2010) finds that standard value-added models (VAMs) suggest implausible future teacher effects on past student achievement, a finding that obviously cannot be viewed as causal. This is the basis of a falsification test (the Rothstein falsification test) that appears to indicate bias in VAM…
Descriptors: School Effectiveness, Teacher Effectiveness, Achievement Gains, Statistical Bias
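For orientation, the Rothstein falsification test referred to in this entry can be sketched schematically (this form is illustrative, not taken from the paper): prior achievement is regressed on indicators for students' future teacher assignments, and jointly significant future-teacher coefficients signal nonrandom sorting of students to teachers that can bias VAM estimates.

A_{i,t-1} = \mathbf{x}_i^{\top}\gamma + \sum_{k} \tau_k D_{ik,t} + \varepsilon_i, \qquad H_0: \tau_k = 0 \text{ for all } k

where D_{ik,t} indicates that student i is taught by teacher k in year t and \mathbf{x}_i collects control variables; because future teachers cannot cause past achievement, rejecting H_0 is read as evidence of bias rather than of a genuine effect.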
Peoples, Shelagh – ProQuest LLC, 2012
The purpose of this study was to determine which of three competing models would provide reliable, interpretable, and responsive measures of elementary students' understanding of the nature of science (NOS). The Nature of Science Instrument-Elementary (NOSI-E), a 28-item Rasch-based instrument, was used to assess students' NOS…
Descriptors: Scientific Principles, Science Tests, Elementary School Students, Item Response Theory
Tristan, Agustin; Vidal, Rafael – Online Submission, 2007
Wright and Stone proposed three features to assess the quality of the distribution of item difficulties in a test on the so-called "most probable response map": line, stack, and gap. Once a line is accepted as a design model for a test, gaps and stacks are practically eliminated, producing evidence of the "scale…
Descriptors: Test Validity, Models, Difficulty Level, Test Items
Peer reviewed
Young, John W. – Educational Assessment, 2009
In this article, I specify a conceptual framework for test validity research on content assessments taken by English language learners (ELLs) in U.S. schools in grades K-12. This framework is modeled after one previously delineated by Willingham et al. (1988), which was developed to guide research on students with disabilities. In this framework…
Descriptors: Test Validity, Evaluation Research, Achievement Tests, Elementary Secondary Education
Peer reviewed
Kush, Joseph C.; Watkins, Marley W. – Canadian Journal of School Psychology, 2007
Test bias research with Native American participants is uncommon, although individual tests of intelligence are often used with Native American students to determine eligibility for special education services. Only two studies with minimally adequate sample sizes have addressed the structural validity of major tests of intelligence in Native…
Descriptors: Test Bias, American Indians, School Psychologists, American Indian Education
Peer reviewed
Lawshe, C. H. – Personnel Psychology, 1983
Discusses guidelines for fairness in employee selection required by the Federal Uniform Guidelines on Employee Selection Procedures, and presents a simplified procedure for the evaluation of fairness which is more easily explained than the classical Cleary model. (JAC)
Descriptors: Equal Opportunities (Jobs), Evaluation Methods, Guidelines, Models
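For context, the classical Cleary model referenced in this abstract defines fairness in regression terms. In a common formulation (standard in the selection-fairness literature, not quoted from the article), the criterion Y is regressed on the test score X, a group indicator G, and their interaction:

Y = \beta_0 + \beta_1 X + \beta_2 G + \beta_3 (X \times G) + \varepsilon

and the test is considered unbiased in Cleary's sense when \beta_2 = \beta_3 = 0, that is, when a single regression line neither over- nor under-predicts the criterion for either group.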
Peer reviewed
Harrington, Thomas F.; O'Shea, Arthur J. – Journal of Counseling Psychology, 1980
The confirmation of the Holland hexagonal model in a Spanish-speaking sample suggests that results of earlier factor analytic studies of vocational interests are applicable to Spanish-speaking young persons. Counselors must provide their Spanish-speaking clients with intensive information resources. (Author)
Descriptors: Career Development, Cultural Influences, Models, Occupational Tests