Showing all 7 results
Peer reviewed
Quesen, Sarah; Lane, Suzanne – Applied Measurement in Education, 2019
This study examined the effect of similar vs. dissimilar proficiency distributions on uniform DIF detection on a statewide eighth grade mathematics assessment. Results from the similar- and dissimilar-ability reference groups with an SWD focal group were compared for four models: logistic regression, hierarchical generalized linear model (HGLM),…
Descriptors: Test Items, Mathematics Tests, Grade 8, Item Response Theory
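For context, a minimal sketch of the logistic-regression approach to uniform DIF detection that this study compares against HGLM and other models, assuming Python with pandas/statsmodels; the column names (item_correct, total_score, group) are hypothetical:

```python
# Uniform DIF screen for one item via logistic regression:
# compare a model with the matching variable only against one
# that adds group membership (1 df likelihood-ratio test).
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

def uniform_dif_test(df: pd.DataFrame) -> dict:
    """df needs columns: item_correct (0/1), total_score, group (0 = reference, 1 = focal)."""
    base = smf.logit("item_correct ~ total_score", data=df).fit(disp=0)
    dif = smf.logit("item_correct ~ total_score + group", data=df).fit(disp=0)
    lr_stat = 2 * (dif.llf - base.llf)      # likelihood-ratio chi-square, 1 df
    return {
        "group_coef": dif.params["group"],  # sign indicates which group the item favors
        "lr_stat": lr_stat,
        "p_value": chi2.sf(lr_stat, df=1),
    }
```

Under the design described in the abstract, a screen like this would be run once with the similar-ability reference sample and once with the dissimilar-ability sample to see whether the items flagged for DIF agree.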
Peer reviewed
Sulis, Isabella; Toland, Michael D. – Journal of Early Adolescence, 2017
Item response theory (IRT) models are the main psychometric approach for the development, evaluation, and refinement of multi-item instruments and scaling of latent traits, whereas multilevel models are the primary statistical method when considering the dependence between person responses when primary units (e.g., students) are nested within…
Descriptors: Hierarchical Linear Modeling, Item Response Theory, Psychometrics, Evaluation Methods
Peer reviewed
Walker, A. Adrienne; Engelhard, George, Jr. – Applied Measurement in Education, 2015
The idea that test scores may not be valid representations of what students know, can do, and should learn next is well known. Person fit provides an important aspect of validity evidence. Person fit analyses at the individual student level are not typically conducted and person fit information is not communicated to educational stakeholders. In…
Descriptors: Test Validity, Goodness of Fit, Educational Assessment, Hierarchical Linear Modeling
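As an illustration of the kind of person-fit evidence discussed in this entry, the following sketch computes the standardized log-likelihood statistic l_z under a 2PL IRT model, a common person-fit index; the item parameters and responses are simulated and purely hypothetical:

```python
# Standardized log-likelihood person-fit statistic (l_z) under a 2PL IRT model.
# Large negative values flag response patterns that misfit the model.
import numpy as np

def lz_statistic(responses, a, b, theta):
    """responses: 0/1 vector; a, b: 2PL discrimination/difficulty vectors; theta: ability estimate."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))    # P(correct) per item
    loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (loglik - expected) / np.sqrt(variance)

# Hypothetical example: simulated item parameters and a random response pattern.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 1.6, size=20)
b = rng.normal(0.0, 1.0, size=20)
responses = rng.integers(0, 2, size=20)
print(lz_statistic(responses, a, b, theta=0.5))
```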
Quesen, Sarah – ProQuest LLC, 2016
When studying differential item functioning (DIF) with students with disabilities (SWD), focal groups typically suffer from small sample sizes, whereas the reference group population is usually large. This makes it possible for a researcher to select a sample from the reference population to be similar to the focal group on the ability scale. Doing…
Descriptors: Test Items, Academic Accommodations (Disabilities), Testing Accommodations, Disabilities
Peer reviewed
Lee, Jaekyung; Liu, Xiaoyan; Amo, Laura Casey; Wang, Weichun Leilani – Educational Policy, 2014
Drawing on national and state assessment datasets in reading and math, this study tested "external" versus "internal" standards-based education models. The goal was to understand whether and how student performance standards work in multilayered school systems under the No Child Left Behind Act of 2001 (NCLB). Under the…
Descriptors: State Standards, Academic Standards, Student Evaluation, Academic Achievement
Peer reviewed
Yen, Wendy M.; Lall, Venessa F.; Monfils, Lora – ETS Research Report Series, 2012
Alternatives to vertical scales are compared for measuring longitudinal academic growth and for producing school-level growth measures. The alternatives examined were empirical cross-grade regression, ordinary least squares and logistic regression, and multilevel models. The student data used for the comparisons were Arabic Grades 4 to 10 in…
Descriptors: Foreign Countries, Scaling, Item Response Theory, Test Interpretation
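As a rough sketch of one vertical-scale alternative named in the abstract, empirical cross-grade regression can be read as a residual-gain approach: regress current-grade scores on prior-grade scores and average the residuals by school. The column names below are hypothetical:

```python
# Residual-gain growth via cross-grade regression: students' current scores are
# regressed on their prior-grade scores; the residual serves as the growth measure,
# and school means of residuals give a school-level growth indicator.
import pandas as pd
import statsmodels.formula.api as smf

def school_growth(df: pd.DataFrame) -> pd.Series:
    """df needs columns: score_now, score_prior, school_id (hypothetical names)."""
    fit = smf.ols("score_now ~ score_prior", data=df).fit()
    df = df.assign(growth_residual=fit.resid)
    return df.groupby("school_id")["growth_residual"].mean()
```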
Peer reviewed
Vaughn, Brandon K. – Journal on School Educational Technology, 2008
This study considers the importance of contextual effects on the quality of assessments of item bias and differential item functioning (DIF) in measurement. Often, in educational studies, students are clustered within teachers or schools, and this clustering can affect psychometric properties yet is largely ignored by traditional item analyses. A…
Descriptors: Test Bias, Educational Assessment, Educational Quality, Context Effect