Showing all 13 results
Peer reviewed
Quesen, Sarah; Lane, Suzanne – Applied Measurement in Education, 2019
This study examined the effect of similar vs. dissimilar proficiency distributions on uniform DIF detection on a statewide eighth grade mathematics assessment. Results from the similar- and dissimilar-ability reference groups with a students-with-disabilities (SWD) focal group were compared for four models: logistic regression, hierarchical generalized linear model (HGLM),…
Descriptors: Test Items, Mathematics Tests, Grade 8, Item Response Theory
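For readers who want to see what the logistic-regression flavor of uniform DIF screening looks like in practice, a minimal sketch follows. The data frame `resp`, the `group` indicator, and the 0.05 flagging rule are illustrative assumptions, not details taken from Quesen and Lane's study.

```python
import pandas as pd
import statsmodels.api as sm

def uniform_dif_flags(resp: pd.DataFrame, group: pd.Series, alpha: float = 0.05) -> dict:
    """Flag items whose group term is significant after conditioning on total score.

    resp  : persons x items data frame of 0/1 item scores
    group : 0 = reference group, 1 = focal group (e.g., students with disabilities)
    """
    total = resp.sum(axis=1)  # matching criterion: observed total score
    X = sm.add_constant(pd.DataFrame({"total": total, "group": group}))
    flags = {}
    for item in resp.columns:
        fit = sm.Logit(resp[item], X).fit(disp=0)
        # Uniform DIF: the group term shifts the item's difficulty at every score level.
        flags[item] = fit.pvalues["group"] < alpha
    return flags
```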
Peer reviewed
Sulis, Isabella; Toland, Michael D. – Journal of Early Adolescence, 2017
Item response theory (IRT) models are the main psychometric approach for the development, evaluation, and refinement of multi-item instruments and scaling of latent traits, whereas multilevel models are the primary statistical method when considering the dependence between person responses when primary units (e.g., students) are nested within…
Descriptors: Hierarchical Linear Modeling, Item Response Theory, Psychometrics, Evaluation Methods
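One common way to combine the two frameworks the authors discuss is to write a Rasch-type IRT model as a multilevel logistic model with item fixed effects and random intercepts for students and schools. The sketch below uses statsmodels' Bayesian mixed GLM as one possible way to express such a model; the long-format data frame and its column names are assumptions, not anything from the article.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

def multilevel_rasch(long_df: pd.DataFrame):
    """Rasch-type IRT model written as a multilevel logistic model.

    long_df is assumed to hold one row per student-item response with columns
    correct (0/1), item, student, and school (illustrative names).
    """
    model = BinomialBayesMixedGLM.from_formula(
        "correct ~ 0 + C(item)",                 # item parameters as fixed effects
        {"student": "0 + C(student)",            # person ability: random intercepts
         "school": "0 + C(school)"},             # school-level variance component
        long_df,
    )
    return model.fit_vb()                        # variational Bayes estimation
```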
Peer reviewed
Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas – Educational and Psychological Measurement, 2015
The selection of an appropriate booklet design is an important element of large-scale assessments of student achievement. Two design properties that are typically optimized are the "balance" with respect to the positions at which the items are presented and with respect to the mutual occurrence of pairs of items in the same booklet. The purpose…
Descriptors: Measurement, Computation, Test Format, Test Items
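As a concrete illustration of the two balance properties described in this abstract, the short sketch below counts how often each item appears at each booklet position and how often each pair of items shares a booklet; the toy design is an assumption made purely for demonstration.

```python
from collections import Counter
from itertools import combinations

# Toy booklet design: each booklet is an ordered list of item IDs (illustrative only).
booklets = [
    ["i1", "i2", "i3"],
    ["i2", "i3", "i4"],
    ["i3", "i4", "i1"],
    ["i4", "i1", "i2"],
]

position_counts = Counter()  # (item, position) -> how often the item sits at that position
pair_counts = Counter()      # (item_a, item_b) -> how often the pair shares a booklet

for booklet in booklets:
    for pos, item in enumerate(booklet):
        position_counts[(item, pos)] += 1
    for a, b in combinations(sorted(booklet), 2):
        pair_counts[(a, b)] += 1

items = sorted({item for b in booklets for item in b})
positions = range(max(len(b) for b in booklets))
pos_spread = [position_counts[(it, p)] for it in items for p in positions]
pair_spread = [pair_counts[pair] for pair in combinations(items, 2)]

# A well-balanced design keeps both sets of counts as flat as possible.
print("position counts range:", max(pos_spread) - min(pos_spread))
print("pairwise counts range:", max(pair_spread) - min(pair_spread))
```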
Cho, Sun-Joo; Bottge, Brian A. – Grantee Submission, 2015
In a pretest-posttest cluster-randomized trial, one of the methods commonly used to detect an intervention effect involves controlling for pre-test scores and other related covariates while estimating the intervention effect at post-test. In many applications in education, the total post-test and pre-test scores, which ignore measurement error in the…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Pretests Posttests, Scores
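The conventional approach this abstract refers to, estimating the intervention effect at post-test while controlling for pre-test scores and adding a random effect for the randomized cluster, might be sketched as below. The data frame and its column names are assumptions; note that this observed-score version is exactly the model the paper critiques for ignoring measurement error in the total scores.

```python
import pandas as pd
import statsmodels.formula.api as smf

def intervention_effect(df: pd.DataFrame):
    """Observed-score model: post-test on pre-test and treatment, random classroom intercept.

    df is assumed to hold one row per student with columns
    posttest, pretest, treatment (0/1), and classroom (illustrative names).
    """
    fit = smf.mixedlm(
        "posttest ~ pretest + treatment",  # coefficient on treatment = intervention effect
        data=df,
        groups="classroom",                # random intercept for the randomized cluster
    ).fit()
    return fit.params["treatment"], fit.bse["treatment"]
```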
Peer reviewed
Bennink, Margot; Croon, Marcel A.; Keuning, Jos; Vermunt, Jeroen K. – Journal of Educational and Behavioral Statistics, 2014
In educational measurement, responses of students on items are used not only to measure the ability of students, but also to evaluate and compare the performance of schools. Analysis should ideally account for the multilevel structure of the data, and school-level processes not related to ability, such as working climate and administration…
Descriptors: Academic Ability, Educational Assessment, Educational Testing, Test Bias
Chung, Gregory K. W. K.; Choi, Kilchan; Baker, Eva L.; Cai, Li – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2014
A large-scale randomized controlled trial tested the effects of researcher-developed learning games on a transfer measure of fractions knowledge. The measure contained items similar to standardized assessments. Thirty treatment and 29 control classrooms (~1500 students, 9 districts, 26 schools) participated in the study. Students in treatment…
Descriptors: Video Games, Educational Games, Mathematics Instruction, Mathematics
Peer reviewed
Cui, Ying; Mousavi, Amin – International Journal of Testing, 2015
The current study applied the person-fit statistic, l[subscript z], to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l[subscript z], were removed. The…
Descriptors: Measurement, Achievement Tests, Comparative Analysis, Test Items
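The person-fit statistic l[subscript z] used in this study is the standardized log-likelihood of an examinee's response pattern, which is straightforward to compute once item probabilities at the examinee's estimated ability are available. The function below is a generic sketch of that formula; it does not reproduce the study's calibration or flagging rules.

```python
import numpy as np

def lz_statistic(u, p):
    """Standardized log-likelihood person-fit statistic l_z for one examinee.

    u : vector of 0/1 item responses
    p : model-implied probabilities of a correct response at the examinee's
        estimated ability (e.g., from an operational IRT calibration)
    """
    u = np.asarray(u, dtype=float)
    p = np.asarray(p, dtype=float)
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))    # observed log-likelihood
    e_l0 = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))  # its expectation
    v_l0 = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)   # its variance
    return (l0 - e_l0) / np.sqrt(v_l0)

# Large negative values of l_z indicate aberrant (misfitting) response patterns.
```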
Peer reviewed
Murphy, Daniel L.; Beretvas, S. Natasha – Applied Measurement in Education, 2015
This study examines the use of cross-classified random effects models (CCrem) and cross-classified multiple membership random effects models (CCMMrem) to model rater bias and estimate teacher effectiveness. Effect estimates are compared using classical test theory (CTT) versus item response theory (IRT) scaling methods and three models (i.e., conventional multilevel…
Descriptors: Teacher Effectiveness, Comparative Analysis, Hierarchical Linear Modeling, Test Theory
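A bare-bones version of the cross-classified random effects idea, with scores cross-classified by teacher and rater so that rater severity is separated from teacher effectiveness, can be expressed through variance components as sketched below. The data frame and column names are assumptions, and the study's CCMMrem extension and CTT-versus-IRT scaling comparison are not shown.

```python
import pandas as pd
import statsmodels.formula.api as smf

def crossed_teacher_rater_model(df: pd.DataFrame):
    """Cross-classified random effects: score ~ 1 + (1 | teacher) + (1 | rater).

    df is assumed to hold one row per scored observation with columns
    score, teacher, and rater (illustrative names).
    """
    df = df.assign(all=1)  # single dummy group so teacher and rater enter as crossed effects
    fit = smf.mixedlm(
        "score ~ 1",
        data=df,
        groups="all",
        re_formula="0",                            # drop the dummy group's own intercept
        vc_formula={"teacher": "0 + C(teacher)",   # teacher effectiveness component
                    "rater": "0 + C(rater)"},      # rater bias/severity component
    ).fit()
    return fit.summary()
```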
Peer reviewed
Lee, Jaekyung; Liu, Xiaoyan; Amo, Laura Casey; Wang, Weichun Leilani – Educational Policy, 2014
Drawing on national and state assessment datasets in reading and math, this study tested "external" versus "internal" standards-based education models. The goal was to understand whether and how student performance standards work in multilayered school systems under No Child Left Behind Act of 2001 (NCLB). Under the…
Descriptors: State Standards, Academic Standards, Student Evaluation, Academic Achievement
Cho, Sun-Joo; Cohen, Allan S.; Bottge, Brian – Grantee Submission, 2013
A multilevel latent transition analysis (LTA) with a mixture IRT measurement model (MixIRTM) is described for investigating the effectiveness of an intervention. The addition of a MixIRTM to the multilevel LTA permits consideration of both potential heterogeneity in students' response to instructional intervention and a methodology for…
Descriptors: Intervention, Item Response Theory, Statistical Analysis, Models
Peer reviewed
PDF on ERIC
Yen, Wendy M.; Lall, Venessa F.; Monfils, Lora – ETS Research Report Series, 2012
Alternatives to vertical scales are compared for measuring longitudinal academic growth and for producing school-level growth measures. The alternatives examined were empirical cross-grade regression, ordinary least squares and logistic regression, and multilevel models. The student data used for the comparisons were Arabic Grades 4 to 10 in…
Descriptors: Foreign Countries, Scaling, Item Response Theory, Test Interpretation
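One simplified reading of the empirical cross-grade regression alternative listed in this report is sketched here: regress each student's later-grade score on the prior-grade score and aggregate the conditional residuals to the school level as a growth measure. The data frame and its column names are assumptions, and this is not the report's exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def school_growth(df: pd.DataFrame) -> pd.Series:
    """Cross-grade regression growth: mean conditional residual per school.

    df is assumed to hold one row per student with columns
    score_next (grade g+1), score_prior (grade g), and school (illustrative names).
    """
    fit = smf.ols("score_next ~ score_prior", data=df).fit()
    residual = df["score_next"] - fit.fittedvalues  # growth beyond the cross-grade expectation
    return residual.groupby(df["school"]).mean()
```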
Wale, Christine M. – ProQuest LLC, 2013
Digital games are widely popular, and interest in their use in education has increased. Digital games are thought to be powerful instructional tools because they promote active learning and feedback, provide meaningful contexts to situate knowledge, create engagement and intrinsic motivation, and have the ability to individualize instruction…
Descriptors: Academic Achievement, Mathematics, Mathematics Instruction, Mathematical Aptitude
Peer reviewed
PDF on ERIC
Johnson, Matthew S.; Jenkins, Frank – ETS Research Report Series, 2005
Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…
Descriptors: Bayesian Statistics, Hierarchical Linear Modeling, National Competency Tests, Sampling
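As a small illustration of why the sampling design matters for estimation, the sketch below contrasts an unweighted mean with a design-weighted mean under unequal selection probabilities. The simulated scores, strata, and weights are invented for the example and have nothing to do with actual NAEP data or the report's hierarchical estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two equally sized population strata sampled at different rates (illustrative numbers).
low = rng.normal(240, 30, size=800)    # over-sampled stratum
high = rng.normal(280, 30, size=200)   # under-sampled stratum
y = np.concatenate([low, high])
w = np.concatenate([np.full(800, 1.0), np.full(200, 4.0)])  # inverse selection probabilities

print("unweighted mean:", y.mean())                  # biased toward the over-sampled stratum
print("weighted mean:  ", np.average(y, weights=w))  # design-consistent estimate
```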