Laws, Policies, & Programs
No Child Left Behind Act 2001
Showing 1 to 15 of 82 results
Peer reviewed
Direct link
Han, Yuting; Zhang, Jihong; Jiang, Zhehan; Shi, Dexin – Educational and Psychological Measurement, 2023
In the literature of modern psychometric modeling, mostly related to item response theory (IRT), the fit of a model is evaluated through known indices, such as X², M2, and root mean square error of approximation (RMSEA) for absolute assessments, as well as Akaike information criterion (AIC), consistent AIC (CAIC), and Bayesian…
Descriptors: Goodness of Fit, Psychometrics, Error of Measurement, Item Response Theory
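The relative fit indices enumerated in the abstract above follow standard definitions; as a hedged illustration (the formulas below are the textbook definitions, not taken from the cited article), each penalizes the maximized log-likelihood by the number of free parameters:

```python
import math

def information_criteria(log_lik, k, n):
    """Standard relative fit indices for model comparison.

    log_lik: maximized log-likelihood of the fitted model
    k: number of free parameters
    n: sample size
    """
    aic = -2 * log_lik + 2 * k                      # Akaike information criterion
    bic = -2 * log_lik + k * math.log(n)            # Bayesian information criterion
    caic = -2 * log_lik + k * (math.log(n) + 1)     # consistent AIC (Bozdogan)
    return {"AIC": aic, "BIC": bic, "CAIC": caic}

# Example: a model with log-likelihood -100, 5 parameters, n = 50
ic = information_criteria(-100.0, 5, 50)
```

Lower values indicate better fit relative to model complexity; BIC and CAIC penalize parameters more heavily than AIC as n grows.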
Xue Zhang; Chun Wang – Grantee Submission, 2022
Item-level fit analysis not only serves as a complementary check to global fit analysis, it is also essential in scale development because the fit results will guide item revision and/or deletion (Liu & Maydeu-Olivares, 2014). During data collection, missing response data are likely to occur for various reasons. Chi-square-based item fit…
Descriptors: Goodness of Fit, Item Response Theory, Scores, Test Length
Merkle, Edgar C.; Fitzsimmons, Ellen; Uanhoro, James; Goodrich, Ben – Grantee Submission, 2021
Structural equation models comprise a large class of popular statistical models, including factor analysis models, certain mixed models, and extensions thereof. Model estimation is complicated by the fact that we typically have multiple interdependent response variables and multiple latent variables (which may also be called random effects or…
Descriptors: Bayesian Statistics, Structural Equation Models, Psychometrics, Factor Analysis
Peer reviewed
Direct link
McCloskey, George – Journal of Psychoeducational Assessment, 2017
This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…
Descriptors: Achievement Tests, Error Patterns, Comparative Analysis, Psychometrics
Peer reviewed
Direct link
Heritage, Margaret; Kingston, Neal M. – Journal of Educational Measurement, 2019
Classroom assessment and large-scale assessment have, for the most part, existed in mutual isolation. Some experts have felt this is for the best and others have been concerned that the schism limits the potential contribution of both forms of assessment. Margaret Heritage has long been a champion of best practices in classroom assessment. Neal…
Descriptors: Measurement, Psychometrics, Context Effect, Classroom Environment
Peer reviewed
Direct link
Leaman, Marion C.; Edmonds, Lisa A. – Journal of Speech, Language, and Hearing Research, 2021
Purpose: This study evaluated interrater reliability (IRR) and test-retest stability (TRTS) of seven linguistic measures (percent correct information units, relevance, subject-verb-object, complete utterance, grammaticality, referential cohesion, global coherence), and communicative success in unstructured conversation and in a story narrative…
Descriptors: Aphasia, Psychometrics, Correlation, Speech Language Pathology
Peer reviewed
Direct link
Salisbury, Jason; Goff, Peter; Blitz, Mark – Journal of School Leadership, 2019
Initiatives to increase leadership accountability coupled with efforts to promote data-driven leadership have led to widespread adoption of instruments to assess school leaders. In this article, we present a decision matrix that practitioners and researchers can use to facilitate instrument selection. Our decision matrix focuses on the…
Descriptors: Comparative Analysis, Feedback (Response), Accountability, Instructional Leadership
Peer reviewed
PDF on ERIC Download full text
Khamboonruang, Apichat – rEFLections, 2022
Although much research has compared the functioning between analytic and holistic rating scales, little research has compared the functioning of binary rating scales with other types of rating scales. This quantitative study set out to preliminarily and comparatively validate binary and analytic rating scales intended for use in formative…
Descriptors: Writing Evaluation, Evaluation Methods, Second Language Learning, Second Language Instruction
Peer reviewed
PDF on ERIC Download full text
Martin-Fernandez, Manuel; Revuelta, Javier – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
This study compares the performance of two estimation algorithms of new usage, the Metropolis-Hastings Robins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two consolidated algorithms in the psychometric literature, the marginal likelihood via EM algorithm (MML-EM) and the Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Descriptors: Bayesian Statistics, Item Response Theory, Models, Comparative Analysis
Peer reviewed
Direct link
Blikstein, Paulo; Kabayadondo, Zaza; Martin, Andrew; Fields, Deborah – Journal of Engineering Education, 2017
Background: As the maker movement is increasingly adopted into K-12 schools, students are developing new competences in exploration and fabrication technologies. This study assesses learning with these technologies in K-12 makerspaces and FabLabs. Purpose: Our study describes the iterative process of developing an assessment instrument for this…
Descriptors: Technological Literacy, Engineering Education, Skill Development, Entrepreneurship
Peer reviewed
Direct link
Raikes, Abbie; Sayre, Rebecca; Davis, Dawn; Anderson, Kate; Hyson, Marilou; Seminario, Evelyn; Burton, Anna – Early Years: An International Journal of Research and Development, 2019
Measuring Early Learning Quality & Outcomes (MELQO) was initiated to address needs for child development and quality of early childhood education (ECE) data, specifically for low- and middle-income countries. Drawing from existing tools, MELQO convened a consortium to create open-source tools to be adapted to national contexts, simultaneously…
Descriptors: Educational Quality, Outcomes of Education, Child Development, Early Childhood Education
Peer reviewed
Direct link
Fletcher, Jack M.; Stuebing, Karla K.; Barth, Amy E.; Miciak, Jeremy; Francis, David J.; Denton, Carolyn – Topics in Language Disorders, 2014
Purpose: Agreement across methods for identifying students as inadequate responders or as learning disabled is often poor. We report (1) an empirical examination of final status (postintervention benchmarks) and dual-discrepancy growth methods based on growth during the intervention and final status for assessing response to intervention and (2) a…
Descriptors: Response to Intervention, Comparative Analysis, Simulation, Psychometrics
Peer reviewed
Direct link
Hou, Likun; de la Torre, Jimmy; Nandakumar, Ratna – Journal of Educational Measurement, 2014
Analyzing examinees' responses using cognitive diagnostic models (CDMs) has the advantage of providing diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this article, the Wald test is proposed to examine DIF in the context of CDMs. This study…
Descriptors: Test Bias, Models, Simulation, Error Patterns
Peer reviewed
PDF on ERIC Download full text
Liu, Yan; Zumbo, Bruno D.; Gustafson, Paul; Huang, Yi; Kroc, Edward; Wu, Amery D. – Practical Assessment, Research & Evaluation, 2016
A variety of differential item functioning (DIF) methods have been proposed and used for ensuring that a test is fair to all test takers in a target population in the situations of, for example, a test being translated to other languages. However, once a method flags an item as DIF, it is difficult to conclude that the grouping variable (e.g.,…
Descriptors: Test Items, Test Bias, Probability, Scores
Peer reviewed
Direct link
Jennrich, Robert I.; Bentler, Peter M. – Psychometrika, 2012
Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford ("Psychometrika" 2:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler ("Psychometrika" 76:537-549, 2011) introduced an exploratory form of bi-factor…
Descriptors: Factor Structure, Factor Analysis, Models, Comparative Analysis