Showing 1 to 15 of 20 results
Peer reviewed
Direct link
Shi, Dingjing; Tong, Xin – Sociological Methods & Research, 2022
This study proposes a two-stage causal modeling approach with instrumental variables to mitigate selection bias, provide correct standard error estimates, and address nonnormal and missing data issues simultaneously. Bayesian methods are used for model estimation. Robust methods with Student's "t" distributions are used to account for nonnormal…
Descriptors: Bayesian Statistics, Monte Carlo Methods, Computer Software, Causal Models
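A minimal sketch of the two-stage instrumental-variable logic described in the Shi and Tong abstract above, on simulated data. The paper's actual estimation is Bayesian with Student's "t" distributions; this classical two-stage least squares version, with illustrative variable names, only shows why the two stages remove the selection bias:

```python
# Two-stage IV sketch (the paper uses Bayesian estimation with Student's t
# distributions; this 2SLS version only illustrates the two-stage logic).
import numpy as np

rng = np.random.default_rng(0)
n = 2000

z = rng.normal(size=n)                 # instrument
u = rng.normal(size=n)                 # unobserved confounder (selection)
x = 0.8 * z + u + rng.normal(size=n)   # treatment, confounded by u
y = 1.5 * x + u + rng.normal(size=n)   # outcome; true effect of x is 1.5

# Stage 1: regress treatment on the instrument, keep fitted values.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress the outcome on the fitted treatment values.
X_hat = np.column_stack([np.ones(n), x_hat])
beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]

naive = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
print("naive OLS slope:", naive[1])    # biased upward by the confounder
print("2SLS slope:     ", beta[1])     # close to the true 1.5
```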
Peer reviewed
Direct link
Shen, Ting; Konstantopoulos, Spyros – Journal of Experimental Education, 2022
Large-scale education data are collected via complex sampling designs that incorporate clustering and unequal probability of selection. Multilevel models are often utilized to account for clustering effects. The probability weighted approach (PWA) has been frequently used to deal with the unequal probability of selection. In this study, we examine…
Descriptors: Data Collection, Educational Research, Hierarchical Linear Modeling, Bayesian Statistics
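A toy illustration of the probability-weighted idea in the Shen and Konstantopoulos abstract above: units sampled with unequal probabilities receive design weights of 1/p. The paper works with multilevel models; this single-level simulation, with invented names and numbers, only shows why the weights matter under informative selection:

```python
# Probability-weighted regression sketch: weight each sampled unit by 1/p_i
# so the weighted estimate targets the full population.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
x = rng.normal(size=N)
y = 2.0 + 0.5 * x + rng.normal(size=N)     # population slope is 0.5

# Selection probability depends on the outcome (informative sampling).
p = 1 / (1 + np.exp(-(y - 2)))             # higher y -> more likely sampled
s = rng.random(N) < p
xs, ys, w = x[s], y[s], 1.0 / p[s]         # design weights = 1/p

X = np.column_stack([np.ones(s.sum()), xs])
unweighted = np.linalg.lstsq(X, ys, rcond=None)[0]

Wsqrt = np.sqrt(w)[:, None]                # weighted least squares
weighted = np.linalg.lstsq(X * Wsqrt, ys * Wsqrt.ravel(), rcond=None)[0]

print("unweighted slope:", unweighted[1])  # attenuated by selection
print("weighted slope:  ", weighted[1])    # close to the population 0.5
```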
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2024
Longitudinal models of individual growth typically emphasize between-person predictors of change but ignore how growth may vary "within" persons because each person contributes only one data point at each time to the model. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development
Peer reviewed
Direct link
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Applied Measurement in Education, 2024
Longitudinal models typically emphasize between-person predictors of change but ignore how growth varies "within" persons because each person contributes only one data point at each time. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift over time. While traditionally…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development
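The two Gilbert, Kim, and Miratrix entries above describe letting relative item performance shift over time in a multi-item growth model. A rough sketch of that idea: simulate Rasch-type responses with one drifting item and recover the item-by-wave shift. The logistic fixed-effects fit via statsmodels is an assumption of this sketch, not the authors' software, and person ability is ignored in the fit for brevity:

```python
# Item-by-wave drift sketch: a person-by-item logistic model with
# item-by-wave terms lets relative item difficulty shift over time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_persons, n_items = 300, 10
theta = rng.normal(size=n_persons)           # person ability at wave 0
growth = 0.5                                 # average ability growth by wave 1
b = rng.normal(size=n_items)                 # item easiness at wave 0
drift = np.zeros(n_items)
drift[0] = 0.8                               # item 0 gets easier over time

rows = []
for wave in (0, 1):
    for i in range(n_persons):
        for j in range(n_items):
            eta = theta[i] + growth * wave + b[j] + drift[j] * wave
            rows.append(dict(person=i, item=j, wave=wave,
                             y=int(rng.random() < 1 / (1 + np.exp(-eta)))))
df = pd.DataFrame(rows)

# Marginal fit with item and item-by-wave terms; omitting person ability
# attenuates the coefficients but still exposes the drifting item.
fit = smf.logit("y ~ wave * C(item)", data=df).fit(disp=0)
print(fit.params.filter(like=":"))           # item-by-wave drift estimates
```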
Peer reviewed
Direct link
Pavel Chernyavskiy; Traci S. Kutaka; Carson Keeter; Julie Sarama; Douglas Clements – Grantee Submission, 2024
When researchers code behavior that is undetectable or falls outside of the validated ordinal scale, the resultant outcomes often suffer from informative missingness. Incorrect analysis of such data can lead to biased arguments around efficacy and effectiveness in the context of experimental and intervention research. Here, we detail a new…
Descriptors: Bayesian Statistics, Mathematics Instruction, Learning Trajectories, Item Response Theory
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
Peer reviewed
Direct link
Lockwood, J. R.; Castellano, Katherine E.; McCaffrey, Daniel F. – Journal of Educational and Behavioral Statistics, 2022
Many states and school districts in the United States use standardized test scores to compute annual measures of student achievement progress and then use school-level averages of these growth measures for various reporting and diagnostic purposes. These aggregate growth measures can vary consequentially from year to year for the same school,…
Descriptors: Accuracy, Prediction, Programming Languages, Standardized Tests
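The Lockwood, Castellano, and McCaffrey abstract above notes that school-level average growth measures vary consequentially from year to year. One standard stabilization for such noisy aggregates, not necessarily the authors' estimator, is empirical Bayes shrinkage toward the grand mean; a simulated sketch:

```python
# Empirical Bayes shrinkage sketch for noisy school-level growth averages.
import numpy as np

rng = np.random.default_rng(3)
n_schools = 200
true_growth = rng.normal(0.0, 0.15, n_schools)       # true school effects
n_students = rng.integers(20, 200, n_schools)        # school sizes
sigma = 0.8                                          # student-level SD
obs = true_growth + rng.normal(0, sigma / np.sqrt(n_students))

# Method-of-moments estimate of the between-school variance tau^2.
se2 = sigma**2 / n_students                          # sampling variances
tau2 = max(obs.var(ddof=1) - se2.mean(), 1e-8)

# Shrink each school toward the grand mean by its reliability.
shrink = tau2 / (tau2 + se2)
eb = obs.mean() + shrink * (obs - obs.mean())

print("RMSE raw:", np.sqrt(np.mean((obs - true_growth) ** 2)))
print("RMSE EB: ", np.sqrt(np.mean((eb - true_growth) ** 2)))   # smaller
```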
Peer reviewed
Direct link
Schweig, Jonathan David; Pane, John F. – International Journal of Research & Method in Education, 2016
Demands for scientific knowledge of what works in educational policy and practice have driven interest in quantitative investigations of educational outcomes, and randomized controlled trials (RCTs) have proliferated under these conditions. In educational settings, even when individuals are randomized, both experimental and control students are…
Descriptors: Randomized Controlled Trials, Educational Research, Multivariate Analysis, Models
Peer reviewed
Direct link
Smith, Lindsey J. Wolff; Beretvas, S. Natasha – Journal of Experimental Education, 2017
Conventional multilevel modeling works well with purely hierarchical data; however, pure hierarchies rarely exist in real datasets. Applied researchers employ ad hoc procedures to create purely hierarchical data. For example, applied educational researchers either delete mobile participants' data from the analysis or identify the student only with…
Descriptors: Student Mobility, Academic Achievement, Simulation, Influences
Peer reviewed
Direct link
Hawley, Leslie R.; Bovaird, James A.; Wu, ChaoRong – Applied Measurement in Education, 2017
Value-added assessment methods have been criticized by researchers and policy makers for a number of reasons. One issue is the sensitivity of model results across different outcome measures. This study examined the utility of incorporating multivariate latent variable approaches within a traditional value-added framework. We evaluated the…
Descriptors: Value Added Models, Reliability, Multivariate Analysis, Scaling
Peer reviewed
Direct link
Paz, Luciano; Goldin, Andrea P.; Diuk, Carlos; Sigman, Mariano – Cognitive Science, 2015
Seventy-three children between 6 and 7 years of age were presented with a problem having ambiguous subgoal ordering. Performance in this task showed reliable fingerprints: (a) a non-monotonic dependence of performance on the distance between the beginning and end states of the problem, (b) very high levels of performance when the…
Descriptors: Grade 1, Elementary School Students, Play, Games
Peer reviewed
Direct link
Schatschneider, Christopher; Wagner, Richard K.; Hart, Sara A.; Tighe, Elizabeth L. – Scientific Studies of Reading, 2016
The present study employed data simulation techniques to investigate the 1-year stability of alternative classification schemes for identifying children with reading disabilities. Classification schemes investigated include low performance, unexpected low performance, dual-discrepancy, and a rudimentary form of constellation model of reading…
Descriptors: Reading Difficulties, Learning Disabilities, At Risk Students, Disability Identification
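A compact version of the simulation logic in the Schatschneider et al. abstract above: draw correlated reading scores for two years, apply a low-performance cutoff each year, and check how many children keep their classification. The correlation and cutoff are illustrative values, not the paper's:

```python
# Classification-stability simulation sketch for a low-performance scheme.
import numpy as np

rng = np.random.default_rng(4)
n, r, cutoff = 100_000, 0.8, -1.0        # test-retest r = .8; cut ~16th pct

y1 = rng.normal(size=n)                  # year 1 reading scores
y2 = r * y1 + np.sqrt(1 - r**2) * rng.normal(size=n)   # correlated year 2

rd1, rd2 = y1 < cutoff, y2 < cutoff      # "low performance" each year
stability = (rd1 & rd2).sum() / rd1.sum()
print(f"identified in year 1: {rd1.mean():.1%}")
print(f"still identified in year 2: {stability:.1%}")  # well below 100%
```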
Peer reviewed
PDF on ERIC Download full text
Kaplan, David; Chen, Jianshen – Society for Research on Educational Effectiveness, 2013
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
Descriptors: Bayesian Statistics, Models, Probability, Monte Carlo Methods
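A sketch of the model-averaging idea in the Kaplan and Chen abstract above: fit several candidate propensity score models, turn each fit into an approximate posterior model probability, and average the estimated propensity scores. A BIC approximation stands in here for their full Bayesian treatment, and the covariate sets are invented for illustration:

```python
# Bayesian model averaging sketch over candidate propensity score models,
# using BIC-based approximate posterior model probabilities.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
t = (rng.random(n) < 1 / (1 + np.exp(-(0.7 * x1 + 0.3 * x2)))).astype(int)

# Candidate propensity models (covariate sets are illustrative).
designs = {
    "x1":    sm.add_constant(np.column_stack([x1])),
    "x2":    sm.add_constant(np.column_stack([x2])),
    "x1+x2": sm.add_constant(np.column_stack([x1, x2])),
}
fits = {k: sm.Logit(t, X).fit(disp=0) for k, X in designs.items()}

# Approximate posterior model probabilities from BIC differences.
bics = np.array([f.bic for f in fits.values()])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Model-averaged propensity score for each unit.
ps = sum(wi * f.predict() for wi, f in zip(w, fits.values()))
for k, wi in zip(fits, w):
    print(f"approx. posterior probability of model {k}: {wi:.3f}")
```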
Yin, Liqun – ProQuest LLC, 2013
In recent years, many states have adopted Item Response Theory (IRT) based vertically scaled tests due to their compelling features in a growth-based accountability context. However, selection of a practical and effective calibration/scaling method and proper understanding of issues with possible multidimensionality in the test data are critical to…
Descriptors: Item Response Theory, Scaling, Robustness (Statistics), Monte Carlo Methods
Peer reviewed
PDF on ERIC Download full text
May, Henry – Society for Research on Educational Effectiveness, 2014
Interest in variation in program impacts (How big is it? What might explain it?) has inspired recent work on the analysis of data from multi-site experiments. One critical aspect of this problem involves the use of random- or fixed-effect estimates to visualize the distribution of impact estimates across a sample of sites. Unfortunately, unless the…
Descriptors: Educational Research, Program Effectiveness, Research Problems, Computation
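The May abstract above concerns using random- or fixed-effect estimates to visualize cross-site impact variation. A simulated sketch of the underlying tension, with illustrative numbers: per-site (fixed-effect) estimates carry sampling error and overstate the spread of true impacts, while shrunken empirical Bayes (random-effect) estimates understate it:

```python
# Fixed-effect vs. empirical Bayes estimates of site-level impacts.
import numpy as np

rng = np.random.default_rng(6)
n_sites, tau, se = 50, 0.20, 0.15          # true impact SD and per-site SE
true = rng.normal(0.10, tau, n_sites)      # true site impacts
fixed = true + rng.normal(0, se, n_sites)  # per-site (fixed-effect) estimates

lam = tau**2 / (tau**2 + se**2)            # shrinkage (reliability) factor
eb = fixed.mean() + lam * (fixed - fixed.mean())

print("SD of true impacts:          ", true.std(ddof=1))
print("SD of fixed-effect estimates:", fixed.std(ddof=1))  # too large
print("SD of EB estimates:          ", eb.std(ddof=1))     # too small
```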