Showing 1 to 15 of 32 results
Peer reviewed
Direct link
Milica Miocevic; Fayette Klaassen; Mariola Moeyaert; Gemma G. M. Geuke – Journal of Experimental Education, 2025
Mediation analysis in Single Case Experimental Designs (SCEDs) evaluates intervention mechanisms for individuals. Despite recent methodological developments, no clear guidelines exist for maximizing power to detect the indirect effect in SCEDs. This study compares frequentist and Bayesian methods, determining (1) minimum required sample size to…
Descriptors: Research Design, Mediation Theory, Statistical Analysis, Simulation
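The indirect effect these authors study can be illustrated with a generic Monte Carlo interval for the product of two paths (treatment-to-mediator and mediator-to-outcome). This is a minimal frequentist sketch, not the SCED-specific or Bayesian methods the article compares; all numeric inputs are made up.

```python
import random

def indirect_effect_mc_ci(a, se_a, b, se_b, draws=100_000, alpha=0.05, seed=1):
    """Monte Carlo confidence interval for the indirect effect a*b.

    a, b: path estimates (treatment->mediator, mediator->outcome);
    se_a, se_b: their standard errors. Each path is drawn from a normal
    sampling distribution and percentiles of the products form the CI.
    """
    rng = random.Random(seed)
    products = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b) for _ in range(draws))
    lo = products[int(alpha / 2 * draws)]
    hi = products[int((1 - alpha / 2) * draws) - 1]
    return a * b, (lo, hi)

point, (lo, hi) = indirect_effect_mc_ci(a=0.5, se_a=0.1, b=0.4, se_b=0.1)
```

With these made-up inputs the interval excludes zero, which is the kind of power question (detecting a nonzero indirect effect) the study addresses.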
Peer reviewed
PDF on ERIC Download full text
Deke, John; Finucane, Mariel; Thal, Daniel – National Center for Education Evaluation and Regional Assistance, 2022
BASIE is a framework for interpreting impact estimates from evaluations. It is an alternative to null hypothesis significance testing. This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses. The guide…
Descriptors: Bayesian Statistics, Educational Research, Data Interpretation, Hypothesis Testing
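The core BASIE move, replacing a significance test with a posterior probability for the impact, can be sketched with a conjugate normal prior and normal likelihood. This is a simplified illustration of the idea, not the guide's full procedure, and the numeric inputs below are invented.

```python
import math

def posterior_prob_positive(prior_mean, prior_sd, estimate, se):
    """Posterior probability that a true impact exceeds zero, using a
    conjugate normal prior on the impact and a normal likelihood for
    the impact estimate (a simplified BASIE-style calculation)."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = 1.0 / se**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * estimate)
    post_sd = math.sqrt(post_var)
    # P(impact > 0) under the normal posterior
    return 0.5 * (1.0 + math.erf(post_mean / (post_sd * math.sqrt(2))))

# Skeptical prior centered at zero; made-up impact estimate of 0.15 (se 0.08)
p = posterior_prob_positive(prior_mean=0.0, prior_sd=0.10, estimate=0.15, se=0.08)
```

Reporting `p` directly ("the probability the intervention had a positive effect") is the kind of statement BASIE substitutes for a p-value.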
Peer reviewed
Direct link
Hoofs, Huub; van de Schoot, Rens; Jansen, Nicole W. H.; Kant, IJmert – Educational and Psychological Measurement, 2018
Bayesian confirmatory factor analysis (CFA) offers an alternative to frequentist CFA based on, for example, maximum likelihood estimation for the assessment of reliability and validity of educational and psychological measures. For increasing sample sizes, however, the applicability of current fit statistics evaluating model fit within Bayesian…
Descriptors: Goodness of Fit, Bayesian Statistics, Factor Analysis, Sample Size
Peer reviewed
Direct link
Man, Kaiwen; Harring, Jeffery R.; Ouyang, Yunbo; Thomas, Sarah L. – International Journal of Testing, 2018
Many important high-stakes decisions--college admission, academic performance evaluation, and even job promotion--depend on accurate and reliable scores from valid large-scale assessments. However, examinees sometimes cheat by copying answers from other test-takers or practicing with test items ahead of time, which can undermine the effectiveness…
Descriptors: Reaction Time, High Stakes Tests, Test Wiseness, Cheating
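Response-time-based screening of the kind this article motivates can be sketched as a simple z-score flag against a reference log-normal timing distribution. This is a generic screen, not the article's specific detection method; the reference mean and SD would in practice come from a calibration sample.

```python
def flag_fast_responses(log_times, mean_log, sd_log, z_cut=-2.0):
    """Flag item responses whose log response time is unusually fast
    relative to a reference distribution of log response times
    (a simple z-score screen; made-up reference parameters)."""
    flags = []
    for t in log_times:
        z = (t - mean_log) / sd_log
        flags.append(z < z_cut)
    return flags

# Example: the third response is far faster than typical
flags = flag_fast_responses([3.1, 2.9, 1.2], mean_log=3.0, sd_log=0.4)
# flags -> [False, False, True]
```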
Peer reviewed
PDF on ERIC Download full text
Zhang, Zhidong – International Education Studies, 2018
This study explored a diagnostic assessment method that emphasized the cognitive process of algebra learning. The study utilized a design and a theory-driven model to examine the content knowledge. Using the theory-driven model, the thinking skills of algebra learning were also examined. A Bayesian network model was applied to represent the theory…
Descriptors: Algebra, Bayesian Statistics, Scores, Mathematics Achievement
Christ, Theodore J.; Desjardins, Christopher David – Journal of Psychoeducational Assessment, 2018
Curriculum-Based Measurement of Oral Reading (CBM-R) is often used to monitor student progress and guide educational decisions. Ordinary least squares regression (OLSR) is the most widely used method to estimate the slope, or rate of improvement (ROI), even though published research demonstrates OLSR's lack of validity and reliability, and…
Descriptors: Bayesian Statistics, Curriculum Based Assessment, Oral Reading, Least Squares Statistics
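The OLS slope (rate of improvement, ROI) that this article critiques as a baseline can be computed directly from a series of CBM-R scores. This sketch shows only the OLS baseline, not the article's alternative estimators; the weekly scores are invented.

```python
def ols_slope(weeks, scores):
    """Ordinary least squares slope (rate of improvement, ROI) for a
    series of CBM-R scores: the closed-form simple-regression slope."""
    n = len(weeks)
    mx = sum(weeks) / n
    my = sum(scores) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(weeks, scores))
    sxx = sum((x - mx) ** 2 for x in weeks)
    return sxy / sxx

# Words-correct-per-minute over six weekly probes (made-up data)
roi = ols_slope([0, 1, 2, 3, 4, 5], [42, 44, 45, 49, 50, 52])
```

An ROI of roughly two words correct per minute per week would then feed the progress-monitoring decision; the article's point is that with few, noisy probes this slope is unreliable.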
Peer reviewed
PDF on ERIC Download full text
Miratrix, Luke; Feller, Avi; Pillai, Natesh; Pati, Debdeep – Society for Research on Educational Effectiveness, 2016
Modeling the distribution of site-level effects is an important problem, but it is also an incredibly difficult one. Current methods rely on distributional assumptions in multilevel models for estimation. The hope is that the partial pooling of site-level estimates with overall estimates, designed to take into account individual variation as…
Descriptors: Probability, Models, Statistical Distributions, Bayesian Statistics
Peer reviewed
Direct link
Liang, Longjuan; Browne, Michael W. – Journal of Educational and Behavioral Statistics, 2015
If standard two-parameter item response functions are employed in the analysis of a test with some newly constructed items, it can be expected that, for some items, the item response function (IRF) will not fit the data well. This lack of fit can also occur when standard IRFs are fitted to personality or psychopathology items. When investigating…
Descriptors: Item Response Theory, Statistical Analysis, Goodness of Fit, Bayesian Statistics
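The standard two-parameter item response function referenced in this abstract has a simple closed form, shown here as a plain sketch (the article's contribution concerns what to do when this function does not fit, which this snippet does not address).

```python
import math

def irf_2pl(theta, a, b):
    """Standard two-parameter logistic item response function:
    probability of a correct response given ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

p = irf_2pl(theta=0.0, a=1.2, b=0.0)  # ability equal to difficulty -> 0.5
```

Lack of fit means observed proportions correct, plotted against ability, depart systematically from this S-shaped curve.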
Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul; Wooldridge, Jeffrey M. – Education Policy Center at Michigan State University, 2014
Empirical Bayes (EB) estimation is a widely used procedure to calculate teacher value-added. It is primarily viewed as a way to make imprecise estimates more reliable. In this paper we review the theory of EB estimation and use simulated data to study its ability to properly rank teachers. We compare the performance of EB estimators with that of…
Descriptors: Teacher Evaluation, Bayesian Statistics, Comparative Analysis, Teacher Effectiveness
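The EB shrinkage these authors review has a standard closed form: each noisy estimate is pulled toward the grand mean in proportion to its unreliability. This sketch shows that formula with invented numbers; it is not the paper's simulation design.

```python
def eb_shrink(estimates, ses, var_true):
    """Empirical Bayes shrinkage of noisy value-added estimates toward
    the grand mean. `var_true` is the (estimated) variance of true
    teacher effects; reliability = var_true / (var_true + se^2)."""
    grand = sum(estimates) / len(estimates)
    shrunk = []
    for est, se in zip(estimates, ses):
        reliability = var_true / (var_true + se**2)
        shrunk.append(grand + reliability * (est - grand))
    return shrunk

# A noisy estimate (se=0.30) shrinks far more than a precise one (se=0.05)
shrunk = eb_shrink([0.40, -0.40], [0.30, 0.05], var_true=0.04)
```

Because shrinkage depends on each teacher's standard error, two teachers with the same raw estimate can end up ranked differently, which is exactly the ranking behavior the paper investigates.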
Peer reviewed
Direct link
McNeish, Daniel – Review of Educational Research, 2017
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Descriptors: Models, Statistical Analysis, Sampling, Sample Size
Peer reviewed
PDF on ERIC Download full text
Andrade, Alejandro; Danish, Joshua A.; Maltese, Adam V. – Journal of Learning Analytics, 2017
Interactive learning environments with body-centric technologies lie at the intersection of the design of embodied learning activities and multimodal learning analytics. Sensing technologies can generate large amounts of fine-grained data automatically captured from student movements. Researchers can use these fine-grained data to create a…
Descriptors: Measurement, Interaction, Models, Educational Environment
Crawford, Aaron – ProQuest LLC, 2014
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
Descriptors: Bayesian Statistics, Networks, Models, Goodness of Fit
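The PPMC framework this dissertation works in follows a generic recipe: simulate replicated data from posterior draws, compute a discrepancy measure on each replicate, and report the share that meets or exceeds the observed value. The sketch below shows that recipe with a toy Bernoulli example; the specific discrepancy measures and Bayesian network models compared in the study are not reproduced here.

```python
import random

def ppp_value(observed_stat, posterior_draws, simulate, discrepancy, seed=0):
    """Posterior predictive p-value: share of replicated data sets whose
    discrepancy meets or exceeds the observed statistic.
    `simulate(params, rng)` generates one replicated data set per
    posterior draw; `discrepancy` maps a data set to a scalar."""
    rng = random.Random(seed)
    exceed = 0
    for params in posterior_draws:
        rep = simulate(params, rng)
        if discrepancy(rep) >= observed_stat:
            exceed += 1
    return exceed / len(posterior_draws)

# Toy check: discrepancy = mean of 50 Bernoulli(p) responses
draws = [0.6] * 200  # stand-in for posterior draws of a success probability
sim = lambda p, rng: [1 if rng.random() < p else 0 for _ in range(50)]
disc = lambda data: sum(data) / len(data)
p_val = ppp_value(0.6, draws, sim, disc)
```

A posterior predictive p-value near 0.5 indicates the model reproduces that statistic well; values near 0 or 1 signal the kind of data-model misfit the study tries to detect.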
Peer reviewed
PDF on ERIC Download full text
Pattanayak, Cassandra W.; Rubin, Donald B.; Zell, Elizabeth R. – Society for Research on Educational Effectiveness, 2013
In educational research, outcome measures are often estimated across separate studies or across schools, districts, or other subgroups to assess the overall causal effect of an active treatment versus a control treatment. Students may be partitioned into such strata or blocks by experimental design, or separated into studies within a…
Descriptors: Computation, Outcome Measures, Statistical Analysis, Graduation
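Combining stratum- or study-level impact estimates into an overall causal effect is often done by precision weighting. The sketch below shows standard fixed-effect inverse-variance pooling as a simple illustration; it is not the authors' specific estimator, and the inputs are invented.

```python
def pooled_effect(estimates, ses):
    """Precision-weighted (inverse-variance) pooled effect across strata
    or studies, with its standard error: each stratum's estimate is
    weighted by 1/se^2."""
    weights = [1.0 / se**2 for se in ses]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total
    return pooled, (1.0 / total) ** 0.5

# Two equally precise strata -> pooled effect is their simple mean
est, se = pooled_effect([0.2, 0.4], [0.1, 0.1])
```

Unequal precision tilts the pooled estimate toward the better-estimated stratum, which is why how students are partitioned into blocks matters for the overall effect.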
Peer reviewed
Direct link
DeMars, Christine E. – Applied Psychological Measurement, 2012
A testlet is a cluster of items that share a common passage, scenario, or other context. These items might measure something in common beyond the trait measured by the test as a whole; if so, the model for the item responses should allow for this testlet trait. But modeling testlet effects that are negligible makes the model unnecessarily…
Descriptors: Test Items, Item Response Theory, Comparative Analysis, Models
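The testlet trait described in this abstract can be sketched as an additive person-specific effect inside a 2PL response function. This is one common way to write a testlet model, shown only to illustrate the abstract's point; the article's comparison of models is not reproduced.

```python
import math

def irf_testlet(theta, a, b, gamma):
    """Two-parameter logistic IRF with an additive testlet effect
    `gamma`: a person-specific propensity shared by items in the same
    testlet. Setting gamma = 0 recovers the standard 2PL, which is why
    negligible testlet variance argues for the simpler model."""
    return 1.0 / (1.0 + math.exp(-a * (theta + gamma - b)))

p_plain = irf_testlet(theta=0.0, a=1.0, b=0.0, gamma=0.0)  # standard 2PL -> 0.5
p_boost = irf_testlet(theta=0.0, a=1.0, b=0.0, gamma=0.5)  # shared-context lift
```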
MacDonald, George T. – ProQuest LLC, 2014
A simulation study was conducted to explore the performance of the linear logistic test model (LLTM) when the relationships between items and cognitive components were misspecified. Factors manipulated included percent of misspecification (0%, 1%, 5%, 10%, and 15%), form of misspecification (under-specification, balanced misspecification, and…
Descriptors: Simulation, Item Response Theory, Models, Test Items