Showing 1 to 15 of 16 results
Peer reviewed
Fernández-Castilla, Belén; Declercq, Lies; Jamshidi, Laleh; Beretvas, S. Natasha; Onghena, Patrick; Van den Noortgate, Wim – Journal of Experimental Education, 2021
This study explores the performance of classical methods for detecting publication bias--namely, Egger's regression test, Funnel Plot test, Begg's Rank Correlation and Trim and Fill method--in meta-analysis of studies that report multiple effects. Publication bias, outcome reporting bias, and a combination of these were generated. Egger's…
Descriptors: Statistical Bias, Meta Analysis, Publications, Regression (Statistics)
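As a minimal illustration of the first method named in the entry above, the sketch below runs Egger's regression test on hypothetical (simulated) effect sizes and standard errors — this is not data or code from the study, just the standard test: regress each effect divided by its standard error on the corresponding precision (1/SE) and test whether the intercept differs from zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k = 20  # number of hypothetical studies

# Simulated meta-analytic data with no built-in publication bias
se = rng.uniform(0.05, 0.5, size=k)          # study standard errors
effects = 0.3 + rng.normal(0.0, se)          # observed effect sizes

# Egger's regression test: regress effect/SE on 1/SE with an intercept;
# an intercept far from zero signals funnel-plot asymmetry.
y = effects / se
X = np.column_stack([np.ones(k), 1.0 / se])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (k - 2)                 # residual variance
cov = s2 * np.linalg.inv(X.T @ X)            # coefficient covariance
t_intercept = beta[0] / np.sqrt(cov[0, 0])
p = 2 * stats.t.sf(abs(t_intercept), df=k - 2)
```

With unbiased simulated data like this, the intercept test should usually be nonsignificant; publication bias would show up as a large intercept.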
Peer reviewed
Joshi, Megha; Pustejovsky, James E.; Beretvas, S. Natasha – Research Synthesis Methods, 2022
The most common and well-known meta-regression models work under the assumption that there is only one effect size estimate per study and that the estimates are independent. However, meta-analytic reviews of social science research often include multiple effect size estimates per primary study, leading to dependence in the estimates. Some…
Descriptors: Meta Analysis, Regression (Statistics), Models, Effect Size
Peer reviewed
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2016
The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…
Descriptors: Hierarchical Linear Modeling, Monte Carlo Methods, Computation, Statistical Bias
Peer reviewed
Smith, Lindsey J. Wolff; Beretvas, S. Natasha – Journal of Experimental Education, 2017
Conventional multilevel modeling works well with purely hierarchical data; however, pure hierarchies rarely exist in real datasets. Applied researchers employ ad hoc procedures to create purely hierarchical data. For example, applied educational researchers either delete mobile participants' data from the analysis or identify the student only with…
Descriptors: Student Mobility, Academic Achievement, Simulation, Influences
Peer reviewed
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim – Journal of Experimental Education, 2014
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Descriptors: Effect Size, Statistical Bias, Sample Size, Regression (Statistics)
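The entry above concerns small-sample bias in standardized effect sizes. One widely used correction of this general kind (not necessarily one of the four approaches the authors compared) is Hedges' small-sample adjustment, sketched here:

```python
def hedges_g(d: float, df: int) -> float:
    """Apply Hedges' small-sample correction to a standardized
    mean difference d with the given degrees of freedom:
    J = 1 - 3 / (4*df - 1), an approximation to the exact
    gamma-function correction factor."""
    return d * (1.0 - 3.0 / (4.0 * df - 1.0))

# Example: d = 0.5 with df = 18 shrinks noticeably toward zero
g = hedges_g(0.5, 18)
```

The correction matters most exactly in the situation the abstract describes: few measurement occasions, hence small degrees of freedom.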
Peer reviewed
Li, Xin; Beretvas, S. Natasha – Structural Equation Modeling: A Multidisciplinary Journal, 2013
This simulation study investigated use of the multilevel structural equation model (MLSEM) for handling measurement error in both mediator and outcome variables ("M" and "Y") in an upper level multilevel mediation model. Mediation and outcome variable indicators were generated with measurement error. Parameter and standard…
Descriptors: Sample Size, Structural Equation Models, Simulation, Multivariate Analysis
Peer reviewed
Beretvas, S. Natasha; Walker, Cindy M. – Educational and Psychological Measurement, 2012
This study extends the multilevel measurement model to handle testlet-based dependencies. A flexible two-level testlet response model (the MMMT-2 model) for dichotomous items is introduced that permits assessment of differential testlet functioning (DTLF). A distinction is made between this study's conceptualization of DTLF and that of…
Descriptors: Test Bias, Simulation, Test Items, Item Response Theory
Peer reviewed
Lee, HwaYoung; Beretvas, S. Natasha – Educational and Psychological Measurement, 2014
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
Descriptors: Item Analysis, Factor Structure, Bayesian Statistics, Goodness of Fit
Peer reviewed
Beretvas, S. Natasha; Murphy, Daniel L. – Journal of Experimental Education, 2013
The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannon and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
Descriptors: Models, Goodness of Fit, Evaluation Criteria, Educational Research
Peer reviewed
Murphy, Daniel L.; Beretvas, S. Natasha; Pituch, Keenan A. – Structural Equation Modeling: A Multidisciplinary Journal, 2011
This simulation study examined the performance of the curve-of-factors model (COFM) when autocorrelation and growth processes were present in the first-level factor structure. In addition to the standard curve-of-factors growth model, 2 new models were examined: one COFM that included a first-order autoregressive autocorrelation parameter, and a…
Descriptors: Sample Size, Simulation, Factor Structure, Statistical Analysis
Peer reviewed
Beretvas, S. Natasha; Furlow, Carolyn F. – Structural Equation Modeling: A Multidisciplinary Journal, 2006
Meta-analytic structural equation modeling (MA-SEM) is increasingly being used to assess model-fit for variables' interrelations synthesized across studies. MA-SEM researchers have analyzed synthesized correlation matrices using structural equation modeling (SEM) estimation that is designed for covariance matrices. This can produce incorrect…
Descriptors: Structural Equation Models, Matrices, Statistical Analysis, Synthesis
Peer reviewed
Williams, Natasha J.; Beretvas, S. Natasha – Applied Psychological Measurement, 2006
The relationship between the hierarchical generalized linear model (HGLM) and item response theory (IRT) models has been demonstrated for dichotomous items. The current study demonstrated the use of the HGLM for polytomous items (termed PHGLM) for identification of differential item functioning (DIF). First, the algebraic equivalence between…
Descriptors: Identification, Rating Scales, Test Items, Item Response Theory
Peer reviewed
Meyers, Jason L.; Beretvas, S. Natasha – Multivariate Behavioral Research, 2006
Cross-classified random effects modeling (CCREM) is used to model multilevel data from nonhierarchical contexts. These models are widely discussed but infrequently used in social science research. Because little research exists assessing when it is necessary to use CCREM, 2 studies were conducted. A real data set with a cross-classified structure…
Descriptors: Social Science Research, Computation, Models, Data Analysis
Peer reviewed
Miller, G. Edward; Beretvas, S. Natasha – Journal of Applied Measurement, 2002
Presents empirically based item selection guidelines for moving the cut score on equated tests consisting of "n" dichotomous items calibrated assuming the Rasch model. Derivations of lemmas that underlie the guidelines are provided as well as a simulated example. (SLD)
Descriptors: Cutting Scores, Equated Scores, Item Response Theory, Selection
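For readers unfamiliar with the model assumed in the entry above, the Rasch item response function is a one-parameter logistic; a minimal sketch (illustrative only, not the authors' item-selection guidelines):

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Rasch model: probability of a correct response for a person
    with ability theta on a dichotomous item with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

Because every item shares the same slope under the Rasch model, swapping items of different difficulty shifts the raw-score-to-ability mapping in a predictable way — which is what makes principled cut-score movement rules possible.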
Peer reviewed
Klockars, Alan J.; Beretvas, S. Natasha – Journal of Experimental Education, 2001
Compared the Type I error rate and the power to detect differences in slopes and additive treatment effects of analysis of covariance (ANCOVA) and randomized block designs through a Monte Carlo simulation. Results show that the more powerful option in almost all simulations for tests of both slope and means was ANCOVA. (SLD)
Descriptors: Analysis of Covariance, Monte Carlo Methods, Power (Statistics), Research Design
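The entry above describes a Monte Carlo comparison of Type I error and power. As a minimal sketch of the general approach (a toy ANCOVA Type I error check with hypothetical data, not the authors' simulation design), one can repeatedly generate data with no true treatment effect and record how often the group term is flagged at alpha = .05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def ancova_p(n_per_group: int = 20) -> float:
    """Simulate one dataset with a covariate but NO true treatment
    effect, fit y ~ intercept + group + covariate by least squares,
    and return the p-value for the group coefficient."""
    n = 2 * n_per_group
    group = np.repeat([0.0, 1.0], n_per_group)
    x = rng.normal(size=n)                    # covariate
    y = 0.5 * x + rng.normal(size=n)          # null: no group effect
    X = np.column_stack([np.ones(n), group, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 3)
    cov = s2 * np.linalg.inv(X.T @ X)
    t = beta[1] / np.sqrt(cov[1, 1])
    return 2 * stats.t.sf(abs(t), df=n - 3)

# Monte Carlo estimate of the Type I error rate at alpha = .05
reject = float(np.mean([ancova_p() < 0.05 for _ in range(2000)]))
```

A well-calibrated test should reject close to 5% of the time under the null; the power comparison in the study works the same way, but with a nonzero treatment effect built into the generating model.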