Showing 1 to 15 of 95 results
Peer reviewed
Jeroen D. Mulder; Kim Luijken; Bas B. L. Penning de Vries; Ellen L. Hamaker – Structural Equation Modeling: A Multidisciplinary Journal, 2024
The use of structural equation models for causal inference from panel data is critiqued in the causal inference literature for unnecessarily relying on a large number of parametric assumptions, and alternative methods originating from the potential outcomes framework have been recommended, such as inverse probability weighting (IPW) estimation of…
Descriptors: Structural Equation Models, Time on Task, Time Management, Causal Models
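As a loose illustration of the IPW estimation mentioned in this abstract, the sketch below estimates propensity scores with a logistic regression and reweights the outcome contrast; the simulated data, variable names, and effect size are hypothetical and not drawn from the article.

```python
# Minimal inverse probability weighting (IPW) sketch with simulated data.
# All variable names and the data-generating process are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)                       # observed confounder
p_treat = 1 / (1 + np.exp(-(0.8 * x)))       # true propensity depends on x
t = rng.binomial(1, p_treat)                 # treatment assignment
y = 2.0 * t + 1.5 * x + rng.normal(size=n)   # outcome with true effect 2.0

# Step 1: estimate propensity scores e(x) = P(T = 1 | X = x).
ps = LogisticRegression().fit(x.reshape(-1, 1), t).predict_proba(x.reshape(-1, 1))[:, 1]

# Step 2: weight each unit by the inverse probability of its observed treatment.
w = t / ps + (1 - t) / (1 - ps)

# Step 3: weighted difference in means estimates the average treatment effect.
ate = np.average(y[t == 1], weights=w[t == 1]) - np.average(y[t == 0], weights=w[t == 0])
print(f"IPW estimate of the treatment effect: {ate:.2f}")  # close to 2.0
```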
Peer reviewed
Rüttenauer, Tobias – Sociological Methods & Research, 2022
Spatial regression models provide the opportunity to analyze spatial data and spatial processes. Yet, several model specifications can be used, all assuming different types of spatial dependence. This study summarizes the most commonly used spatial regression models and offers a comparison of their performance by using Monte Carlo experiments. In…
Descriptors: Models, Monte Carlo Methods, Social Science Research, Data Analysis
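In the spirit of the Monte Carlo experiments described above, the sketch below simulates a spatial-lag (SAR) process on a small lattice and checks how ordinary OLS behaves when the spatial dependence is ignored; the lattice size, rho, and beta are arbitrary illustration choices, not the article's design.

```python
# Monte Carlo sketch: simulate a spatial-lag (SAR) process on a lattice and
# compare the naive OLS estimate of beta with its true value.
import numpy as np

rng = np.random.default_rng(1)
side = 15                                     # 15 x 15 lattice, n = 225
n = side * side

# Row-standardized rook-contiguity weight matrix W.
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ni, nj = i + di, j + dj
            if 0 <= ni < side and 0 <= nj < side:
                W[k, ni * side + nj] = 1.0
W /= W.sum(axis=1, keepdims=True)

rho, beta = 0.6, 1.0
A_inv = np.linalg.inv(np.eye(n) - rho * W)    # reduced-form multiplier

estimates = []
for _ in range(500):
    x = rng.normal(size=n)
    e = rng.normal(size=n)
    y = A_inv @ (beta * x + e)                # y = rho*W*y + beta*x + e
    X = np.column_stack([np.ones(n), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]  # naive OLS ignoring W
    estimates.append(b[1])

print(f"true beta = {beta}, mean OLS estimate = {np.mean(estimates):.3f}")
```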
Peer reviewed
Liao, Tim Futing – Sociological Methods & Research, 2022
In common sociological research, income inequality is measured only at the aggregate level. The main purpose of this article is to demonstrate that there is more than meets the eye when inequality is indicated by a single measure. In this article, I introduce an alternative method that evaluates individuals' contributions to inequality as well as…
Descriptors: Sociology, Income, Social Differences, Social Science Research
Peer reviewed
Phillippo, David M.; Dias, Sofia; Ades, A. E.; Welton, Nicky J. – Research Synthesis Methods, 2020
Indirect comparisons are used to obtain estimates of relative effectiveness between two treatments that have not been compared in the same randomized controlled trial, but have instead been compared against a common comparator in separate trials. Standard indirect comparisons use only aggregate data, under the assumption that there are no…
Descriptors: Comparative Analysis, Outcomes of Treatment, Patients, Randomized Controlled Trials
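The standard aggregate-data indirect comparison described in the first sentence of this abstract can be shown with a minimal Bucher-style sketch; the article goes beyond this standard approach, and all numbers below are made up.

```python
# Minimal sketch of a standard (Bucher-style) indirect comparison using only
# aggregate data: treatments A and B are each compared against a common
# comparator C in separate trials. The numbers are hypothetical.
import math

d_AC, se_AC = -0.50, 0.15    # trial 1: effect of A vs C (e.g. a log odds ratio)
d_BC, se_BC = -0.20, 0.12    # trial 2: effect of B vs C

# Indirect estimate of A vs B and its standard error.
d_AB = d_AC - d_BC
se_AB = math.sqrt(se_AC**2 + se_BC**2)

lo, hi = d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB
print(f"A vs B: {d_AB:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```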
Peer reviewed
Mulder, J.; Raftery, A. E. – Sociological Methods & Research, 2022
The Schwarz or Bayesian information criterion (BIC) is one of the most widely used tools for model comparison in social science research. The BIC, however, is not suitable for evaluating models with order constraints on the parameters of interest. This article explores two extensions of the BIC for evaluating order-constrained models, one where a…
Descriptors: Models, Social Science Research, Programming Languages, Bayesian Statistics
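For reference, the ordinary (unconstrained) BIC that the article extends is k·ln(n) − 2·ln(L̂); the sketch below computes it for two simulated linear models. The data and models are hypothetical, and this is the standard BIC, not the order-constrained extensions the article develops.

```python
# Sketch of the Schwarz/Bayesian information criterion (BIC) for two nested
# linear regression models with Gaussian errors: BIC = k*ln(n) - 2*ln(L_hat).
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + rng.normal(size=n)       # x2 is irrelevant by construction

def gaussian_bic(X, y):
    """BIC of a linear regression (error variance counted as a parameter)."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / n                # ML estimate of the error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return (k + 1) * np.log(n) - 2 * loglik

X1 = np.column_stack([np.ones(n), x1])        # model 1: intercept + x1
X2 = np.column_stack([np.ones(n), x1, x2])    # model 2: adds the irrelevant x2
print(f"BIC model 1: {gaussian_bic(X1, y):.1f}")
print(f"BIC model 2: {gaussian_bic(X2, y):.1f}")  # usually larger (worse)
```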
Peer reviewed
Lee, Won-Chan; Kim, Stella Y.; Choi, Jiwon; Kang, Yujin – Journal of Educational Measurement, 2020
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and…
Descriptors: Raw Scores, Item Response Theory, Test Format, Multiple Choice Tests
Peer reviewed
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
Peer reviewed
Liu, Chunyan; Kolen, Michael J. – Journal of Educational Measurement, 2018
Smoothing techniques are designed to improve the accuracy of equating functions. The main purpose of this study is to compare seven model selection strategies for choosing the smoothing parameter (C) for polynomial loglinear presmoothing and one procedure for model selection in cubic spline postsmoothing for mixed-format pseudo tests under the…
Descriptors: Comparative Analysis, Accuracy, Models, Sample Size
Peer reviewed
Debray, Thomas P. A.; Moons, Karel G. M.; Riley, Richard D. – Research Synthesis Methods, 2018
Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between the reported effect sizes and their standard errors, the total sample size,…
Descriptors: Meta Analysis, Comparative Analysis, Publications, Bias
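One common formal test of funnel-plot asymmetry of the kind referred to above is an Egger-type regression of the standardized effect on precision; the sketch below runs it on simulated effect sizes. The data and the choice of this particular test are illustrative, not taken from the article.

```python
# Egger-type regression test for funnel-plot asymmetry: regress the
# standardized effect (d/se) on precision (1/se) and test whether the
# intercept differs from zero. Effect sizes below are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
k = 30                                        # number of studies
se = rng.uniform(0.05, 0.4, size=k)           # study standard errors
d = 0.3 + rng.normal(scale=se)                # true effect 0.3, no asymmetry

z = d / se                                    # standardized effects
precision = 1 / se
X = np.column_stack([np.ones(k), precision])

beta = np.linalg.lstsq(X, z, rcond=None)[0]
resid = z - X @ beta
sigma2 = resid @ resid / (k - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)
t_stat = beta[0] / np.sqrt(cov[0, 0])         # intercept / its standard error
p_val = 2 * stats.t.sf(abs(t_stat), df=k - 2)
print(f"Egger intercept = {beta[0]:.3f}, t = {t_stat:.2f}, p = {p_val:.3f}")
```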
Peer reviewed
McCoach, D. Betsy; Rifenbark, Graham G.; Newton, Sarah D.; Li, Xiaoran; Kooken, Janice; Yomtov, Dani; Gambino, Anthony J.; Bellara, Aarti – Journal of Educational and Behavioral Statistics, 2018
This study compared five common multilevel software packages via Monte Carlo simulation: HLM 7, Mplus 7.4, R (lme4 V1.1-12), Stata 14.1, and SAS 9.4 to determine how the programs differ in estimation accuracy and speed, as well as convergence, when modeling multiple randomly varying slopes of different magnitudes. Simulated data…
Descriptors: Hierarchical Linear Modeling, Computer Software, Comparative Analysis, Monte Carlo Methods
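As a rough Python analogue of the random-slope models being compared (statsmodels is not one of the five packages studied in the article), the sketch below fits a two-level model with a randomly varying slope to simulated data; all parameter values are made up.

```python
# Two-level model with a random intercept and a randomly varying slope,
# fit via statsmodels MixedLM on hypothetical simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_groups, n_per = 50, 20
group = np.repeat(np.arange(n_groups), n_per)
u0 = rng.normal(scale=0.5, size=n_groups)     # random intercepts
u1 = rng.normal(scale=0.3, size=n_groups)     # random slopes
x = rng.normal(size=n_groups * n_per)
y = (1.0 + u0[group]) + (0.8 + u1[group]) * x + rng.normal(size=n_groups * n_per)

df = pd.DataFrame({"y": y, "x": x, "group": group})
model = smf.mixedlm("y ~ x", data=df, groups=df["group"], re_formula="~x")
result = model.fit()
print(result.summary())                        # fixed effects near 1.0 and 0.8
```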
Craven, Laura M.; Trygstad, Peggy J. – Horizon Research, Inc., 2020
The 2018 NSSME+ was based on a national probability sample of schools and computer science, mathematics, and science teachers in grades K-12 in the 50 states and the District of Columbia. The sample was designed to yield national estimates of course offerings and enrollment, teacher background preparation, textbook usage, instructional techniques,…
Descriptors: Comparative Analysis, Beginning Teachers, Experienced Teachers, Mathematics Teachers
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2016
The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete…
Descriptors: Test Theory, Item Response Theory, Models, Correlation
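For the unidimensional, binary, no-guessing case discussed above, one popular item response model is the two-parameter logistic; the sketch below simply evaluates its item response function with made-up parameters.

```python
# Two-parameter logistic (2PL) item response function for a binary item with
# no guessing: P(correct | theta) = 1 / (1 + exp(-a * (theta - b))).
# Parameter values are illustrative.
import numpy as np

def p_correct(theta, a=1.2, b=0.0):
    """Probability of a correct response at ability theta (discrimination a, difficulty b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

for theta in (-2, -1, 0, 1, 2):
    print(f"theta = {theta:+d}: P(correct) = {p_correct(theta):.2f}")
```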
Peer reviewed
Devlieger, Ines; Mayer, Axel; Rosseel, Yves – Educational and Psychological Measurement, 2016
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…
Descriptors: Regression (Statistics), Comparative Analysis, Structural Equation Models, Monte Carlo Methods
Peer reviewed
Gorard, Stephen – International Journal of Research & Method in Education, 2013
Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre- and post-test, and the post-test-only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
Descriptors: Pretests Posttests, Research Design, Comparative Analysis, Data Analysis
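The error-propagation point can be made concrete with a small sketch: for independent measurement errors, the error variance of a gain score is the sum of the pre- and post-test error variances, so its standard error exceeds either one. The numeric values below are illustrative, not taken from the paper.

```python
# How independent measurement errors propagate into a gain score:
# Var(error of post - pre) = Var(e_post) + Var(e_pre).
import math

se_pre, se_post = 3.0, 3.0                    # measurement SEs of the two tests
se_gain = math.sqrt(se_pre**2 + se_post**2)   # SE of the post - pre difference
print(f"SE of a single test score: {se_pre:.1f}")
print(f"SE of the gain score:      {se_gain:.1f}")   # about 4.2, i.e. sqrt(2) larger
```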
Peer reviewed
Jacob, Robin T.; Goddard, Roger D.; Kim, Eun Sook – Educational Evaluation and Policy Analysis, 2014
It is often difficult and costly to obtain individual-level student achievement data, yet researchers are frequently reluctant to use school-level achievement data that are widely available from state websites. We argue that public-use aggregate school-level achievement data are, in fact, sufficient to address a wide range of evaluation questions…
Descriptors: Academic Achievement, Data, Information Utilization, Educational Assessment