Showing all 14 results
Peer reviewed
Direct link
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
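The article's closed-form variance expressions, including variation in treatment timing, are not reproduced here, but the kind of calculation involved can be sketched for the simplest textbook case: a two-group, two-period DID with independent cross-sections in each cell. The function name and inputs below are illustrative assumptions, not the article's notation.

```python
# Minimal sketch: minimum detectable effect (MDE) for a basic 2-group x 2-period
# difference-in-differences with independent cross-sections in each cell.
# This is a textbook special case, NOT the article's closed-form expressions,
# which additionally handle panel estimators and variation in treatment timing.
from scipy.stats import norm

def did_mde(n_per_cell, sigma, alpha=0.05, power=0.80):
    """MDE for the DID estimator (Ybar_T1 - Ybar_T0) - (Ybar_C1 - Ybar_C0)."""
    # Variance of the DID estimator with equal cell sizes and common sigma:
    var_did = sigma**2 * 4.0 / n_per_cell
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # two-sided test
    return z * var_did**0.5

print(did_mde(n_per_cell=200, sigma=1.0))  # MDE in outcome units
```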
Peer reviewed
Direct link
von Davier, Matthias; Khorramdel, Lale; He, Qiwei; Shin, Hyo Jeong; Chen, Haiwen – Journal of Educational and Behavioral Statistics, 2019
International large-scale assessments (ILSAs) transitioned from paper-based assessments to computer-based assessments (CBAs), facilitating the use of new item types and more effective data collection tools. This allows the implementation of more complex test designs and the collection of process and response time (RT) data. These new data types can be used to…
Descriptors: International Assessment, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
Direct link
Rhoads, Christopher H. – Journal of Educational and Behavioral Statistics, 2011
Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…
Descriptors: Educational Research, Research Design, Effect Size, Experimental Groups
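The article's contribution is the trade-off between contamination and the precision lost by assigning intact clusters; the precision cost itself is usually summarized by the standard design effect, sketched below with made-up numbers (the cluster size m and intraclass correlation rho are illustrative assumptions).

```python
# Standard design effect for assigning intact clusters (schools/classrooms) rather
# than individuals: DEFF = 1 + (m - 1) * rho, where m is cluster size and rho is
# the intraclass correlation. Illustrative numbers only; the article's actual
# contribution is the trade-off against contamination, not this formula.
def design_effect(m, rho):
    return 1 + (m - 1) * rho

def effective_n(n_total, m, rho):
    """Individually randomized sample size giving the same precision."""
    return n_total / design_effect(m, rho)

print(design_effect(m=25, rho=0.10))              # e.g., 3.4
print(effective_n(n_total=2000, m=25, rho=0.10))  # roughly 588 individuals
```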
Peer reviewed
Direct link
Haberman, Shelby J.; Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2010
Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. The performances of the linear regression model and the cumulative logit model were compared on a…
Descriptors: Scoring, Regression (Statistics), Essays, Computer Software
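As a rough sketch of the modeling comparison described above, a cumulative logit (proportional-odds) model can be fit alongside ordinary least squares on simulated ordinal scores. This assumes statsmodels 0.13 or later for OrderedModel and uses toy numeric features, not the article's essay data.

```python
# Toy sketch: fitting a cumulative logit (proportional-odds) model to ordinal essay
# scores from a few numeric features, as an alternative to ordinary least squares.
# Simulated data; the article's essay features and corpus are not reproduced here.
import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel  # statsmodels >= 0.13

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                            # stand-ins for essay features
latent = X @ np.array([1.0, 0.5, 0.8]) + rng.logistic(size=n)
scores = np.digitize(latent, bins=[-2, -0.5, 1, 2.5])  # ordinal scores 0..4

# Cumulative logit model
ordinal_fit = OrderedModel(scores, X, distr="logit").fit(method="bfgs", disp=False)
print(ordinal_fit.params)

# Linear regression baseline
ols_fit = sm.OLS(scores, sm.add_constant(X)).fit()
print(ols_fit.params)
```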
Peer reviewed
Klockars, Alan J.; Hancock, Gregory R. – Journal of Educational and Behavioral Statistics, 2000
Describes a more powerful version of Scheffe's post hoc multiple comparison procedure, consistent with its original derivation (H. Scheffe, 1970). Shows that a more liberal critical value assuming k - 2 between-group degrees of freedom may be used if an omnibus null hypothesis across all groups has been rejected. (Author/SLD)
Descriptors: Comparative Analysis
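A minimal sketch of the comparison the abstract describes: the usual Scheffé critical value, the square root of the between-group degrees of freedom times the F quantile, computed with the traditional k - 1 degrees of freedom and with the more liberal k - 2. The group and sample sizes are made up; the exact conditions under which the liberal value is justified are in the article.

```python
# Illustration: Scheffe's critical value for post hoc contrasts versus the more
# liberal value based on k - 2 between-group degrees of freedom that is justified
# once the omnibus null hypothesis has been rejected. Numbers are illustrative.
from scipy.stats import f

def scheffe_critical(df_between, df_within, alpha=0.05):
    # Critical value for |contrast estimate| / SE(contrast)
    return (df_between * f.ppf(1 - alpha, df_between, df_within)) ** 0.5

k, N = 5, 100                               # 5 groups, 100 subjects in total
print(scheffe_critical(k - 1, N - k))       # traditional Scheffe
print(scheffe_critical(k - 2, N - k))       # more liberal, k - 2 df
```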
Peer reviewed
Bockenholt, Ulf – Journal of Educational and Behavioral Statistics, 2001
Presents a hierarchical framework for the analysis of paired comparison data with three response categories that allow judges to be indifferent or undecided. The approach is viewed as a stochastic representation of the semiorder of R. Luce (1956). Illustrates the usefulness of this multilevel approach through the analysis of a survey study. (SLD)
Descriptors: Comparative Analysis, Surveys
Peer reviewed
Direct link
Jansen, Margo G. H. – Journal of Educational and Behavioral Statistics, 2007
The author considers a latent trait model for the response time on a (set of) pure speed test(s), the multiplicative gamma model (MGM), which is based on the assumption that the test response times are approximately gamma distributed, with known index parameters and scale parameters depending on subject ability and test difficulty parameters. Like…
Descriptors: Reaction Time, Timed Tests, Item Response Theory, Models
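An illustrative simulation in the spirit of the model described above: gamma-distributed response times with known index (shape) parameters and a rate that depends multiplicatively on a person speed parameter and a test difficulty parameter. The exact parameterization below is an assumption for illustration, not necessarily the article's.

```python
# Illustrative simulation of gamma-distributed test response times in the spirit of
# a multiplicative gamma model: known index (shape) parameters per test, and a
# rate that depends on a person "speed" parameter and a test difficulty parameter.
# The parameterization here is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_tests = 200, 4
speed = rng.lognormal(mean=0.0, sigma=0.3, size=n_persons)   # person speed (ability)
difficulty = np.array([0.8, 1.0, 1.2, 1.5])                  # test difficulty
index = np.array([20, 25, 30, 35])                           # known index (shape) parameters

# Response time for person j on test i: Gamma(shape=index_i, rate=speed_j / difficulty_i)
rate = speed[:, None] / difficulty[None, :]
times = rng.gamma(shape=index[None, :], scale=1.0 / rate)
print(times.shape, times.mean(axis=0))   # mean times rise with test difficulty
```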
Peer reviewed
Direct link
Viechtbauer, Wolfgang – Journal of Educational and Behavioral Statistics, 2007
Standardized effect sizes and confidence intervals thereof are extremely useful devices for comparing results across different studies using scales with incommensurable units. However, exact confidence intervals for standardized effect sizes can usually be obtained only via iterative estimation procedures. The present article summarizes several…
Descriptors: Intervals, Effect Size, Comparative Analysis, Monte Carlo Methods
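The "exact" iterative procedure that the approximations are benchmarked against can be sketched for the two-sample standardized mean difference: invert the noncentral t distribution at the observed t statistic. The function below is a generic implementation of that idea, not the article's code, and it ignores small-sample bias correction.

```python
# Sketch of the exact iterative approach: a confidence interval for the two-sample
# standardized mean difference obtained by inverting the noncentral t distribution.
# This is the benchmark that closed-form approximations are compared against.
from scipy.stats import nct
from scipy.optimize import brentq

def exact_ci_smd(d, n1, n2, alpha=0.05):
    df = n1 + n2 - 2
    c = (1 / n1 + 1 / n2) ** 0.5
    t_obs = d / c
    # Noncentrality parameters whose tail probabilities at t_obs equal alpha/2:
    ncp_lo = brentq(lambda ncp: nct.sf(t_obs, df, ncp) - alpha / 2, t_obs - 40, t_obs)
    ncp_hi = brentq(lambda ncp: nct.cdf(t_obs, df, ncp) - alpha / 2, t_obs, t_obs + 40)
    return ncp_lo * c, ncp_hi * c     # back-transform to the effect-size scale

print(exact_ci_smd(d=0.5, n1=30, n2=30))
```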
Peer reviewed
Direct link
Livingston, Samuel A. – Journal of Educational and Behavioral Statistics, 2006
This article suggests a graphic technique that uses P-P plots to show the extent to which two groups differ on two variables. It can be used even if the variables are measured in completely different, noncomparable units. The comparison is symmetric with respect to the variables and the groups. It reflects the differences between the groups over…
Descriptors: Comparative Analysis, Groups, Differences, Graphs
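A minimal sketch of a two-group P-P plot for a single variable, using simulated scores: at every cut point, the cumulative proportion of one group is plotted against the cumulative proportion of the other, so the comparison does not depend on the measurement units. The article's fuller version handles two variables and is symmetric in groups and variables; this sketch is not.

```python
# Minimal sketch of a two-group P-P plot for one variable: at every cut point,
# plot the cumulative proportion of group A against the cumulative proportion of
# group B. Simulated data; see the article for the two-variable, symmetric version.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 300)   # group A scores
b = rng.normal(0.4, 1.2, 250)   # group B scores

cuts = np.sort(np.concatenate([a, b]))
p_a = np.searchsorted(np.sort(a), cuts, side="right") / a.size
p_b = np.searchsorted(np.sort(b), cuts, side="right") / b.size

plt.plot(p_a, p_b)
plt.plot([0, 1], [0, 1], linestyle="--")   # identity line: no group difference
plt.xlabel("Cumulative proportion, group A")
plt.ylabel("Cumulative proportion, group B")
plt.savefig("pp_plot.png")
```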
Peer reviewed
Thissen, David; Steinberg, Lynne; Kuang, Daniel – Journal of Educational and Behavioral Statistics, 2002
Illustrates that the Benjamini-Hochberg (B-H) procedure for controlling the false positive rate in multiple comparisons is easy to implement using widely available spreadsheet software. Shows that it is feasible to use the B-H procedure to augment or replace the Bonferroni technique. (SLD)
Descriptors: Comparative Analysis, Computer Software
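The article implements the step-up logic in a spreadsheet; the same procedure in a few lines of Python, for reference (the example p-values are made up):

```python
# Benjamini-Hochberg step-up procedure: reject the hypotheses with the k smallest
# p-values, where k is the largest rank i such that p_(i) <= (i/m) * q.
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean array marking which hypotheses are rejected at level q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])     # largest rank meeting its threshold
        reject[order[: k + 1]] = True        # reject everything up to that rank
    return reject

print(benjamini_hochberg([0.001, 0.012, 0.019, 0.04, 0.21, 0.64]))
```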
Peer reviewed
Klockars, Alan J.; Hancock, Gregory R. – Journal of Educational and Behavioral Statistics, 1998
Proposes a method for post hoc contrasts based on subsets of treatment groups, and simulates critical values from the appropriate multivariate F-distribution to be used in place of those associated with Scheffe's test (H. Scheffe, 1953). The proposed method and its critical values provide a uniformly more powerful post hoc procedure. (SLD)
Descriptors: Analysis of Variance, Comparative Analysis, Simulation
Peer reviewed
Williams, Valerie S. L.; Jones, Lyle V.; Tukey, John W. – Journal of Educational and Behavioral Statistics, 1999
Illustrates and compares three alternative procedures to adjust significance levels for multiplicity: (1) the traditional Bonferroni technique; (2) a sequential Bonferroni technique; and (3) a sequential approach to control the false discovery rate proposed by Y. Benjamini and Y. Hochberg (1995). Explains advantages of the Benjamini and Hochberg…
Descriptors: Academic Achievement, Comparative Analysis, Error of Measurement, Statistical Significance
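All three adjustments mentioned above are available through statsmodels' multipletests, taking the sequential Bonferroni technique to be Holm's step-down procedure; the p-values below are hypothetical and only illustrate the relative conservativeness of the methods.

```python
# Comparing three multiplicity adjustments on hypothetical p-values: Bonferroni,
# Holm (a sequential Bonferroni procedure), and Benjamini-Hochberg FDR control.
from statsmodels.stats.multitest import multipletests

p_values = [0.004, 0.009, 0.012, 0.033, 0.041, 0.19]
for method in ("bonferroni", "holm", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, int(reject.sum()), "rejections")
```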
Peer reviewed
Mielke, Paul W., Jr.; Berry, Kenneth J. – Journal of Educational and Behavioral Statistics, 1999
Provides power comparisons for three permutation tests and the Bartlett-Nanda-Pillai trace test (BNP) (M. Bartlett, 1939; D. Nanda, 1950; K. Pillai, 1955) in completely randomized experimental designs with correlated multivariate-dependent variables. The power of the BNP was generally found to be less than that of at least one of the permutation…
Descriptors: Comparative Analysis, Correlation, Equations (Mathematics), Multivariate Analysis
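A generic sketch of how a permutation test for a multivariate group difference works: choose a statistic (here, simply the Euclidean distance between group mean vectors), then build its null distribution by shuffling group labels. This is not one of the specific permutation statistics, nor the Bartlett-Nanda-Pillai trace, examined in the article.

```python
# Generic two-sample multivariate permutation test: statistic = Euclidean distance
# between group mean vectors; null distribution from shuffling group labels.
import numpy as np

rng = np.random.default_rng(3)

def permutation_test(x, y, n_perm=5000):
    stat = np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))
    pooled = np.vstack([x, y])
    n_x = x.shape[0]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)       # shuffle rows (group labels)
        diff = perm[:n_x].mean(axis=0) - perm[n_x:].mean(axis=0)
        if np.linalg.norm(diff) >= stat:
            count += 1
    return (count + 1) / (n_perm + 1)        # permutation p-value

x = rng.normal(0.0, 1.0, size=(30, 3))       # three dependent variables per subject
y = rng.normal(0.3, 1.0, size=(30, 3))       # second group with shifted means
print(permutation_test(x, y))
```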
Peer reviewed
Hedeker, Donald; Gibbons, Robert D.; Waternaux, Christine – Journal of Educational and Behavioral Statistics, 1999
Presents formulas for estimating sample sizes to provide specified levels of power for tests of significance from a longitudinal design allowing for subject attrition. These formulas are derived for a comparison of two groups in terms of single degree-of-freedom contrasts of population means across the study timepoints. (Author/SLD)
Descriptors: Attrition (Research Studies), Comparative Analysis, Estimation (Mathematics), Longitudinal Studies
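The article derives sample-size formulas for single degree-of-freedom contrasts across all study timepoints under attrition; a much cruder special case can still convey the idea: the standard two-group normal-approximation sample size for a single endpoint mean, inflated by an assumed overall retention rate. The inputs below are illustrative assumptions, not the article's formulas.

```python
# Rough illustration only: sample size per group for a two-group comparison of a
# single endpoint mean, inflated for an expected retention rate. The article's
# formulas handle contrasts across timepoints and timepoint-specific attrition.
from scipy.stats import norm

def n_per_group(delta, sigma, retention, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_complete = 2 * (z * sigma / delta) ** 2   # completers needed per group
    return n_complete / retention               # enrolled per group, allowing dropout

print(n_per_group(delta=0.4, sigma=1.0, retention=0.8))
```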