Showing 1 to 15 of 28 results
Peer reviewed
Manolov, Rumen; Solanas, Antonio; Sierra, Vicenta – Journal of Experimental Education, 2020
Changing criterion designs (CCD) are single-case experimental designs that entail a step-by-step approximation of the final level desired for a target behavior. Following a recent review on the desirable methodological features of CCDs, the current text focuses on an analytical challenge: the definition of an objective rule for assessing the…
Descriptors: Research Design, Research Methodology, Data Analysis, Experiments
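The Manolov, Solanas, and Sierra abstract is truncated before it states the objective rule the authors propose, so that rule is not reproduced here. As a loose stand-in, a minimal sketch of the kind of phase-by-phase check a changing criterion design invites, with invented data and criteria:

# Hypothetical check for a changing criterion design (CCD): compare each
# phase mean with the criterion set for that phase. Data and criteria are
# invented for illustration; the article's actual decision rule is not shown.

phases = {
    "baseline": ([12, 14, 13, 15], None),   # no criterion in baseline
    "step 1":   ([10, 9, 11, 10],  10),     # criterion: mean <= 10
    "step 2":   ([8, 7, 8, 7],      8),     # criterion: mean <= 8
    "step 3":   ([5, 6, 5, 4],      6),     # criterion: mean <= 6
}

for name, (data, criterion) in phases.items():
    mean = sum(data) / len(data)
    if criterion is None:
        print(f"{name}: mean = {mean:.2f} (no criterion)")
    else:
        print(f"{name}: mean = {mean:.2f}, criterion {criterion}, met: {mean <= criterion}")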
Peer reviewed
Kang, Yoonjeong; Hancock, Gregory R. – Journal of Experimental Education, 2017
Structured means analysis is a very useful approach for testing hypotheses about population means on latent constructs. In such models, a z test is most commonly used for testing the statistical significance of the relevant parameter estimates or of the differences between parameter estimates, where a z value is computed based on the asymptotic…
Descriptors: Models, Statistical Analysis, Hypothesis Testing, Statistical Significance
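The z test described in the Kang and Hancock abstract is the usual Wald-type test: a parameter estimate (or a difference between estimates) divided by its asymptotic standard error and referred to the standard normal distribution. A minimal sketch with made-up numbers:

# Wald-type z test: estimate divided by its asymptotic standard error,
# referred to the standard normal distribution. Values are invented.
from math import erf, sqrt

estimate = 0.42       # hypothetical latent mean difference
std_error = 0.18      # hypothetical asymptotic standard error

z = estimate / std_error
p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf

print(f"z = {z:.3f}, two-sided p = {p_two_sided:.4f}")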
Peer reviewed
Matthews, Michael S.; Gentry, Marcia; McCoach, D. Betsy; Worrell, Frank C.; Matthews, Dona; Dixon, Felicia – Journal of Experimental Education, 2008
The authors examined the use of effect sizes across an entire field, satisfying a goal J. Cohen (1997) suggested. They analyzed the extent to which researchers had reported effect sizes in the 5 major journals on gifted education between 1996 and 2005 and compiled data on the types of manuscripts published, whether researchers reported effect…
Descriptors: Gifted, Effect Size, Researchers, Research Methodology
Peer reviewed
Meyer, J. Patrick; Huynh, Huynh – Journal of Experimental Education, 2008
Federal standards established in 1997 allow respondents to select multiple-race categories. These new standards changed the single-race subgroup definitions that the government has required since 1977. Meta-analysis, research on long-term assessment trends, and other research involving historical comparisons must account for the definitional…
Descriptors: Student Evaluation, Simulation, Definitions, Classification
Peer reviewed
Huberty, Carl J.; Julian, Mark W. – Journal of Experimental Education, 1995
A subset of a real data set was used to illustrate an ad hoc analysis with missing data on multiple response variables. This strategy was initiated with a complete-case analysis to determine some variables that may be deleted with no loss in effects of interest. (SLD)
Descriptors: Case Studies, Discriminant Analysis, Evaluation Methods, Prediction
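The complete-case step Huberty and Julian describe simply restricts attention to observations with no missing values before deciding which response variables might be dropped. A minimal, hypothetical sketch of that first step:

# Complete-case analysis: keep only rows with no missing values.
# The small data set below is invented for illustration.

rows = [
    {"y1": 3.1, "y2": 2.4, "y3": 5.0},
    {"y1": 2.8, "y2": None, "y3": 4.7},   # missing y2 -> excluded
    {"y1": 3.5, "y2": 2.9, "y3": None},   # missing y3 -> excluded
    {"y1": 2.9, "y2": 2.2, "y3": 5.3},
]

complete_cases = [r for r in rows if all(v is not None for v in r.values())]
print(f"{len(complete_cases)} of {len(rows)} cases are complete")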
Peer reviewed
Denton, Jon J.; Mabry, M. Patrick, Jr. – Journal of Experimental Education, 1981
This study illustrates the application of causal modeling techniques in testing a conceptual model of teaching using data collected in naturalistic, nonexperimental settings. (Author/GK)
Descriptors: Higher Education, Path Analysis, Research Methodology, Teacher Education
Peer reviewed
Vacha-Haase, Tammi; Ness, Carin; Nilsson, Johanna; Reetz, David – Journal of Experimental Education, 1999
Reviewed the "Journal of Counseling Psychology,""Psychology & Aging," and "Professional Psychology: Research and Practice" for their practices regarding the reporting of reliability coefficients from 1990 to 1997. In two of the three journals, there was little change in reporting reliability coefficients during…
Descriptors: Reliability, Research Methodology, Research Reports, Scholarly Journals
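The reliability coefficients whose reporting Vacha-Haase and colleagues tracked are most often internal-consistency estimates such as coefficient alpha. A minimal sketch of the standard alpha formula, (k / (k - 1)) * (1 - sum of item variances / variance of total scores), using invented item scores:

# Coefficient (Cronbach's) alpha: (k / (k - 1)) * (1 - sum(item variances) / var(total)).
# Item scores below are invented for illustration.
from statistics import pvariance

items = [  # each inner list is one item's scores across the same respondents
    [4, 3, 5, 4, 2, 4],
    [3, 3, 4, 4, 2, 5],
    [5, 2, 4, 3, 3, 4],
]

k = len(items)
totals = [sum(scores) for scores in zip(*items)]        # total score per respondent
item_var_sum = sum(pvariance(scores) for scores in items)
alpha = (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))
print(f"coefficient alpha = {alpha:.3f}")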
Peer reviewed
Zumbo, Bruno D.; And Others – Journal of Experimental Education, 1992
An error in an essential equation within the article by Williams and Zimmerman is corrected, and the algebraic inequalities are translated into questions a researcher can ask about simple or residualized difference scores. Williams and Zimmerman acknowledge the error and note that main conclusions are not affected. (SLD)
Descriptors: Algebra, Comparative Analysis, Equations (Mathematics), Mathematical Models
Peer reviewed
Asher, William; Hynes, Kevin – Journal of Experimental Education, 1982
An evaluation of open education was shown to produce misleading results due to probable regression phenomena. These questionable results are now spread throughout the literature of education, sociology, and psychology. Researchers are advised to review, not merely summarize, prior articles. (Author/PN)
Descriptors: Data Analysis, Evaluation Methods, Open Education, Regression (Statistics)
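The regression phenomenon Asher and Hynes warn about is easy to reproduce in a small simulation: when a group is selected on extreme pretest scores, its posttest mean drifts back toward the overall mean even with no treatment at all. A minimal sketch (all numbers are simulated, not taken from the article):

# Regression toward the mean: select low pretest scorers, then watch their
# posttest mean drift upward with no treatment. All data are simulated.
import random

random.seed(1)
true_ability = [random.gauss(50, 10) for _ in range(5000)]
pretest  = [t + random.gauss(0, 8) for t in true_ability]   # ability + noise
posttest = [t + random.gauss(0, 8) for t in true_ability]   # same ability, new noise

low = [i for i, x in enumerate(pretest) if x < 40]          # selected on extreme pretest
pre_mean  = sum(pretest[i]  for i in low) / len(low)
post_mean = sum(posttest[i] for i in low) / len(low)
print(f"selected group: pretest mean = {pre_mean:.1f}, posttest mean = {post_mean:.1f}")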
Peer reviewed
Sawilowsky, Shlomo; And Others – Journal of Experimental Education, 1994
A Monte Carlo study considers the use of meta-analysis with the Solomon four-group design. Experiment-wise Type I error properties and the relative power properties of Stouffer's Z in the Solomon four-group design are explored. Obstacles to conducting meta-analysis in the Solomon design are discussed. (SLD)
Descriptors: Meta Analysis, Monte Carlo Methods, Power (Statistics), Research Design
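Stouffer's Z, mentioned in the Sawilowsky abstract, combines z statistics from independent tests as the sum of the z values divided by the square root of their number. A minimal sketch with invented z values for the two independent treatment-control contrasts of a Solomon four-group design:

# Stouffer's Z: combine independent z statistics as sum(z) / sqrt(k).
# The two z values below are invented for illustration.
from math import sqrt, erf

z_values = [1.8, 2.1]                       # hypothetical contrast z statistics
stouffer_z = sum(z_values) / sqrt(len(z_values))
p_one_sided = 1 - 0.5 * (1 + erf(stouffer_z / sqrt(2)))   # upper-tail normal p

print(f"Stouffer's Z = {stouffer_z:.3f}, one-sided p = {p_one_sided:.4f}")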
Peer reviewed
Snyder, Patricia; Lawson, Stephen – Journal of Experimental Education, 1993
Why methodologists encourage the use of magnitude-of-effect (ME) indices as research interpretation aids is discussed, and different types of ME estimates are reviewed. Correction formulas developed to attenuate statistical bias in ME estimates are also discussed, and their effects are illustrated. (SLD)
Descriptors: Data Interpretation, Effect Size, Estimation (Mathematics), Research Methodology
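One familiar example of the kind of correction formula the Snyder and Lawson abstract refers to is the Ezekiel adjustment of R squared, which shrinks the sample estimate to offset its positive bias: adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1), for n cases and k predictors. A quick sketch with invented values:

# Ezekiel-adjusted R-squared: 1 - (1 - R^2) * (n - 1) / (n - k - 1).
# A common correction for the positive bias of sample R^2.
# The values of r_squared, n, and k below are invented for illustration.

def adjusted_r_squared(r_squared: float, n: int, k: int) -> float:
    """Shrink sample R^2 for n cases and k predictors."""
    return 1 - (1 - r_squared) * (n - 1) / (n - k - 1)

print(adjusted_r_squared(r_squared=0.25, n=40, k=5))   # about 0.14, noticeably smaller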
Peer reviewed
Thompson, Bruce – Journal of Experimental Education, 1993
Three criticisms of conventional uses of statistical significance testing are elaborated, and alternatives for augmenting statistical significance tests are reviewed, which include emphasizing effect size, evaluating statistical significance in a sample size context, and evaluating result replicability. Among ways of estimating result…
Descriptors: Effect Size, Estimation (Mathematics), Research Methodology, Research Problems
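The second of Thompson's alternatives, evaluating statistical significance in a sample size context, can be made concrete by holding an effect size fixed and varying n: the same standardized mean difference moves from "nonsignificant" to "highly significant" purely because the sample grows. A minimal sketch using the normal approximation for a one-sample test (numbers invented):

# Same effect size, different sample sizes: p changes with n even though
# the effect does not. One-sample z approximation, z = d * sqrt(n).
from math import erf, sqrt

d = 0.2                                   # fixed standardized effect size (invented)
for n in (25, 100, 400):
    z = d * sqrt(n)
    p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))     # two-sided normal p
    print(f"n = {n:4d}: z = {z:.2f}, p = {p:.4f}")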
Peer reviewed
Thompson, Bruce; Snyder, Patricia A. – Journal of Experimental Education, 1997
The use of three aspects of recommended practice (language use, replicability analyses, and reporting effect sizes) was studied in quantitative reports in "The Journal of Experimental Education" (JXE) for the academic years 1994-95 and 1995-96. Examples of both errors and desirable practices in the use and reporting of statistical…
Descriptors: Effect Size, Language Usage, Research Methodology, Research Reports
Peer reviewed
Kieffer, Kevin M.; Reese, Robert J.; Thompson, Bruce – Journal of Experimental Education, 2001
Investigated the patterns of statistical usage and reporting in 756 articles published in the "American Educational Research Journal" (AERJ) and the "Journal of Counseling Psychology" (JCP) over 10 years. Evaluated the analytic practices used. Also discusses changes that may be necessary to improve statistics in behavioral…
Descriptors: Educational Research, Psychological Studies, Research Methodology, Scholarly Journals
Peer reviewed
Kromrey, Jeffrey D.; Foster-Johnson, Lynn – Journal of Experimental Education, 1996
Presents use of the effect size as a descriptive statistic for single-subject research. Discusses the way in which effect sizes augment interpretation of results, and reviews four types of treatment effects with case studies. An appendix includes a sample computer program for computing effect sizes. (SLD)
Descriptors: Case Studies, Computer Software, Data Analysis, Effect Size
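The Kromrey and Foster-Johnson appendix includes a sample computer program for computing effect sizes; that program is not reproduced here. As a stand-in, the sketch below computes one common single-case effect size, the difference between intervention and baseline phase means divided by the baseline standard deviation, on invented data:

# One common single-case effect size: (intervention mean - baseline mean)
# divided by the baseline standard deviation. Not the article's own program;
# phase data are invented for illustration.
from statistics import mean, stdev

baseline     = [4, 5, 3, 4, 5, 4]
intervention = [7, 8, 9, 8, 7, 9]

effect_size = (mean(intervention) - mean(baseline)) / stdev(baseline)
print(f"effect size = {effect_size:.2f} baseline SDs")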