Showing 1,426 to 1,440 of 3,311 results
Peer reviewed
Direct link
Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen – Psychological Methods, 2012
Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent…
Descriptors: Structural Equation Models, Geometric Concepts, Computation, Comparative Analysis
Peer reviewed
Direct link
Aloe, Ariel M.; Becker, Betsy Jane – Journal of Educational and Behavioral Statistics, 2012
A new effect size representing the predictive power of an independent variable from a multiple regression model is presented. The index, denoted r_sp, is the semipartial correlation of the predictor with the outcome of interest. This effect size can be computed when multiple predictor variables are included in the regression model…
Descriptors: Meta Analysis, Effect Size, Multiple Regression Analysis, Models
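As an illustrative aside (simulated data, not the authors' code), the semipartial correlation of a predictor with the outcome can be computed by residualizing that predictor on the remaining predictors and correlating the residual with the raw outcome:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x2 = rng.normal(size=n)
x1 = 0.5 * x2 + rng.normal(size=n)            # predictors are correlated
y = 1.0 * x1 + 0.8 * x2 + rng.normal(size=n)  # outcome

# Residualize x1 on the remaining predictor(s), then correlate the
# residual with the raw outcome: that correlation is r_sp for x1.
Z = np.column_stack([np.ones(n), x2])
coef, *_ = np.linalg.lstsq(Z, x1, rcond=None)
x1_resid = x1 - Z @ coef
r_sp = np.corrcoef(x1_resid, y)[0, 1]
```

Because only the predictor (not the outcome) is residualized, r_sp reflects the predictor's unique contribution on the scale of the original outcome variance.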
Peer reviewed
Direct link
Han, Bing; Dalal, Siddhartha R.; McCaffrey, Daniel F. – Journal of Educational and Behavioral Statistics, 2012
There is widespread interest in using various statistical inference tools as a part of the evaluations for individual teachers and schools. Evaluation systems typically involve classifying hundreds or even thousands of teachers or schools according to their estimated performance. Many current evaluations are largely based on individual estimates…
Descriptors: Statistical Inference, Error of Measurement, Classification, Statistical Analysis
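A toy simulation (hypothetical numbers, not from the article) makes the underlying concern concrete: when individuals are classified from noisy point estimates, a substantial share of those flagged in the bottom group are not truly in it:

```python
import numpy as np

rng = np.random.default_rng(5)
m = 1000
true_eff = rng.normal(size=m)                   # true teacher effects
est = true_eff + rng.normal(scale=0.7, size=m)  # noisy individual estimates

# Classify the bottom 20% by the estimates, then compare with the truth.
flagged = est < np.quantile(est, 0.2)
truly_low = true_eff < np.quantile(true_eff, 0.2)
false_flag_rate = np.mean(flagged & ~truly_low) / np.mean(flagged)
```

With an estimation error of 0.7 standard deviations, roughly a third of flagged teachers in this sketch are misclassified, which is why the article's interest in inference tools for classification (rather than raw point estimates) matters.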
Peer reviewed
Direct link
Austin, Peter C. – Multivariate Behavioral Research, 2012
Researchers are increasingly using observational or nonrandomized data to estimate causal treatment effects. Essential to the production of high-quality evidence is the ability to reduce or minimize the confounding that frequently occurs in observational studies. When using the potential outcome framework to define causal treatment effects, one…
Descriptors: Computation, Regression (Statistics), Statistical Bias, Error of Measurement
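As a minimal sketch of one standard way to reduce such confounding (inverse-probability weighting with an estimated propensity score; simulated data and a hand-rolled logistic fit, not the article's specific method):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)                        # confounder
p_treat = 1 / (1 + np.exp(-x))                # treatment probability depends on x
t = (rng.uniform(size=n) < p_treat).astype(float)
y = 2.0 * t + 1.5 * x + rng.normal(size=n)    # true treatment effect = 2.0

naive = y[t == 1].mean() - y[t == 0].mean()   # confounded difference in means

# Estimate the propensity score e(x) = P(T=1|x) by logistic regression
# (plain gradient ascent on the log-likelihood).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(2000):
    e = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (t - e) / n
e = 1 / (1 + np.exp(-X @ beta))

# Inverse-probability-weighted estimate of the average treatment effect.
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
```

The naive difference in means is badly biased upward here, while the weighted estimate recovers the treatment effect to within sampling error.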
Peer reviewed
Direct link
Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.; Van Rossem, Ronan – Intelligence, 2012
This study provides the first direct evidence of cognitive continuity for multiple specific information processing abilities from infancy and toddlerhood to pre-adolescence, and provides support for the view that infant abilities form the basis of later childhood abilities. Data from a large sample of children (N = 131) were obtained at five…
Descriptors: Evidence, Structural Equation Models, Intelligence Quotient, Infants
Whiteley, Sonia – Online Submission, 2014
The Total Survey Error (TSE) paradigm provides a framework that supports the effective planning of research, guides decision making about data collection and contextualises the interpretation and dissemination of findings. TSE also allows researchers to systematically evaluate and improve the design and execution of ongoing survey programs and…
Descriptors: Case Studies, Educational Experience, Research Methodology, Research Design
Peer reviewed
Direct link
Chen, Fang; Chalhoub-Deville, Micheline – Language Testing, 2014
Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…
Descriptors: Regression (Statistics), Language Tests, Language Proficiency, Mathematics Achievement
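To illustrate what QR adds over mean regression (a self-contained sketch with simulated heteroscedastic data; the pinball-loss fit below is a simple subgradient method, not a production estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 10, size=n)
# Heteroscedastic data: spread grows with x, so conditional quantiles fan out.
y = 2 + 0.5 * x + rng.normal(scale=0.5 + 0.3 * x, size=n)
X = np.column_stack([np.ones(n), x])

def quantreg(X, y, q, iters=20000):
    """Fit the q-th conditional quantile by minimizing mean pinball loss."""
    beta = np.zeros(X.shape[1])
    for t in range(iters):
        u = y - X @ beta
        # Subgradient of the check loss rho_q(u): q where u >= 0, q - 1 where u < 0.
        g = -X.T @ np.where(u >= 0, q, q - 1.0) / len(y)
        beta -= 0.05 / np.sqrt(t + 1.0) * g
    return beta

slopes = {q: quantreg(X, y, q)[1] for q in (0.1, 0.5, 0.9)}
```

Least squares returns a single slope near 0.5; the quantile slopes diverge (small at the 10th percentile, large at the 90th), exactly the distributional detail QR is designed to expose.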
Peer reviewed
Direct link
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich – Psychological Methods, 2011
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…
Descriptors: Simulation, Educational Psychology, Social Sciences, Measurement
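A small simulation (illustrative numbers only) shows the sampling error that aggregation introduces: the observed group mean is a noisy measure of the true group mean, with reliability tau^2 / (tau^2 + sigma^2 / n):

```python
import numpy as np

rng = np.random.default_rng(4)
G, n = 200, 10                  # groups (L2) and students per group (L1)
tau2, sigma2 = 1.0, 4.0         # between-group and within-group variance
mu = rng.normal(scale=np.sqrt(tau2), size=G)               # true group means
x = mu[:, None] + rng.normal(scale=np.sqrt(sigma2), size=(G, n))

xbar = x.mean(axis=1)           # aggregated L2 variable: contains sampling error
lam_theory = tau2 / (tau2 + sigma2 / n)   # reliability of the group mean (~0.71)
lam_emp = np.var(mu) / np.var(xbar)       # rough empirical analogue
```

With only 10 students per group, roughly 30% of the variance in the aggregated variable is sampling noise, which is the kind of error the article's multilevel latent approach is designed to correct.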
Peer reviewed
Direct link
Schafer, William D.; Coverdale, Bradley J.; Luxenberg, Harlan; Jin, Ying – Practical Assessment, Research & Evaluation, 2011
There are relatively few examples of quantitative approaches to quality control in educational assessment and accountability contexts. Among the several techniques used in other fields, Shewhart charts have in a few instances been found applicable in educational settings. This paper describes Shewhart charts and gives examples of how…
Descriptors: Charts, Quality Control, Educational Assessment, Statistical Analysis
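The core of a Shewhart chart is simple enough to sketch in a few lines (hypothetical scale scores; the standard center line with three-sigma control limits estimated from an in-control baseline):

```python
import statistics

# Hypothetical weekly mean scale scores from an assessment program.
scores = [200.1, 199.8, 200.4, 200.0, 199.6, 200.2, 199.9, 200.3, 206.5, 200.1]

baseline = scores[:8]                       # in-control history
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # control limits

# Flag any observation outside the control limits.
flagged = [i for i, s in enumerate(scores) if not (lcl <= s <= ucl)]
```

Here the ninth observation (index 8) falls far outside the limits and would trigger an investigation, while ordinary week-to-week variation does not.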
Peer reviewed
Direct link
Kelava, Augustin; Werner, Christina S.; Schermelleh-Engel, Karin; Moosbrugger, Helfried; Zapf, Dieter; Ma, Yue; Cham, Heining; Aiken, Leona S.; West, Stephen G. – Structural Equation Modeling: A Multidisciplinary Journal, 2011
Interaction and quadratic effects in latent variable models have to date only rarely been tested in practice. Traditional product indicator approaches need to create product indicators (e.g., x_1^2, x_1 x_4) to serve as indicators of each nonlinear latent construct. These approaches require the use of…
Descriptors: Simulation, Computation, Evaluation, Predictor Variables
Peer reviewed
Direct link
Sun, Shaojing; Konold, Timothy R.; Fan, Xitao – Journal of Experimental Education, 2011
Interest in testing interaction terms within the latent variable modeling framework has been on the rise in recent years. However, little is known about the influence of nonnormality and model misspecification on such models that involve latent variable interactions. The authors used Mattson's data generation method to control for latent variable…
Descriptors: Structural Equation Models, Interaction, Sample Size, Computation
Peer reviewed
Direct link
Guo, Hongwen; Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2011
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Descriptors: Testing Programs, Measurement, Item Analysis, Error of Measurement
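A minimal Nadaraya-Watson sketch of kernel-regression IRC estimation (simulated 2PL-like responses; this version regresses on the latent ability directly, whereas the article's point is that substituting error-prone observed scores biases the estimate):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 400
theta = rng.normal(size=m)                          # examinee ability
p_true = 1 / (1 + np.exp(-(1.4 * theta - 0.3)))     # a 2PL-like true IRC
item = (rng.uniform(size=m) < p_true).astype(float) # simulated 0/1 responses

def nw_irc(grid, x, y, h=0.3):
    # Nadaraya-Watson kernel regression with a Gaussian kernel of bandwidth h.
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

grid = np.linspace(-2, 2, 41)
irc = nw_irc(grid, theta, item)   # estimated item response curve on the grid
```

Each point on the estimated curve is a locally weighted proportion correct, so the estimate stays in [0, 1] and rises with ability for a well-behaved item.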
Peer reviewed
Direct link
Roberts, James S.; Thompson, Vanessa M. – Applied Psychological Measurement, 2011
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Descriptors: Statistical Analysis, Markov Processes, Computation, Monte Carlo Methods
Peer reviewed
PDF on ERIC
Dimoliatis, Ioannis D. K.; Jelastopulu, Eleni – Universal Journal of Educational Research, 2013
The surgical theatre educational environment measures (STEEM, OREEM, and the mini-STEEM for students, student-STEEM) contain a hitherto disregarded systematic overestimation (OE) due to inaccurate percentage calculation. The aim of the present study was to investigate the magnitude of this systematic bias and to suggest a correction for it. After an…
Descriptors: Educational Environment, Scores, Grade Prediction, Academic Standards
Peer reviewed
Direct link
Hsieh, Mingchuan – Language Assessment Quarterly, 2013
The Yes/No Angoff and Bookmark method for setting standards on educational assessment are currently two of the most popular standard-setting methods. However, there is no research into the comparability of these two methods in the context of language assessment. This study compared results from the Yes/No Angoff and Bookmark methods as applied to…
Descriptors: Standard Setting (Scoring), Comparative Analysis, Language Tests, Multiple Choice Tests
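The Yes/No Angoff mechanics can be sketched briefly (hypothetical judgments, not the study's data): each panelist judges, item by item, whether a minimally competent examinee would answer correctly; a panelist's cut score is the count of "yes" judgments, and the panel cut score is their mean:

```python
# Hypothetical Yes/No Angoff judgments: 3 panelists x 10 items (1 = "yes,
# a minimally competent examinee would answer this item correctly").
judgments = [
    [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1, 1, 1, 0, 1, 0],
    [1, 1, 1, 1, 0, 1, 0, 0, 1, 1],
]

panelist_cuts = [sum(row) for row in judgments]       # one cut score per panelist
cut_score = sum(panelist_cuts) / len(panelist_cuts)   # panel cut score
```

The Bookmark method instead has panelists place a marker in an ordered item booklet, so the two methods can yield different cut scores for the same test, which is the comparability question the study addresses.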