Showing 1 to 15 of 51 results
Peer reviewed
Direct link
Ting Dai; Yang Du; Jennifer Cromley; Tia Fechter; Frank Nelson – Journal of Experimental Education, 2024
Simple matrix sampling planned missing (SMS PD) designs introduce missing data patterns that lead to covariances between variables that are not jointly observed, and create difficulties for analyses other than mean and variance estimation. Based on prior research, we adopted a new multigroup confirmatory factor analysis (CFA) approach to handle…
Descriptors: Research Problems, Research Design, Data, Matrices
Peer reviewed
PDF on ERIC Download full text
Wang, Jianjun; Ma, Xin – Athens Journal of Education, 2019
This rejoinder keeps the original focus on statistical computing pertaining to the correlation of student achievement between mathematics and science in the Trends in International Mathematics and Science Study (TIMSS). Despite the availability of student performance data in TIMSS and the emphasis on the inter-subject connection in the Next Generation Science…
Descriptors: Scores, Correlation, Achievement Tests, Elementary Secondary Education
Peer reviewed
Direct link
McNeish, Daniel – Review of Educational Research, 2017
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Descriptors: Models, Statistical Analysis, Sampling, Sample Size
Peer reviewed
Direct link
Lai, Mark H. C.; Kwok, Oi-man – Journal of Experimental Education, 2015
Educational researchers commonly use the rule of thumb of a "design effect smaller than 2" as justification for not accounting for the multilevel or clustered structure of their data. The rule, however, has not yet been systematically studied in previous research. In the present study, we generated data from three different models…
Descriptors: Educational Research, Research Design, Cluster Grouping, Statistical Data
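The design effect at issue in the entries above follows Kish's well-known formula: DEFF = 1 + (m − 1)·ICC for clusters of size m with intraclass correlation ICC. A minimal sketch (my own illustration, not code from any of the papers listed) shows how easily even a small ICC pushes DEFF past the rule-of-thumb threshold of 2:

```python
def design_effect(cluster_size: int, icc: float) -> float:
    """Kish design effect for equal-sized clusters: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n: int, deff: float) -> float:
    """Nominal sample size discounted for clustering."""
    return n / deff

# A class of 25 students with a modest ICC of 0.05 already exceeds 2:
deff = design_effect(cluster_size=25, icc=0.05)   # 1 + 24 * 0.05 = 2.2
print(round(deff, 2))                              # 2.2
print(round(effective_sample_size(500, deff), 1))  # 227.3
```

So a nominal sample of 500 students drawn 25 per classroom carries roughly the information of 227 independently sampled students, which is why ignoring the clustering understates sampling error.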
Peer reviewed
PDF on ERIC Download full text
Citkowicz, Martyna; Hedges, Larry V. – Society for Research on Educational Effectiveness, 2013
In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
Descriptors: Multivariate Analysis, Effect Size, Sampling, Sample Size
Peer reviewed
Direct link
Phillips, Gary W. – Applied Measurement in Education, 2015
This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…
Descriptors: State Programs, Sampling, Research Design, Error of Measurement
Peer reviewed
Direct link
Marsh, Herbert W.; Ludtke, Oliver; Nagengast, Benjamin; Trautwein, Ulrich; Morin, Alexandre J. S.; Abduljabbar, Adel S.; Koller, Olaf – Educational Psychologist, 2012
Classroom context and climate are inherently classroom-level (L2) constructs, but applied researchers sometimes--inappropriately--represent them by student-level (L1) responses in single-level models rather than more appropriate multilevel models. Here we focus on important conceptual issues (distinctions between climate and contextual variables;…
Descriptors: Foreign Countries, Classroom Environment, Educational Research, Research Design
Peer reviewed
Direct link
Gugiu, P. Cristian – Journal of MultiDisciplinary Evaluation, 2007
The constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CI]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…
Descriptors: Measurement, Evaluation Methods, Evaluation Problems, Error of Measurement
Peer reviewed
Direct link
Zhang, Xuyang; Tomblin, J. Bruce – Journal of Speech, Language, and Hearing Research, 2003
This tutorial is concerned with examining how regression to the mean influences research findings in longitudinal studies of clinical populations. In such studies participants are often obtained because of performance that deviates systematically from the population mean and are then subsequently studied with respect to change in the trait used…
Descriptors: Longitudinal Studies, Regression (Statistics), Error of Measurement, Research Design
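The selection mechanism the tutorial describes, sampling subjects whose observed scores deviate from the population mean and then retesting them, can be reproduced in a few lines. This is my own sketch under assumed parameters (true scores N(100, 15), measurement error N(0, 10), cutoff of 80), not the tutorial's code:

```python
import random

random.seed(42)
true_scores = [random.gauss(100, 15) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_scores]  # noisy first measurement
test2 = [t + random.gauss(0, 10) for t in true_scores]  # independent retest

# Select the "clinical" group by a cutoff on the first, noisy measurement.
selected = [i for i, s in enumerate(test1) if s < 80]
mean1 = sum(test1[i] for i in selected) / len(selected)
mean2 = sum(test2[i] for i in selected) / len(selected)

# The retest mean rises even though no true score changed: part of each low
# observed score was negative measurement error that does not recur at retest.
print(mean1 < mean2)  # True
```

Any apparent "improvement" of this size would be misattributed to treatment if the regression effect were ignored.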
Kish, Leslie – 1989
A brief, practical overview of "design effects" (DEFFs) is presented for users of the results of sample surveys. The overview is intended to help such users to determine how and when to use DEFFs and to compute them correctly. DEFFs are needed only for inferential statistics, not for descriptive statistics. When the selections for…
Descriptors: Computer Software, Error of Measurement, Mathematical Models, Research Design
Wang, Lin; Fan, Xitao – 1997
Standard statistical methods assume that data were collected using a simple random sampling scheme. These methods, however, tend to underestimate variance when the data were collected with a cluster design, which is common in educational survey research. The purposes of this paper are to demonstrate how a cluster design…
Descriptors: Cluster Analysis, Educational Research, Error of Measurement, Estimation (Mathematics)
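The underestimation this paper demonstrates can be seen directly by simulation: draw repeated samples of whole clusters, and the empirical variance of the sample mean exceeds the naive s²/n formula that assumes simple random sampling. A sketch under assumed parameters (a shared classroom effect giving ICC = 0.2), illustrative rather than the paper's own code:

```python
import random

random.seed(1)
n_clusters, m = 200, 20  # population: 200 classrooms of 20 students each
population = []
for _ in range(n_clusters):
    cluster_effect = random.gauss(0, 5)   # shared classroom effect (creates ICC)
    population.append([50 + cluster_effect + random.gauss(0, 10) for _ in range(m)])

def cluster_sample_mean(k: int) -> float:
    """Mean of a sample of k whole clusters (k * m students)."""
    chosen = random.sample(population, k)
    vals = [v for cl in chosen for v in cl]
    return sum(vals) / len(vals)

# Empirical sampling variance of the mean over repeated cluster samples:
means = [cluster_sample_mean(10) for _ in range(2000)]
mu = sum(means) / len(means)
empirical_var = sum((x - mu) ** 2 for x in means) / (len(means) - 1)

# Naive SRS variance for a mean of n = 10 * 20 = 200 observations:
all_vals = [v for cl in population for v in cl]
pm = sum(all_vals) / len(all_vals)
srs_var = (sum((v - pm) ** 2 for v in all_vals) / (len(all_vals) - 1)) / 200

print(empirical_var > srs_var)  # clustering inflates the true sampling variance
```

With between-cluster variance 25 and within-cluster variance 100, the ICC is 0.2 and Kish's DEFF is 1 + 19(0.2) ≈ 4.8, so the SRS formula understates the sampling variance by nearly a factor of five here.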
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note shows that, under conditions specified by Levin and Subkoviak (TM 503 420), it is not necessary to specify the reliabilities of observed scores when comparing completely randomized designs with randomized block designs. Certain errors in their illustrative example are also discussed. (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1978
Comments (TM 503 706) on an earlier article (TM 503 420) concerning the comparison of the completely randomized design and the randomized block design are acknowledged and appreciated. In addition, potentially misleading notions arising from these comments are addressed and clarified. (See also TM 503 708). (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note continues the discussion of earlier articles (TM 503 420, TM 503 706, and TM 503 707), comparing the completely randomized design with the randomized block design. (CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Betebenner, Damian W. – 1998
The zeitgeist for reform in education precipitated a number of changes in assessment. Among these are performance assessments, sometimes linked to "high stakes" accountability decisions. In some instances, the trustworthiness of these decisions is based on variance components and error variances derived through generalizability theory.…
Descriptors: Accountability, Educational Change, Error of Measurement, Generalizability Theory