Showing 1 to 15 of 21 results
Peer reviewed
Ting Dai; Yang Du; Jennifer Cromley; Tia Fechter; Frank Nelson – Journal of Experimental Education, 2024
Simple matrix sampling planned missing (SMS PD) designs introduce missing data patterns that lead to covariances between variables that are not jointly observed and create difficulties for analyses other than mean and variance estimation. Based on prior research, we adopted a new multigroup confirmatory factor analysis (CFA) approach to handle…
Descriptors: Research Problems, Research Design, Data, Matrices
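As a minimal illustration of the covariance problem this abstract describes, the hypothetical numpy sketch below assigns each simulated respondent a single randomly chosen item block, so items from different blocks are never jointly observed. All names and values are invented for illustration and do not reproduce the authors' design or their multigroup CFA approach.

    # Hypothetical sketch: a simple-matrix-sampling planned missing pattern in
    # which each respondent answers only one randomly assigned block of items.
    import numpy as np

    rng = np.random.default_rng(0)
    n_respondents, n_items, n_blocks = 300, 12, 3
    blocks = np.arange(n_items).reshape(n_blocks, -1)    # block 0: items 0-3, etc.

    full = rng.normal(size=(n_respondents, n_items))     # complete data, never fully seen
    observed = np.full_like(full, np.nan)
    form = rng.integers(n_blocks, size=n_respondents)    # one random block per respondent
    for b in range(n_blocks):
        rows = np.flatnonzero(form == b)
        observed[np.ix_(rows, blocks[b])] = full[np.ix_(rows, blocks[b])]

    # Items from different blocks are never answered by the same respondent, so
    # their pairwise covariance cannot be estimated from the observed data alone.
    both = (~np.isnan(observed[:, 0]) & ~np.isnan(observed[:, 4])).sum()
    print("respondents observing both item 0 and item 4:", both)   # prints 0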
Peer reviewed
Jia, Yuane; Konold, Timothy – Journal of Experimental Education, 2021
Traditional observed variable multilevel models for evaluating indirect effects are limited by their inability to quantify measurement and sampling error. They are further restricted by being unable to fully separate within- and between-level effects without bias. Doubly latent models reduce these biases by decomposing the observed within-level…
Descriptors: Hierarchical Linear Modeling, Educational Environment, Aggression, Bullying
Peer reviewed
Lee, Daniel Y.; Harring, Jeffrey R.; Stapleton, Laura M. – Journal of Experimental Education, 2019
Respondent attrition is a common problem in national longitudinal panel surveys. To make full use of the data, weights are provided to account for attrition. Weight adjustments are based on sampling design information and data from the base year; information from subsequent waves is typically not utilized. Alternative methods to address bias from…
Descriptors: Longitudinal Studies, Research Methodology, Research Problems, Data Analysis
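As a generic illustration of attrition weighting (not the specific adjustments or alternative methods evaluated in the article), the hypothetical sketch below applies a weighting-class adjustment: base-year weights of wave-2 respondents are inflated by the inverse of the weighted response rate in their cell, so respondents carry the weight of the dropouts.

    # Hypothetical weighting-class adjustment for panel attrition; all
    # numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    base_weight = rng.uniform(1, 3, n)                   # base-year sampling weights
    cell = rng.integers(4, size=n)                       # adjustment cells (e.g., strata)
    responded = rng.random(n) < np.take([0.9, 0.8, 0.7, 0.6], cell)

    adjusted = np.where(responded, base_weight, 0.0)     # nonrespondents drop out
    for c in range(4):
        in_cell = cell == c
        rate = base_weight[in_cell & responded].sum() / base_weight[in_cell].sum()
        adjusted[in_cell & responded] /= rate            # inflate respondents' weights

    # The adjusted respondent weights sum to the original total, preserving
    # the estimated population size within each adjustment cell.
    print(round(base_weight.sum(), 1), round(adjusted.sum(), 1))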
Soysal, Sümeyra; Arikan, Çigdem Akin; Inal, Hatice – Online Submission, 2016
This study aims to investigate the effect of methods for dealing with missing data on item difficulty estimation under different test length conditions and sample sizes. To this end, data sets including 10, 20, and 40 items with sample sizes of 100 and 5,000 were prepared. The deletion process was applied at rates of 5%, 10%, and 20% under conditions…
Descriptors: Research Problems, Data Analysis, Item Response Theory, Test Items
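For readers unfamiliar with the basic issue, the hypothetical sketch below contrasts two simple treatments of missing responses when estimating classical item difficulty (proportion correct). It only illustrates why the treatment matters; it does not reproduce the authors' simulation conditions or their IRT-based estimation.

    # Hypothetical sketch: item difficulty under listwise deletion versus
    # scoring missing responses as incorrect; data are simulated.
    import numpy as np

    rng = np.random.default_rng(2)
    n_persons, n_items = 1000, 20
    responses = (rng.random((n_persons, n_items)) < 0.6).astype(float)   # true p ~ .60

    missing = rng.random((n_persons, n_items)) < 0.10     # 10% missing at random
    incomplete = np.where(missing, np.nan, responses)

    complete_cases = ~np.isnan(incomplete).any(axis=1)
    p_listwise = incomplete[complete_cases].mean(axis=0)   # only fully answered booklets
    p_as_wrong = np.nan_to_num(incomplete, nan=0.0).mean(axis=0)

    print("mean difficulty, listwise deletion:", round(p_listwise.mean(), 3))
    print("mean difficulty, missing = wrong:  ", round(p_as_wrong.mean(), 3))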
Peer reviewed
Cooper, Barry; Glaesser, Judith – International Journal of Social Research Methodology, 2016
Ragin's Qualitative Comparative Analysis (QCA) is often used with small to medium samples where the researcher has good case knowledge. Employing it to analyse large survey datasets, without in-depth case knowledge, raises new challenges. We present ways of addressing these challenges. We first report a single QCA result from a configurational…
Descriptors: Social Science Research, Robustness (Statistics), Educational Sociology, Comparative Analysis
Peer reviewed
Finch, W. Holmes – Journal of Experimental Education, 2016
Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…
Descriptors: Multivariate Analysis, Educational Research, Error of Measurement, Research Problems
Peer reviewed
McNeish, Daniel – Review of Educational Research, 2017
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Descriptors: Models, Statistical Analysis, Sampling, Sample Size
Peer reviewed
Lai, Mark H. C.; Kwok, Oi-man – Journal of Experimental Education, 2015
Educational researchers commonly use the rule of thumb of "design effect smaller than 2" as the justification of not accounting for the multilevel or clustered structure in their data. The rule, however, has not yet been systematically studied in previous research. In the present study, we generated data from three different models…
Descriptors: Educational Research, Research Design, Cluster Grouping, Statistical Data
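For context, the design effect that the rule of thumb refers to is usually approximated as DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC the intraclass correlation. The short sketch below evaluates that formula with illustrative values; it is not drawn from the article's simulations.

    # Common design-effect approximation for cluster samples; the values
    # passed in are illustrative only.
    def design_effect(avg_cluster_size: float, icc: float) -> float:
        """Approximate variance inflation from sampling intact clusters."""
        return 1.0 + (avg_cluster_size - 1.0) * icc

    # Even a modest ICC pushes DEFF past 2 once clusters are moderately large.
    print(design_effect(avg_cluster_size=25, icc=0.05))   # 2.2
    print(design_effect(avg_cluster_size=10, icc=0.05))   # 1.45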
Peer reviewed
Thompson, Bruce – Educational and Psychological Measurement, 1995
Three problems with stepwise research methods are explored. Computer packages may use incorrect degrees of freedom in stepwise computations. In addition, stepwise methods do not correctly identify the best variable set of a given size. A third problem is that stepwise methods tend to capitalize on sampling error. (SLD)
Descriptors: Discriminant Analysis, Error of Measurement, Research Methodology, Research Problems
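The third problem Thompson describes, capitalization on sampling error, can be seen in a small simulation: when many pure-noise predictors compete, the single best of them usually looks "significant." The hypothetical sketch below uses a deliberately naive screening step and invented data; it is not Thompson's analysis.

    # Hypothetical sketch: selecting the best of 50 noise predictors.
    import numpy as np

    rng = np.random.default_rng(3)
    n, k = 100, 50
    X = rng.normal(size=(n, k))    # 50 predictors unrelated to the outcome
    y = rng.normal(size=n)

    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(k)])
    best = int(np.argmax(np.abs(r)))
    t = r[best] * np.sqrt((n - 2) / (1 - r[best] ** 2))

    # The maximum of 50 null correlations typically exceeds |t| = 2, even
    # though the population R-squared for every predictor is exactly zero.
    print(f"best predictor x{best}: r = {r[best]:.3f}, t = {t:.2f}")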
Peer reviewed
Carver, Ronald P. – Journal of Experimental Education, 1993
Four things are recommended to minimize the influence or importance of statistical significance testing. Researchers should not neglect to add "statistically" to "significant," and should interpret results before consulting p values. Effect sizes should be reported together with measures of sampling error, and replication can be built into the design. (SLD)
Descriptors: Educational Researchers, Effect Size, Error of Measurement, Research Methodology
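Carver's recommendation to pair effect sizes with measures of sampling error might look like the hypothetical sketch below, which reports Cohen's d with a large-sample standard error and 95% confidence interval. The data and the particular effect-size index are illustrative assumptions, not taken from the article.

    # Hypothetical sketch: Cohen's d with an approximate standard error.
    import numpy as np

    def cohens_d_with_se(x1, x2):
        n1, n2 = len(x1), len(x2)
        s_pooled = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) +
                            (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2))
        d = (np.mean(x1) - np.mean(x2)) / s_pooled
        # Large-sample approximation to the standard error of d.
        se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
        return d, se

    rng = np.random.default_rng(4)
    treatment, control = rng.normal(0.4, 1, 60), rng.normal(0.0, 1, 60)
    d, se = cohens_d_with_se(treatment, control)
    print(f"d = {d:.2f}, SE = {se:.2f}, 95% CI [{d - 1.96*se:.2f}, {d + 1.96*se:.2f}]")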
Lockridge, Jewel – 1997
Researchers persist in using stepwise regression in spite of problems with this approach. As noted by B. Thompson (1995), three problems accompany the use of stepwise applications. The first is that computer packages may use incorrect degrees of freedom in their computations, resulting in a greater likelihood of obtaining a spurious statistical…
Descriptors: Computer Oriented Programs, Error of Measurement, Predictor Variables, Research Methodology
Peer reviewed
Carroll, Robert M.; Nordholm, Lena A. – Educational and Psychological Measurement, 1975
Statistics used to estimate the population correlation ratio were reviewed and evaluated. The sampling distributions of Kelley's and Hays' statistics were studied empirically by computer simulation within the context of a three-level, one-way, fixed-effects analysis of variance design. (Author/RC)
Descriptors: Analysis of Variance, Bias, Comparative Analysis, Correlation
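The estimators compared in this literature are commonly written as eta-squared, epsilon-squared (usually attributed to Kelley), and omega-squared (Hays), all computable from a one-way ANOVA table. The hypothetical sketch below evaluates the textbook formulas with made-up sums of squares; it does not reproduce the study's simulation.

    # Textbook estimators of the population correlation ratio from a
    # one-way fixed-effects ANOVA; the inputs below are invented.
    def correlation_ratio_estimates(ss_between, ss_within, k, n_total):
        df_between = k - 1
        ms_within = ss_within / (n_total - k)
        ss_total = ss_between + ss_within
        eta_sq = ss_between / ss_total
        epsilon_sq = (ss_between - df_between * ms_within) / ss_total
        omega_sq = (ss_between - df_between * ms_within) / (ss_total + ms_within)
        return eta_sq, epsilon_sq, omega_sq

    # Example: 3 groups, 30 cases per group.
    print(correlation_ratio_estimates(ss_between=40.0, ss_within=160.0, k=3, n_total=90))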
Peer reviewed
Loo, Robert – Perceptual and Motor Skills, 1983
In examining considerations in determining sample sizes for factor analyses, attention was given to the effects of outliers; the standard error of correlations, and their effect on factor structure; sample heterogeneity; and the misuse of rules of thumb for sample sizes. (Author)
Descriptors: Correlation, Error of Measurement, Evaluation Methods, Factor Analysis
Moore, James D., Jr. – 1996
The serious problems associated with the use of stepwise methods are well documented. Various authors have leveled scathing criticisms against the use of stepwise techniques, yet it is not uncommon to find these methods continually employed in educational and psychological research. The three main problems with stepwise techniques are: (1)…
Descriptors: Computer Software, Discriminant Analysis, Educational Research, Error of Measurement
Folsom, Ralph E., Jr. – 1977
Beginning with the planning stages of the National Assessment of Educational Progress (NAEP), careful attention has been given to the design of efficient probability sampling methods for the selection of class-age respondents and the assignment of test packages. With these methods, it is possible for NAEP researchers to make relatively precise…
Descriptors: Educational Assessment, Elementary Secondary Education, Error of Measurement, National Competency Tests