Showing 1 to 15 of 68 results
Peer reviewed
Kuan-Yu Jin; Thomas Eckes – Educational and Psychological Measurement, 2024
Insufficient effort responding (IER) refers to a lack of effort when answering survey or questionnaire items. Such items typically offer more than two ordered response categories, with Likert-type scales as the most prominent example. The underlying assumption is that the successive categories reflect increasing levels of the latent variable…
Descriptors: Item Response Theory, Test Items, Test Wiseness, Surveys
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2021
The population discrepancy between unstandardized and standardized reliability of homogeneous multicomponent measuring instruments is examined. Within a latent variable modeling framework, it is shown that the standardized reliability coefficient for unidimensional scales can be markedly higher than the corresponding unstandardized reliability…
Descriptors: Test Reliability, Computation, Measures (Individuals), Research Problems
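The discrepancy Raykov and Marcoulides examine can be illustrated with a toy congeneric model; the numbers below are our own illustration, not the authors' example. When components have unequal error variances, standardizing them before forming the composite can raise the reliability coefficient markedly.

```python
import numpy as np

# Hypothetical congeneric model: one common factor with variance 1,
# unit loadings, and deliberately unequal error variances.
loadings = np.array([1.0, 1.0, 1.0])
errors = np.array([0.2, 0.2, 5.0])

def omega(lam, theta):
    """Composite reliability of the sum score (factor variance fixed at 1)."""
    true = lam.sum() ** 2
    return true / (true + theta.sum())

# Standardizing each component rescales its loading and error variance
# by that component's standard deviation.
sd = np.sqrt(loadings**2 + errors)
u = omega(loadings, errors)            # unstandardized: 0.625
s = omega(loadings / sd, errors / sd**2)
print(round(u, 3), round(s, 3))
```

Here the unstandardized coefficient is 0.625 while the standardized one is about 0.81, because standardization down-weights the noisy third component.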
Peer reviewed
Menglin Xu; Jessica A. R. Logan – Educational and Psychological Measurement, 2024
Research designs that include planned missing data are gaining popularity in applied education research. These methods have traditionally relied on introducing missingness into data collections using the missing completely at random (MCAR) mechanism. This study assesses whether planned missingness can also be implemented when data are instead…
Descriptors: Research Design, Research Methodology, Monte Carlo Methods, Statistical Analysis
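As a minimal sketch of the MCAR mechanism the abstract refers to (the data set and deletion probability are hypothetical): under MCAR, every cell is deleted with the same probability regardless of any data values, so observed-data summaries stay unbiased.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical complete data: 200 students x 4 measures.
complete = rng.normal(size=(200, 4))

# Planned missingness under MCAR: each cell is independently
# deleted with probability 0.3, ignoring the data values.
p_missing = 0.3
mask = rng.random(complete.shape) < p_missing
planned = complete.copy()
planned[mask] = np.nan

# Because deletion ignores the values, observed-case means
# agree with the full-data means up to sampling error.
print(np.nanmean(planned, axis=0).round(2))
print(complete.mean(axis=0).round(2))
```

Planned-missingness designs exploit exactly this property; the paper's question is what happens when the mechanism is not MCAR.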
Peer reviewed
Yan Xia; Selim Havan – Educational and Psychological Measurement, 2024
Although parallel analysis has been found to be an accurate method for determining the number of factors in many conditions with complete data, its application under missing data is limited. The existing literature recommends that, after using an appropriate multiple imputation method, researchers either apply parallel analysis to every imputed…
Descriptors: Data Interpretation, Factor Analysis, Statistical Inference, Research Problems
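For readers unfamiliar with the procedure, a bare-bones version of Horn's parallel analysis on complete data might look like the sketch below. The two-factor toy data set is our own construction; the paper's actual topic, applying the procedure across multiply imputed data sets, is omitted here.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: retain factors whose sample correlation
    eigenvalues exceed the mean eigenvalues of same-sized random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.zeros(p)
    for _ in range(n_sims):
        r = rng.normal(size=(n, p))
        rand_eig += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    rand_eig /= n_sims
    return int(np.sum(obs_eig > rand_eig))

# Toy data: items 0-2 load on one factor, items 3-5 on another.
rng = np.random.default_rng(1)
f = rng.normal(size=(500, 2))
loadings = np.array([[0.8, 0.0]] * 3 + [[0.0, 0.8]] * 3)
data = f @ loadings.T + rng.normal(scale=0.6, size=(500, 6))
print(parallel_analysis(data))
```

With loadings of 0.8 the two structural eigenvalues sit well above the random-data baseline, so the procedure recovers two factors.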
Peer reviewed
Goretzko, David – Educational and Psychological Measurement, 2022
Determining the number of factors in exploratory factor analysis is arguably the most crucial decision a researcher faces when conducting the analysis. While several simulation studies exist that compare various so-called factor retention criteria under different data conditions, little is known about the impact of missing data on this process.…
Descriptors: Factor Analysis, Research Problems, Data, Prediction
Peer reviewed
Jaki, Thomas; Kim, Minjung; Lamont, Andrea; George, Melissa; Chang, Chi; Feaster, Daniel; Van Horn, M. Lee – Educational and Psychological Measurement, 2019
Regression mixture models are a statistical approach used for estimating heterogeneity in effects. This study investigates the impact of sample size on regression mixture's ability to produce "stable" results. Monte Carlo simulations and analysis of resamples from an application data set were used to illustrate the types of problems that…
Descriptors: Sample Size, Computation, Regression (Statistics), Reliability
Ziying Li; A. Corinne Huggins-Manley; Walter L. Leite; M. David Miller; Eric A. Wright – Educational and Psychological Measurement, 2022
The unstructured multiple-attempt (MA) item response data in virtual learning environments (VLEs) often come from student-selected assessment data sets, which include missing data, single-attempt responses, multiple-attempt responses, and unknown growth in ability across attempts, leading to a complex scenario for using this kind of…
Descriptors: Sequential Approach, Item Response Theory, Data, Simulation
Peer reviewed
Cetin-Berber, Dee Duygu; Sari, Halil Ibrahim; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2019
Routing examinees to modules based on their ability level is a very important aspect in computerized adaptive multistage testing. However, the presence of missing responses may complicate estimation of examinee ability, which may result in misrouting of individuals. Therefore, missing responses should be handled carefully. This study investigated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Error of Measurement, Research Problems
Peer reviewed
Trafimow, David – Educational and Psychological Measurement, 2017
There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…
Descriptors: Statistical Inference, Hypothesis Testing, Probability, Intervals
Peer reviewed
García-Pérez, Miguel A. – Educational and Psychological Measurement, 2017
Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…
Descriptors: Hypothesis Testing, Statistical Inference, Effect Size, Bayesian Statistics
Peer reviewed
Li, Tongyun; Jiao, Hong; Macready, George B. – Educational and Psychological Measurement, 2016
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Descriptors: Item Response Theory, Psychometrics, Test Construction, Monte Carlo Methods
Peer reviewed
Koslowsky, Meni; Bailit, Howard – Educational and Psychological Measurement, 1975
An equation introduced by Goodman and Kruskal for obtaining a reliability measure of a single item is extended. The resulting formula determines inter-rater reliability for a series of items across many subjects. The statistic is easily interpreted and in many ways analogous to the conventional reliability coefficient for quantitative data. (Author/BJG)
Descriptors: Error Patterns, Reliability, Research Problems
Peer reviewed
Aiken, Lewis R. – Educational and Psychological Measurement, 1981
This paper presents formulas that can be used to approach the problem of nonresponse in survey research. (Author/AL)
Descriptors: Mathematical Formulas, Research Problems, Surveys
Peer reviewed
Budescu, David V. – Educational and Psychological Measurement, 1982
An empirical study of the power of the F test in normal populations with variances proportional to the cell means is reported. The results indicate that the power of the test can be approximated by the noncentral F distribution with a modified parameter of noncentrality. (Author/CM)
Descriptors: Hypothesis Testing, Research Problems, Statistical Analysis
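The kind of noncentral-F power computation Budescu builds on can be sketched for a standard one-way ANOVA; this is the textbook homoscedastic form, not the paper's modified noncentrality parameter, and all design numbers below are hypothetical.

```python
from scipy.stats import f, ncf

# One-way ANOVA power via the noncentral F distribution.
k, n, sigma = 3, 20, 1.0            # groups, per-group n, error SD
means = [0.0, 0.0, 0.5]             # hypothetical group means
grand = sum(means) / k
nc = n * sum((m - grand) ** 2 for m in means) / sigma**2

dfn, dfd = k - 1, k * (n - 1)
f_crit = f.ppf(0.95, dfn, dfd)      # alpha = .05 critical value
power = ncf.sf(f_crit, dfn, dfd, nc)
print(round(power, 3))
```

Budescu's result is that, with variances proportional to the cell means, power can still be approximated this way once the noncentrality parameter is suitably modified.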
Peer reviewed
Kaiser, Henry F. – Educational and Psychological Measurement, 1981
A revised version of Kaiser's Measure of Sampling Adequacy for factor-analytic data matrices is presented. (Author)
Descriptors: Correlation, Factor Analysis, Research Problems, Sampling