Ben Stenhaug; Ben Domingue – Grantee Submission, 2022
The fit of an item response model is typically conceptualized as whether a given model could have generated the data. We advocate for an alternative view of fit, "predictive fit", based on the model's ability to predict new data. We derive two predictive fit metrics for item response models that assess how well an estimated item response…
Descriptors: Goodness of Fit, Item Response Theory, Prediction, Models
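The idea of "predictive fit" can be sketched with a toy Rasch model: estimate parameters on a training split, then score the model by its average log-likelihood on held-out responses. This is a minimal illustration of the concept, not the metrics the authors derive; all parameter values and function names below are assumed for the example.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def predictive_log_lik(thetas, bs, responses):
    """Mean log-likelihood of held-out responses (higher = better predictive fit)."""
    ll = 0.0
    for (person, item), x in responses.items():
        p = rasch_prob(thetas[person], bs[item])
        ll += math.log(p) if x == 1 else math.log(1.0 - p)
    return ll / len(responses)

# Toy held-out data: (person, item) -> 0/1 response
holdout = {(0, 0): 1, (0, 1): 0, (1, 0): 1, (1, 1): 1}
thetas = [0.5, 1.2]   # ability estimates from a training split (illustrative)
bs = [-0.3, 0.8]      # item difficulty estimates from a training split
print(predictive_log_lik(thetas, bs, holdout))
```

Two competing item response models can then be compared by which one assigns higher average log-likelihood to the held-out responses.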
Montoya, Amanda K.; Edwards, Michael C. – Educational and Psychological Measurement, 2021
Model fit indices are being increasingly recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate for use in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the…
Descriptors: Goodness of Fit, Factor Analysis, Cutting Scores, Correlation
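As a reminder of what such a fit index computes, here is the textbook RMSEA formula in a minimal Python sketch. The cutoff values at issue are thresholds applied to a number like this one; the sample values are invented for illustration and this is not the authors' analysis.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a model chi-square test.

    chi2: model chi-square statistic; df: degrees of freedom; n: sample size.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical EFA solution: chi-square 85.0 on 40 df with 300 respondents.
print(round(rmsea(chi2=85.0, df=40, n=300), 3))
```

A conventional cutoff such as RMSEA < .06 was calibrated in confirmatory settings, which is precisely why reusing it unexamined in exploratory factor analysis is questioned above.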
Shi, Yang; Schmucker, Robin; Chi, Min; Barnes, Tiffany; Price, Thomas – International Educational Data Mining Society, 2023
Knowledge components (KCs) have many applications. In computing education, knowing the demonstration of specific KCs has been challenging. This paper introduces an entirely data-driven approach for: (1) discovering KCs; and (2) demonstrating KCs, using students' actual code submissions. Our system is based on two expected properties of KCs: (1)…
Descriptors: Computer Science Education, Data Analysis, Programming, Coding
McNeish, Daniel; Harring, Jeffrey R. – Educational and Psychological Measurement, 2017
To date, small sample problems with latent growth models (LGMs) have not received as much attention in the literature as those with related mixed-effect models (MEMs). Although many models can be interchangeably framed as an LGM or an MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…
Descriptors: Growth Models, Goodness of Fit, Error Correction, Sampling
Köse, Alper – Educational Research and Reviews, 2014
The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods; listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation were examined in terms of…
Descriptors: Data Analysis, Data Collection, Statistical Analysis, Evaluation Methods
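Two of the simpler missing-data treatments named above, listwise deletion and (mean) imputation, can be sketched in a few lines; the data and function names are invented for illustration, and the paper's FIML and EM methods are substantially more involved.

```python
import statistics

# Toy bivariate data with one missing value (None).
data = [
    (1.0, 2.0), (2.0, 2.5), (3.0, None), (4.0, 4.5), (5.0, 5.0),
]

def listwise(rows):
    """Listwise deletion: keep only fully observed rows."""
    return [r for r in rows if None not in r]

def mean_impute(rows):
    """Replace each missing value with the observed mean of its column."""
    cols = list(zip(*rows))
    means = [statistics.mean(v for v in col if v is not None) for col in cols]
    return [tuple(m if v is None else v for v, m in zip(r, means))
            for r in rows]

print(len(listwise(data)))        # complete cases remaining
print(mean_impute(data)[2])       # the previously incomplete row, filled in
```

The choice matters for fit statistics because deletion shrinks the sample while naive imputation understates variability, which is the kind of effect the study quantifies.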
Onchiri, Sureiman – Educational Research and Reviews, 2013
Whenever you think you have an idea of how something works, you have a mental model. That is, in effect, a layman's way of talking about having a hypothesis. The hypothesis needs to be tested for how closely it fits reality--and reality is the data collected from an experiment. So the data is collected on the few and compared with a few…
Descriptors: Statistical Analysis, Goodness of Fit, Data Analysis, Statistical Distributions
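The comparison of a mental model against collected data described above is what a Pearson chi-square goodness-of-fit test formalizes. A minimal sketch, with invented counts:

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A fair-die mental model predicts 10 counts per face in 60 rolls.
observed = [8, 12, 9, 11, 10, 10]
expected = [10] * 6
stat = chi_square_stat(observed, expected)
print(stat)  # compare against a chi-square critical value with df = 5
```

A statistic well below the critical value (about 11.07 at the .05 level for 5 degrees of freedom) means the data give no reason to reject the fair-die model.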
Lee, HwaYoung; Beretvas, S. Natasha – Educational and Psychological Measurement, 2014
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
Descriptors: Item Analysis, Factor Structure, Bayesian Statistics, Goodness of Fit
Pelanek, Radek – Journal of Educational Data Mining, 2015
Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…
Descriptors: Models, Data Analysis, Data Processing, Evaluation Criteria
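Two of the metrics commonly surveyed for student models, RMSE and mean log-loss on predicted correctness probabilities, can be sketched directly; the predictions and outcomes below are invented, and this is only a sampler of the metric families the overview discusses.

```python
import math

def rmse(preds, actual):
    """Root mean squared error between predicted probabilities and 0/1 outcomes."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(preds, actual)) / len(preds))

def log_loss(preds, actual):
    """Mean negative log-likelihood of the observed 0/1 outcomes."""
    return -sum(a * math.log(p) + (1 - a) * math.log(1 - p)
                for p, a in zip(preds, actual)) / len(preds)

preds = [0.9, 0.7, 0.2, 0.6]   # model's predicted probabilities of success
actual = [1, 1, 0, 0]          # observed student outcomes
print(rmse(preds, actual))
print(log_loss(preds, actual))
```

The two metrics can rank the same pair of student models differently, which is one reason the choice of evaluation metric deserves the scrutiny the paper gives it.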
Longford, Nicholas T. – Journal of Educational and Behavioral Statistics, 2012
Statistical modeling of school effectiveness data was originally motivated by the dissatisfaction with the analysis of (school-leaving) examination results that took no account of the background of the students or regarded each school as an isolated unit of analysis. The application of multilevel analysis was generally regarded as a breakthrough,…
Descriptors: School Effectiveness, Data Analysis, Statistical Analysis, Statistical Studies
Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L. – Multivariate Behavioral Research, 2011
A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…
Descriptors: Simulation, Research Methodology, Factor Analysis, Item Response Theory
Conijn, Judith M.; Emons, Wilco H. M.; van Assen, Marcel A. L. M.; Sijtsma, Klaas – Multivariate Behavioral Research, 2011
The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed to use the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this…
Descriptors: Monte Carlo Methods, Patients, Probability, Item Response Theory
Birnbaum, Michael H. – Psychological Review, 2008
E. Brandstatter, G. Gigerenzer, and R. Hertwig (2006) contended that their priority heuristic, a type of lexicographic semiorder model, is more accurate than cumulative prospect theory (CPT) or transfer of attention exchange (TAX) models in describing risky decisions. However, there are 4 problems with their argument. First, their heuristic is not…
Descriptors: Heuristics, Prediction, Risk, Decision Making
Zhang, Bo; Walker, Cindy M. – Applied Psychological Measurement, 2008
The purpose of this research was to examine the effects of missing data on person-model fit and person trait estimation in tests with dichotomous items. Under the missing-completely-at-random framework, four missing data treatment techniques were investigated including pairwise deletion, coding missing responses as incorrect, hotdeck imputation,…
Descriptors: Item Response Theory, Computation, Goodness of Fit, Test Items

Tellinghuisen, Joel – Journal of Chemical Education, 2005
Several data-analysis problems can be addressed in different ways, ranging from a series of related "local" fitting problems to a single comprehensive "global analysis". The approach has become a powerful one for fitting data to moderately complex models by using library functions, and the methods are illustrated for the analysis of HCl IR…
Descriptors: Goodness of Fit, Data Analysis, Models, Evaluation Methods
Ludlow, Larry H. – 1984
The purpose of this research is to demonstrate that a systematic approach to the graphical analysis of Rasch model residuals can lead to an increased understanding of ordered response data, that residual patterns change in predictable ways, and that summary statistics need not be the only piece of evidence for assuring the fit between model…
Descriptors: Data Analysis, Evaluation Methods, Goodness of Fit, Latent Trait Theory
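The residuals in question are straightforward to compute: the gap between an observed response and the model's expected probability, scaled by the binomial standard deviation. A minimal sketch with invented parameter values (not the study's graphical method itself):

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def std_residual(x, theta, b):
    """Standardized residual: (observed - expected) / binomial SD."""
    p = rasch_prob(theta, b)
    return (x - p) / math.sqrt(p * (1.0 - p))

# A correct answer from a low-ability person on a hard item is a surprise,
# so its standardized residual is large and positive:
print(std_residual(1, theta=-1.0, b=2.0))
```

Plotting such residuals against ability or item difficulty, rather than collapsing them into one summary statistic, is the kind of systematic graphical analysis the entry advocates.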