Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 25
Since 2016 (last 10 years): 49
Since 2006 (last 20 years): 91
Descriptor
Classification: 153
Statistical Analysis: 42
Item Response Theory: 36
Models: 36
Test Items: 33
Comparative Analysis: 31
Accuracy: 27
Computation: 20
Simulation: 20
Correlation: 19
Probability: 19
Source
Educational and Psychological Measurement: 154
Author
Liu, Ren: 5
McQuitty, Louis L.: 5
Finch, W. Holmes: 4
Hong, Sehee: 4
Birenbaum, Menucha: 3
Marcoulides, George A.: 3
Raykov, Tenko: 3
Wilson, Mark: 3
Yarnold, Paul R.: 3
Ames, Allison: 2
Chung, Hyewon: 2
Publication Type
Journal Articles: 133
Reports - Research: 93
Reports - Evaluative: 32
Reports - Descriptive: 9
Opinion Papers: 1
Speeches/Meeting Papers: 1
Laws, Policies, & Programs
No Child Left Behind Act 2001: 1
Su, Hsu-Lin; Chen, Po-Hsi – Educational and Psychological Measurement, 2023
Multidimensional mixture data structures arise in many test (or inventory) conditions, and populations are often heterogeneous. Researchers are therefore interested in deciding to which subpopulation a participant belongs according to the participant's factor pattern. Thus, in this study, we proposed three analysis procedures…
Descriptors: Data Analysis, Correlation, Classification, Factor Structure
Wang, Yan; Kim, Eunsook; Yi, Zhiyao – Educational and Psychological Measurement, 2022
Latent profile analysis (LPA) identifies heterogeneous subgroups based on continuous indicators that represent different dimensions. It is a common practice to measure each dimension using items, create composite or factor scores for each dimension, and use these scores as indicators of profiles in LPA. In this case, measurement models for…
Descriptors: Robustness (Statistics), Profiles, Statistical Analysis, Classification
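For readers who want a concrete picture of the LPA workflow described in this entry, here is a minimal sketch, assuming simulated composite scores and using scikit-learn's GaussianMixture as a stand-in for a dedicated LPA routine; it is not the authors' code.

```python
# Minimal LPA-style sketch (not the authors' code): profiles recovered from
# simulated composite scores with a diagonal-covariance Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Two latent profiles measured by three composite-score indicators.
profile_a = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.5, size=(200, 3))
profile_b = rng.normal(loc=[1.5, -1.0, 0.8], scale=0.5, size=(200, 3))
scores = np.vstack([profile_a, profile_b])

# Compare 1- to 4-profile solutions by BIC, as is common in LPA practice.
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
    print(k, "profiles, BIC =", round(gm.fit(scores).bic(scores), 1))

best = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(scores)
assignments = best.predict(scores)        # most likely profile per respondent
posteriors = best.predict_proba(scores)   # classification uncertainty per respondent
```

The BIC comparison and the posterior probabilities are the pieces most directly related to the article's concern, since profile recovery depends on how the dimension scores feeding the mixture were constructed.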
Gonzalez, Oscar – Educational and Psychological Measurement, 2023
When scores are used to make decisions about respondents, it is of interest to estimate classification accuracy (CA), the probability of making a correct decision, and classification consistency (CC), the probability of making the same decision across two parallel administrations of the measure. Model-based estimates of CA and CC computed from the…
Descriptors: Classification, Accuracy, Intervals, Probability
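A minimal simulation sketch of the CA and CC concepts defined above, assuming a simple true-score-plus-error model and an arbitrary cut score; this is illustrative only, not the model-based estimator examined in the article.

```python
# Simulation sketch of classification accuracy (CA) and consistency (CC)
# under a true-score-plus-error model with an arbitrary cut score.
import numpy as np

rng = np.random.default_rng(7)
n, cut, error_sd = 100_000, 0.5, 0.6

true = rng.normal(0.0, 1.0, n)               # latent true scores
obs1 = true + rng.normal(0.0, error_sd, n)   # administration 1
obs2 = true + rng.normal(0.0, error_sd, n)   # parallel administration 2

true_pass = true >= cut
pass1, pass2 = obs1 >= cut, obs2 >= cut

ca = np.mean(pass1 == true_pass)   # P(observed decision matches the true one)
cc = np.mean(pass1 == pass2)       # P(same decision across parallel forms)
print(f"CA = {ca:.3f}, CC = {cc:.3f}")
```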
Weese, James D.; Turner, Ronna C.; Ames, Allison; Crawford, Brandon; Liang, Xinya – Educational and Psychological Measurement, 2022
A simulation study was conducted to investigate the heuristics of the SIBTEST procedure and how it compares with ETS classification guidelines used with the Mantel-Haenszel procedure. Prior heuristics have been used for nearly 25 years, but they are based on a simulation study that was restricted due to computer limitations and that modeled item…
Descriptors: Test Bias, Heuristics, Classification, Statistical Analysis
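As a point of reference for the ETS classification guidelines mentioned above, here is a simplified sketch of the Mantel-Haenszel D-DIF statistic and the A/B/C effect-size labels; the full guidelines also involve significance tests, which are omitted here, and this is not the SIBTEST procedure itself.

```python
# Simplified Mantel-Haenszel D-DIF with ETS-style A/B/C effect-size labels
# (significance tests omitted; illustrative data only).
import numpy as np

def mh_d_dif(ref, focal, item, total):
    """ref/focal: boolean group masks; item: 0/1 scores; total: matching scores."""
    num, den = 0.0, 0.0
    for s in np.unique(total):                      # stratify by matching score
        r, f = ref & (total == s), focal & (total == s)
        a, b = item[r].sum(), (1 - item[r]).sum()   # reference right / wrong
        c, d = item[f].sum(), (1 - item[f]).sum()   # focal right / wrong
        nt = a + b + c + d
        if nt == 0:
            continue
        num += a * d / nt
        den += b * c / nt
    alpha = num / den                               # MH common odds ratio
    return -2.35 * np.log(alpha)                    # ETS delta metric

def ets_label(d):
    if abs(d) < 1.0:
        return "A (negligible)"
    return "C (large)" if abs(d) >= 1.5 else "B (moderate)"

# Tiny illustrative usage with simulated data (no DIF built in).
rng = np.random.default_rng(2)
group = rng.integers(0, 2, 2000).astype(bool)       # focal-group indicator
total = rng.integers(0, 11, 2000)                   # matching criterion (0-10)
item = rng.binomial(1, 0.3 + 0.05 * total)          # item depends on total only
d = mh_d_dif(ref=~group, focal=group, item=item, total=total)
print(round(d, 2), ets_label(d))
```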
Sanaz Nazari; Walter L. Leite; A. Corinne Huggins-Manley – Educational and Psychological Measurement, 2024
Social desirability bias (SDB) is a common threat to the validity of conclusions from responses to a scale or survey. There is a wide range of person-fit statistics in the literature that can be employed to detect SDB. In addition, machine learning classifiers, such as logistic regression and random forest, have the potential to distinguish…
Descriptors: Social Desirability, Bias, Artificial Intelligence, Identification
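A hedged illustration of the general idea in this entry, assuming simulated person-fit-style features rather than the study's data, with logistic regression and random forest as the classifiers.

```python
# Illustrative sketch: classifiers trained on simulated person-fit-style
# features to flag socially desirable responding (not the study's data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
sdb = rng.binomial(1, 0.3, n)                        # 1 = socially desirable responder
person_fit = rng.normal(loc=-0.8 * sdb, scale=1.0)   # hypothetical lz-type statistic
extremity = rng.normal(loc=0.5 * sdb, scale=1.0)     # hypothetical response-style feature
X = np.column_stack([person_fit, extremity])

X_tr, X_te, y_tr, y_te = train_test_split(X, sdb, test_size=0.3, random_state=0)
for clf in (LogisticRegression(), RandomForestClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(type(clf).__name__, "AUC:", round(auc, 3))
```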
Rios, Joseph A. – Educational and Psychological Measurement, 2022
The presence of rapid guessing (RG) presents a challenge to practitioners in obtaining accurate estimates of measurement properties and examinee ability. In response to this concern, researchers have utilized response times as a proxy of RG and have attempted to improve parameter estimation accuracy by filtering RG responses using popular scoring…
Descriptors: Guessing (Tests), Classification, Accuracy, Computation
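A minimal sketch of the common response-time threshold approach to flagging and filtering rapid guesses; the 5-second threshold and the data are illustrative assumptions, not the scoring rules evaluated in the article.

```python
# Response-time threshold filtering of rapid guesses (RG); illustrative only.
import numpy as np

rng = np.random.default_rng(11)
responses = rng.binomial(1, 0.7, size=(500, 20)).astype(float)   # item scores
log_rt = rng.normal(3.0, 0.6, size=(500, 20))                    # log response times (sec)

rapid = log_rt < np.log(5.0)        # flag responses faster than 5 seconds

# "Effortful-only" scoring: treat flagged responses as missing before scoring.
filtered = responses.copy()
filtered[rapid] = np.nan
print(f"flagged {rapid.mean():.1%} of responses as rapid guesses")
print("mean proportion correct after filtering:",
      round(float(np.nanmean(np.nanmean(filtered, axis=1))), 3))
```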
Finch, W. Holmes – Educational and Psychological Measurement, 2021
Social scientists are frequently interested in identifying latent subgroups within the population, based on a set of observed variables. One of the more common tools for this purpose is latent class analysis (LCA), which models a scenario involving k finite and mutually exclusive classes within the population. An alternative approach to this…
Descriptors: Group Membership, Classification, Sample Size, Models
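For orientation, a compact EM sketch of an LCA with binary indicators and k mutually exclusive classes, using simulated data; it is illustrative only and omits model selection and standard errors.

```python
# Compact EM for latent class analysis with binary indicators and K classes.
import numpy as np

rng = np.random.default_rng(5)
true_p = np.array([[0.9, 0.8, 0.85, 0.2],     # class 1 endorsement probabilities
                   [0.2, 0.3, 0.25, 0.8]])    # class 2 endorsement probabilities
X = rng.binomial(1, true_p[rng.integers(0, 2, 600)])

def lca_em(X, K, n_iter=200, seed=0):
    gen = np.random.default_rng(seed)
    n, j = X.shape
    pi = np.full(K, 1.0 / K)                  # class proportions
    p = gen.uniform(0.25, 0.75, size=(K, j))  # item endorsement probabilities
    for _ in range(n_iter):
        # E-step: posterior class memberships.
        logp = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        post = np.exp(logp)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update proportions and conditional probabilities.
        pi = post.mean(axis=0)
        p = np.clip((post.T @ X) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, p, post

pi_hat, p_hat, post = lca_em(X, K=2)
print("estimated class proportions:", np.round(pi_hat, 2))
```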
Weese, James D.; Turner, Ronna C.; Liang, Xinya; Ames, Allison; Crawford, Brandon – Educational and Psychological Measurement, 2023
A study was conducted to implement the use of a standardized effect size and corresponding classification guidelines for polytomous data with the POLYSIBTEST procedure and compare those guidelines with prior recommendations. Two simulation studies were included. The first identifies new unstandardized test heuristics for classifying moderate and…
Descriptors: Effect Size, Classification, Guidelines, Statistical Analysis
Kroc, Edward; Olvera Astivia, Oscar L. – Educational and Psychological Measurement, 2022
Setting cutoff scores is one of the most common practices when using scales to aid in classification purposes. This process is usually done univariately where each optimal cutoff value is decided sequentially, subscale by subscale. While it is widely known that this process necessarily reduces the probability of "passing" such a test,…
Descriptors: Multivariate Analysis, Cutting Scores, Classification, Measurement
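A small simulation, under illustrative assumptions (three correlated subscales, a common cut at the mean), showing why sequential univariate cutoffs shrink the probability of passing all subscales even when each marginal pass rate looks acceptable.

```python
# Joint vs. marginal pass rates under subscale-by-subscale cutoffs.
import numpy as np

rng = np.random.default_rng(21)
n, r = 200_000, 0.5
cov = np.full((3, 3), r)
np.fill_diagonal(cov, 1.0)                              # three correlated subscales
scores = rng.multivariate_normal(np.zeros(3), cov, size=n)

cutoff = 0.0                                            # pass = score above the mean
passed = scores >= cutoff
print("marginal pass rates:", passed.mean(axis=0).round(3))        # about 0.5 each
print("P(pass all three):  ", passed.all(axis=1).mean().round(3))  # well below 0.5
```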
Mangino, Anthony A.; Finch, W. Holmes – Educational and Psychological Measurement, 2021
In many fields of the social and natural sciences, data are often obtained within a nested structure (e.g., students within schools). To effectively analyze data with such a structure, multilevel models are frequently employed. The present study utilizes a Monte Carlo simulation to compare several novel multilevel classification algorithms…
Descriptors: Prediction, Hierarchical Linear Modeling, Classification, Bayesian Statistics
Cassiday, Kristina R.; Cho, Youngmi; Harring, Jeffrey R. – Educational and Psychological Measurement, 2021
Simulation studies involving mixture models inevitably aggregate parameter estimates and other output across numerous replications. A primary issue that arises in these methodological investigations is label switching. The current study compares several label switching corrections that are commonly used when dealing with mixture models. A growth…
Descriptors: Probability, Models, Simulation, Mathematics
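One common label-switching correction, sketched below under simple assumptions (aligning each replication's class means to a reference solution by an optimal one-to-one matching); this is not necessarily the correction the study recommends.

```python
# Relabel mixture classes in one replication to match a reference solution.
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(est_means, ref_means):
    """Permute rows of est_means (K x P) so class k lines up with ref class k."""
    # Cost = distance between every estimated class and every reference class.
    cost = np.linalg.norm(est_means[:, None, :] - ref_means[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)     # optimal one-to-one matching
    perm = np.empty_like(cols)
    perm[cols] = rows                            # estimated class filling each reference slot
    return est_means[perm]

ref = np.array([[0.0, 1.0], [3.0, 4.0]])         # reference class means
flipped = np.array([[3.1, 3.9], [0.2, 0.8]])     # same solution, labels switched
print(relabel(flipped, ref))                     # rows reordered to match ref
```

Aggregating estimates across replications only after such a relabeling step avoids averaging parameters that belong to different classes.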
Sen, Sedat; Cohen, Allan S. – Educational and Psychological Measurement, 2023
The purpose of this study was to examine the effects of different data conditions on item parameter recovery and classification accuracy of three dichotomous mixture item response theory (IRT) models: the Mix1PL, Mix2PL, and Mix3PL. Manipulated factors in the simulation included the sample size (11 different sample sizes from 100 to 5000), test…
Descriptors: Sample Size, Item Response Theory, Accuracy, Classification
Ilagan, Michael John; Falk, Carl F. – Educational and Psychological Measurement, 2023
Administering Likert-type questionnaires to online samples risks contamination of the data by malicious computer-generated random responses, also known as bots. Although nonresponsivity indices (NRIs) such as person-total correlations or Mahalanobis distance have shown great promise to detect bots, universal cutoff values are elusive. An initial…
Descriptors: Likert Scales, Questionnaires, Artificial Intelligence, Identification
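A hedged sketch of two NRIs named above, Mahalanobis distance and the person-total correlation, computed on simulated Likert data with uniform-random "bots" mixed in; cutoffs are deliberately left out, since the article's point is that universal cutoffs are elusive.

```python
# Two nonresponsivity indices on simulated Likert data with random "bots".
import numpy as np

rng = np.random.default_rng(13)
item_locs = np.linspace(2.5, 4.5, 10)                        # items differ in mean endorsement
humans = np.clip(np.round(rng.normal(item_locs, 0.8, size=(300, 10))), 1, 5)
bots = rng.integers(1, 6, size=(50, 10))                     # uniform random responding
X = np.vstack([humans, bots]).astype(float)

# Mahalanobis distance of each response vector from the sample centroid.
diff = X - X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
md = np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))

# Person-total correlation: each person's responses vs. the item means.
item_means = X.mean(axis=0)
ptc = np.array([np.corrcoef(row, item_means)[0, 1] for row in X])

print("mean MD  humans vs bots:", md[:300].mean().round(2), md[300:].mean().round(2))
print("mean PTC humans vs bots:", ptc[:300].mean().round(2), ptc[300:].mean().round(2))
```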
Jang, Yoona; Hong, Sehee – Educational and Psychological Measurement, 2023
The purpose of this study was to evaluate the degree of classification quality in the basic latent class model when covariates are either included or are not included in the model. To accomplish this task, Monte Carlo simulations were conducted in which the results of models with and without a covariate were compared. Based on these simulations,…
Descriptors: Classification, Models, Prediction, Sample Size
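For context on "classification quality," one widely used index is relative entropy; the sketch below assumes posterior class-membership probabilities are already in hand and is not tied to the article's specific models.

```python
# Relative entropy of a latent class solution: values near 1 indicate that
# respondents are classified into classes with little ambiguity.
import numpy as np

def relative_entropy(post):
    """post: N x K matrix of posterior class-membership probabilities."""
    n, k = post.shape
    p = np.clip(post, 1e-12, 1.0)
    return 1.0 - (-(p * np.log(p)).sum()) / (n * np.log(k))

sharp = np.array([[0.98, 0.02], [0.05, 0.95], [0.99, 0.01]])
fuzzy = np.array([[0.55, 0.45], [0.48, 0.52], [0.60, 0.40]])
print(round(relative_entropy(sharp), 3), round(relative_entropy(fuzzy), 3))
```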
Henninger, Mirka; Debelak, Rudolf; Strobl, Carolin – Educational and Psychological Measurement, 2023
To detect differential item functioning (DIF), Rasch trees search for optimal split-points in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF…
Descriptors: Item Response Theory, Test Items, Effect Size, Statistical Significance