Publication Date
In 2025: 2
Since 2024: 4
Since 2021 (last 5 years): 8
Since 2016 (last 10 years): 8
Since 2006 (last 20 years): 8
Descriptor
Algorithms: 13
Sample Size: 13
Models: 6
Item Response Theory: 5
Accuracy: 3
Maximum Likelihood Statistics: 3
Computation: 2
Computer Software: 2
Data Analysis: 2
Educational Assessment: 2
Evaluation Methods: 2
Author
Cappelleri, Joseph C.: 1
Chiu, Chia-Yi: 1
Clauser, Brian E.: 1
David Arthur: 1
Hua-Hua Chang: 1
Huan Kuang: 1
Huibin Zhang: 1
Jean-Paul Fox: 1
Jeffrey R. Harring: 1
Ji Seung Yang: 1
Jia Quan: 1
Publication Type
Journal Articles: 13
Reports - Research: 9
Reports - Evaluative: 2
Opinion Papers: 1
Reports - Descriptive: 1
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly due to the inclusion of a random factor variable (latent variable). The random factor variable gives rise to the incidental parameter problem, since the number of parameters grows as data from new persons are included. Therefore, IRT models require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
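A common way to deal with this incidental parameter problem is to integrate the latent variable out of the likelihood (marginal maximum likelihood); whether that is the route this paper takes is not stated in the truncated abstract. A minimal sketch, assuming a Rasch (one-parameter logistic) model, a standard-normal prior, and Gauss-Hermite quadrature:

    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss

    def rasch_marginal_loglik(responses, difficulties, n_quad=21):
        """Marginal log-likelihood of a Rasch model, integrating ability out
        with Gauss-Hermite quadrature under a N(0, 1) prior (an assumption,
        not taken from the paper). responses: (persons, items) 0/1 array."""
        nodes, weights = hermegauss(n_quad)            # probabilists' Hermite rule
        weights = weights / np.sqrt(2 * np.pi)         # normalize to the N(0, 1) prior
        p = 1.0 / (1.0 + np.exp(-(nodes[:, None] - difficulties[None, :])))
        # likelihood of each response pattern at each quadrature node
        lik = np.prod(np.where(responses[:, None, :] == 1, p, 1.0 - p), axis=2)
        return np.sum(np.log(lik @ weights))
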
Zuchao Shen; Walter Leite; Huibin Zhang; Jia Quan; Huan Kuang – Journal of Experimental Education, 2025
When designing cluster-randomized trials (CRTs), one important consideration is determining the proper sample sizes across levels and treatment conditions to cost-efficiently achieve adequate statistical power. This consideration is usually addressed in an optimal design framework by leveraging the cost structures of sampling and optimizing the…
Descriptors: Randomized Controlled Trials, Feasibility Studies, Research Design, Sample Size
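The paper's optimal design framework is not spelled out in the abstract; the sketch below only restates the standard ingredients such a framework builds on: approximate power for a two-arm, two-level CRT and the classic cost-optimal cluster size. All numbers in the example are hypothetical.

    from math import sqrt
    from scipy.stats import norm

    def crt_power(delta, n_clusters, cluster_size, icc, alpha=0.05):
        """Approximate power of a two-arm, two-level cluster-randomized trial
        (equal allocation, standardized effect delta, normal approximation)."""
        se = sqrt(4.0 * (icc + (1.0 - icc) / cluster_size) / n_clusters)
        return norm.cdf(abs(delta) / se - norm.ppf(1.0 - alpha / 2.0))

    def optimal_cluster_size(cost_per_cluster, cost_per_person, icc):
        """Cost-optimal persons per cluster (classic optimal-design result)."""
        return sqrt((cost_per_cluster / cost_per_person) * (1.0 - icc) / icc)

    print(crt_power(delta=0.25, n_clusters=60, cluster_size=20, icc=0.10))
    print(optimal_cluster_size(cost_per_cluster=300, cost_per_person=10, icc=0.10))
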
David Arthur; Hua-Hua Chang – Journal of Educational and Behavioral Statistics, 2024
Cognitive diagnosis models (CDMs) are assessment tools that provide valuable formative feedback about skill mastery at both the individual and population levels. Recent work has explored the performance of CDMs with small sample sizes but has focused solely on the estimates of individual profiles. The current research focuses on obtaining…
Descriptors: Algorithms, Models, Computation, Cognitive Measurement
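The abstract does not name a specific CDM; the DINA model is a common choice and makes the notion of a skill-mastery profile concrete. A minimal sketch of its item response function, with hypothetical parameter values:

    import numpy as np

    def dina_prob(alpha, q, slip, guess):
        """P(correct) under the DINA model: alpha is the examinee's 0/1 skill
        profile, q the item's required skills (a Q-matrix row), and slip/guess
        the item's slip and guessing parameters."""
        mastered_all = bool(np.all(alpha >= q))     # all required skills mastered?
        return (1.0 - slip) if mastered_all else guess

    # Item requires skills 1 and 3; this examinee has mastered skills 1 and 2
    print(dina_prob(np.array([1, 1, 0]), np.array([1, 0, 1]), slip=0.1, guess=0.2))
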
Mostafa Hosseinzadeh; Ki Lynn Matlock Cole – Educational and Psychological Measurement, 2024
In real-world situations, multidimensional data may appear on large-scale tests or psychological surveys. The purpose of this study was to investigate the effects of the quantity and magnitude of cross-loadings and model specification on item parameter recovery in multidimensional Item Response Theory (MIRT) models, especially when the model was…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Algorithms
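For context, a cross-loading is a nonzero discrimination on a dimension the item was not written to measure. A minimal sketch of the compensatory multidimensional 2PL response function, with a hypothetical item that cross-loads weakly on a second dimension:

    import numpy as np

    def mirt_2pl_prob(theta, a, d):
        """Compensatory multidimensional 2PL: P(correct) = logistic(a . theta + d).
        Any nonzero entry of `a` on a secondary dimension is a cross-loading."""
        return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + d)))

    # Primary loading 1.3 on dimension 1, cross-loading 0.25 on dimension 2
    print(mirt_2pl_prob(theta=np.array([0.5, -0.3]), a=np.array([1.3, 0.25]), d=0.1))
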
Kalkan, Ömür Kaya – Measurement: Interdisciplinary Research and Perspectives, 2022
The four-parameter logistic (4PL) Item Response Theory (IRT) model has recently been reconsidered in the literature, owing to advances in statistical modeling software and recent developments in the estimation of the 4PL model's parameters. The current simulation study evaluated the performance of expectation-maximization (EM),…
Descriptors: Comparative Analysis, Sample Size, Test Length, Algorithms
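The 4PL response function itself is straightforward; the difficulty the study addresses lies in estimating its four parameters per item. For reference, a sketch of the function with hypothetical parameter values:

    import numpy as np

    def p_4pl(theta, a, b, c, d):
        """Four-parameter logistic IRT model: discrimination a, difficulty b,
        lower asymptote (guessing) c, upper asymptote (slipping) d."""
        return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

    # Easy item with some guessing and a slight ceiling, fairly able examinee
    print(p_4pl(theta=1.5, a=1.2, b=-0.5, c=0.20, d=0.98))
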
Thakur, Khusbu; Kumar, Vinit – New Review of Academic Librarianship, 2022
A vast amount of published scholarly literature is generated every day. Today, one of the biggest challenges for organisations is extracting the knowledge embedded in published scholarly literature for business and research applications. The application of text mining is gaining popularity among researchers, and applications are growing exponentially in…
Descriptors: Information Retrieval, Data Analysis, Research Methodology, Trend Analysis
Xiaying Zheng; Ji Seung Yang; Jeffrey R. Harring – Structural Equation Modeling: A Multidisciplinary Journal, 2022
Measuring change in an educational or psychological construct over time is often achieved by repeatedly administering the same items to the same examinees over time and fitting a second-order latent growth curve model. However, latent growth modeling with full information maximum likelihood (FIML) estimation becomes computationally challenging…
Descriptors: Longitudinal Studies, Data Analysis, Item Response Theory, Structural Equation Models
Wang, Yu; Chiu, Chia-Yi; Köhn, Hans Friedrich – Journal of Educational and Behavioral Statistics, 2023
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity not just in educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Nonparametric Statistics, Test Format, Educational Assessment

Wirt, Edgar – Journal of Experimental Education, 1987
In negotiating to obtain a sample of records from a computer file, it is important to be able to present a simple program that will produce a representative and valid sample. This article describes two procedures: (1) an interval selection method; and (2) a random numbers file. (JAZ)
Descriptors: Algorithms, Business, Computers, Databases
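The article's original program listings are not reproduced here; the sketch below restates the two procedures in modern terms, assuming the records fit in memory: (1) interval (systematic) selection after a random start and (2) selection by drawing random record positions.

    import random

    def interval_sample(records, sample_size):
        """Interval selection: take every k-th record after a random start
        (assumes len(records) >= sample_size)."""
        k = len(records) // sample_size
        start = random.randrange(k)
        return records[start::k][:sample_size]

    def random_number_sample(records, sample_size):
        """Random-numbers method: draw distinct record positions at random."""
        positions = sorted(random.sample(range(len(records)), sample_size))
        return [records[i] for i in positions]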

Tisak, John; Meredith, William – Psychometrika, 1989
A longitudinal factor analysis model that is entirely exploratory is proposed for use with multiple populations. Factorial collapse, period/practice effects, and an invariant and/or stationary factor pattern are allowed. The model is formulated stochastically and implemented via a stage-wise EM algorithm. (TJH)
Descriptors: Algorithms, Factor Analysis, Longitudinal Studies, Maximum Likelihood Statistics

Kiers, Henk A. L.; And Others – Psychometrika, 1992
A modification of the TUCKALS3 algorithm is proposed that handles three-way arrays of order I x J x K for any I. The reduced work space needed for storing data and increased execution speed make the modified algorithm very suitable for use on personal computers. (SLD)
Descriptors: Algorithms, Equations (Mathematics), Least Squares Statistics, Mathematical Models

Cappelleri, Joseph C.; And Others – Evaluation Review, 1994
A statistical power algorithm based on the Fisher Z method is developed for cutoff-based random clinical trials and the single cutoff-point (regression-discontinuity) design that has no randomization. This article quantifies power and sample size estimates for various levels of power and cutoff-based assignment. (Author/SLD)
Descriptors: Algorithms, Cutting Scores, Estimation (Mathematics), Power (Statistics)
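The cutoff-based adjustments that are the article's contribution are not reproduced here; the sketch below shows only the generic Fisher Z power and sample-size calculation such an algorithm builds on (two-sided test of a zero correlation, normal approximation).

    from math import atanh, ceil, sqrt
    from scipy.stats import norm

    def fisher_z_power(r, n, alpha=0.05):
        """Approximate power to detect a correlation of size r with n cases."""
        se = 1.0 / sqrt(n - 3)
        return norm.cdf(abs(atanh(r)) / se - norm.ppf(1.0 - alpha / 2.0))

    def fisher_z_sample_size(r, power=0.80, alpha=0.05):
        """Smallest n reaching the requested power for a correlation of size r."""
        needed = ((norm.ppf(power) + norm.ppf(1.0 - alpha / 2.0)) / abs(atanh(r))) ** 2 + 3
        return ceil(needed)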

Clauser, Brian E.; And Others – Journal of Educational Measurement, 1995
A scoring algorithm for performance assessments is described that is based on expert judgments but requires the rating of only a sample of performances. A regression-based policy capturing procedure was implemented for clinicians evaluating skills of 280 medical students. Results demonstrate the usefulness of the algorithm. (SLD)
Descriptors: Algorithms, Clinical Diagnosis, Computer Simulation, Educational Assessment
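Policy capturing here means regressing the experts' ratings of the sampled performances on machine-scorable features and then applying the fitted weights to score every performance. A generic sketch of that two-step idea (the feature set is a hypothetical placeholder, not the authors' variables):

    import numpy as np

    def capture_policy(features_rated, expert_ratings):
        """Least-squares weights that reproduce the expert ratings from
        item-level features, fit on the expert-rated subsample only."""
        X = np.column_stack([np.ones(len(features_rated)), features_rated])
        weights, *_ = np.linalg.lstsq(X, expert_ratings, rcond=None)
        return weights

    def score_all(features_all, weights):
        """Apply the captured scoring policy to every performance."""
        X = np.column_stack([np.ones(len(features_all)), features_all])
        return X @ weights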