Showing all 13 results
Peer reviewed
Direct link
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Grantee Submission, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. (2020) estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores,…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
Peer reviewed
Direct link
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Journal of Educational Measurement, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores, including…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
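The two records above appear to describe the same study (a grantee submission and the journal version). As a rough illustration of likelihood-based scoring in a joint accuracy-and-speed framework, the sketch below fits a deliberately simplified model (Bernoulli accuracy with a logit link, lognormal reading times). The item parameters, the likelihood form, and all variable names are illustrative assumptions, not the authors' specification.

```python
# Hypothetical sketch: maximum-likelihood scoring of latent accuracy (theta)
# and speed (tau) from item responses and reading times, assuming a
# simplified Bernoulli-accuracy / lognormal-time model (not the exact
# model estimated by Kara et al.).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n_items = 10
b = rng.normal(0, 1, n_items)         # item difficulty (accuracy part)
beta = rng.normal(1.5, 0.3, n_items)  # item time intensity (speed part)
sigma = np.full(n_items, 0.4)         # lognormal SD of item reading times

# Simulate one examinee with true theta = 0.5, tau = 0.2
theta_true, tau_true = 0.5, 0.2
acc = rng.binomial(1, expit(theta_true - b))
log_t = rng.normal(beta - tau_true, sigma)

def neg_loglik(params):
    theta, tau = params
    p = expit(theta - b)
    ll_acc = acc * np.log(p) + (1 - acc) * np.log1p(-p)
    ll_time = -0.5 * ((log_t - (beta - tau)) / sigma) ** 2 - np.log(sigma)
    return -(ll_acc.sum() + ll_time.sum())

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("ML estimates (theta, tau):", fit.x)
```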
Peer reviewed
Direct link
Kohli, Nidhi; Peralta, Yadira; Zopluoglu, Cengiz; Davison, Mark L. – International Journal of Behavioral Development, 2018
Piecewise mixed-effects models are useful for analyzing longitudinal educational and psychological data sets to model segmented change over time. These models offer an attractive alternative to commonly used quadratic and higher-order polynomial models because the coefficients obtained from fitting the model have meaningful substantive…
Descriptors: Hierarchical Linear Modeling, Longitudinal Studies, Maximum Likelihood Statistics, Bayesian Statistics
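For readers unfamiliar with the piecewise idea, the sketch below fits a two-phase linear growth model with a single known knot and a random intercept per student. The knot location, effect sizes, and simulated data are invented for illustration and do not reproduce the authors' models.

```python
# Illustrative piecewise (two-phase linear) mixed-effects model with a
# known knot at time = 4 and a random intercept per student.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_students, times = 100, np.arange(9)
rows = []
for i in range(n_students):
    u = rng.normal(0, 1)                     # random intercept
    for t in times:
        hinge = max(t - 4, 0)                # slope change after the knot
        y = 10 + u + 1.5 * t - 1.0 * hinge + rng.normal(0, 1)
        rows.append({"id": i, "t": t, "hinge": hinge, "y": y})
df = pd.DataFrame(rows)

# Fixed effects give the pre-knot slope (t) and the change in slope (hinge).
model = smf.mixedlm("y ~ t + hinge", df, groups=df["id"]).fit()
print(model.summary())
```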
Peer reviewed
PDF on ERIC: Download full text
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
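The sketch below is only a Monte Carlo skeleton in the spirit of the comparison: it contrasts plain maximum-likelihood factor analysis on continuous versus crudely dichotomized indicators to show why estimator choice matters for mixed item types. It does not implement WLS, WLSMV, ULSMV, MLR, or the Bayesian estimator studied in the paper.

```python
# Toy Monte Carlo skeleton: recover a one-factor loading via ML factor
# analysis when indicators are continuous vs. crudely dichotomized.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
true_loading, n, n_items, reps = 0.7, 500, 6, 50
bias_cont, bias_dich = [], []
for _ in range(reps):
    f = rng.normal(size=(n, 1))
    X = true_loading * f + np.sqrt(1 - true_loading**2) * rng.normal(size=(n, n_items))
    for data, store in [(X, bias_cont), ((X > 0).astype(float), bias_dich)]:
        Z = (data - data.mean(0)) / data.std(0)        # standardize indicators
        lam = FactorAnalysis(n_components=1).fit(Z).components_[0]
        store.append(np.abs(lam).mean() - true_loading)

print("mean loading bias, continuous indicators  :", round(float(np.mean(bias_cont)), 3))
print("mean loading bias, dichotomized indicators:", round(float(np.mean(bias_dich)), 3))
```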
Peer reviewed
PDF on ERIC: Download full text
Faucon, Louis; Kidzinski, Lukasz; Dillenbourg, Pierre – International Educational Data Mining Society, 2016
Large-scale experiments are often expensive and time consuming. Although Massive Open Online Courses (MOOCs) provide a solid and consistent framework for learning analytics, MOOC practitioners are still reluctant to risk resources in experiments. In this study, we suggest a methodology for simulating MOOC students, which allows estimation of…
Descriptors: Markov Processes, Monte Carlo Methods, Bayesian Statistics, Online Courses
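A minimal sketch of the simulation idea, assuming a made-up three-state weekly engagement chain (active, struggling, dropout); the states and transition probabilities are invented and are not taken from the paper.

```python
# Hypothetical sketch: simulating MOOC learners as a simple Markov chain
# over weekly engagement states.
import numpy as np

rng = np.random.default_rng(3)
states = ["active", "struggling", "dropout"]
P = np.array([
    [0.80, 0.15, 0.05],   # from active
    [0.30, 0.50, 0.20],   # from struggling
    [0.00, 0.00, 1.00],   # dropout is absorbing
])

def simulate_student(n_weeks=10):
    s, path = 0, []
    for _ in range(n_weeks):
        s = rng.choice(3, p=P[s])
        path.append(states[s])
    return path

cohort = [simulate_student() for _ in range(1000)]
dropout_rate = np.mean([p[-1] == "dropout" for p in cohort])
print("simulated end-of-course dropout rate:", dropout_rate)
```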
Peer reviewed
Direct link
Lee, Woo-yeol; Cho, Sun-Joo – Journal of Educational Measurement, 2017
Cross-level invariance in a multilevel item response model can be investigated by testing whether the within-level item discriminations are equal to the between-level item discriminations. Testing the cross-level invariance assumption is important to understand constructs in multilevel data. However, in most multilevel item response model…
Descriptors: Test Items, Item Response Theory, Item Analysis, Simulation
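As a rough analogue of the cross-level invariance check, the sketch below uses a two-level linear factor model (a simplification of the multilevel item response setting): within-level loadings are estimated from group-mean-centered data and between-level loadings from group means, then compared. All loadings, sample sizes, and variable names are illustrative assumptions.

```python
# Toy check of cross-level invariance in a two-level linear factor model:
# compare loadings estimated at the within and between levels.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(8)
n_groups, n_per, lam_w, lam_b = 200, 30, 0.6, 0.6    # equal => invariance holds
f_b = rng.normal(size=(n_groups, 1))                  # between-level factor
f_w = rng.normal(size=(n_groups, n_per, 1))           # within-level factor
e = rng.normal(size=(n_groups, n_per, 6))
X = lam_b * f_b[:, None, :] + lam_w * f_w + 0.6 * e   # 6 items

X2 = X.reshape(-1, 6)
groups = np.repeat(np.arange(n_groups), n_per)
group_means = X.mean(axis=1)
within = X2 - group_means[groups]                     # group-mean centering

lam_within = np.abs(FactorAnalysis(n_components=1).fit(within).components_[0])
lam_between = np.abs(FactorAnalysis(n_components=1).fit(group_means).components_[0])
print("within-level loadings :", np.round(lam_within, 2))
print("between-level loadings:", np.round(lam_between, 2))
```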
Peer reviewed
PDF on ERIC: Download full text
Pfaffel, Andreas; Spiel, Christiane – Practical Assessment, Research & Evaluation, 2016
Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…
Descriptors: Correlation, Sample Size, Error of Measurement, Accuracy
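For reference, the sketch below implements the standard large-sample (Thorndike Case II) correction for direct range restriction that this literature builds on; the numeric example is invented.

```python
# Thorndike Case II correction for direct range restriction on x:
# r_c = r*u / sqrt(r^2*u^2 + 1 - r^2), where u = SD_unrestricted / SD_restricted.
import math

def correct_range_restriction(r_restricted: float, u: float) -> float:
    """Large-sample correction of a correlation for direct range restriction."""
    num = r_restricted * u
    den = math.sqrt(r_restricted**2 * u**2 + 1.0 - r_restricted**2)
    return num / den

# Example: observed r = 0.30 in a selected sample whose SD on x is half
# the unrestricted SD (u = 2.0).
print(correct_range_restriction(0.30, 2.0))   # ~0.53
```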
Peer reviewed
PDF on ERIC: Download full text
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S. – Educational Research and Reviews, 2016
The aim of this study is to determine the difference in variance between maximum likelihood and expected a posteriori (EAP) estimation methods as a function of the number of items in an aptitude test. The variance reflects the accuracy achieved by the maximum likelihood and Bayesian estimation methods. The test consists of three subtests, each with 40 multiple-choice…
Descriptors: Maximum Likelihood Statistics, Computation, Item Response Theory, Test Items
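A toy version of the comparison, assuming a Rasch-type model with 40 dichotomous items: ability is estimated for each simulated examinee by maximum likelihood and by expected a posteriori (EAP) quadrature, and the variances of the two sets of estimates are compared. Item and person parameters are simulated, not taken from the study's aptitude test.

```python
# Toy comparison of ML vs. EAP ability estimates under a Rasch-type model.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit

rng = np.random.default_rng(4)
n_items, n_persons = 40, 500
b = rng.normal(0, 1, n_items)
theta = rng.normal(0, 1, n_persons)
resp = rng.binomial(1, expit(theta[:, None] - b))

quad = np.linspace(-4, 4, 81)              # quadrature nodes for EAP
prior = np.exp(-0.5 * quad**2)
prior /= prior.sum()

def loglik(t, x):
    p = expit(t - b)
    return (x * np.log(p) + (1 - x) * np.log1p(-p)).sum()

ml_est, eap_est = [], []
for x in resp:
    ml_est.append(minimize_scalar(lambda t: -loglik(t, x),
                                  bounds=(-6, 6), method="bounded").x)
    post = prior * np.exp([loglik(q, x) for q in quad])
    eap_est.append((quad * post).sum() / post.sum())

print("variance of ML estimates :", np.var(ml_est))
print("variance of EAP estimates:", np.var(eap_est))   # shrinks toward the prior
```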
Peer reviewed
Direct link
Koziol, Natalie A. – Applied Measurement in Education, 2016
Testlets, or groups of related items, are commonly included in educational assessments due to their many logistical and conceptual advantages. Despite their advantages, testlets introduce complications into the theory and practice of educational measurement. Responses to items within a testlet tend to be correlated even after controlling for…
Descriptors: Classification, Accuracy, Comparative Analysis, Models
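The simulation below illustrates the correlation the abstract refers to, assuming a Rasch-type model with an added person-specific testlet effect: at a fixed ability value, items in the same testlet remain positively correlated. All parameter values are invented.

```python
# Illustrative testlet effect: items in the same testlet share a random
# effect, so their responses stay correlated even at fixed ability.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(5)
n_persons, theta = 20000, 0.0             # hold ability fixed at theta = 0
b = np.zeros(4)                           # four items in one testlet
gamma = rng.normal(0, 1.0, n_persons)     # person-specific testlet effect
p = expit(theta + gamma[:, None] - b)
resp = rng.binomial(1, p)

print("within-testlet item correlations at fixed theta:")
print(np.round(np.corrcoef(resp, rowvar=False), 2))
```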
Peer reviewed
Direct link
Trikalinos, Thomas A.; Hoaglin, David C.; Small, Kevin M.; Terrin, Norma; Schmid, Christopher H. – Research Synthesis Methods, 2014
Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies, account for the within-study correlation between the tests'…
Descriptors: Meta Analysis, Diagnostic Tests, Accuracy, Comparative Analysis
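Far simpler than the joint models proposed in this paper, the sketch below pools logit-sensitivities from a few hypothetical studies with a univariate DerSimonian-Laird random-effects model, just to show the basic meta-analytic machinery; the study data are invented.

```python
# Univariate DerSimonian-Laird random-effects pooling of logit-sensitivity
# (not the joint multiple-index-test models proposed in the paper).
import numpy as np

# (true positives, false negatives) per hypothetical study
tp = np.array([45, 30, 60, 25, 50])
fn = np.array([5, 10, 15, 5, 12])
y = np.log((tp + 0.5) / (fn + 0.5))               # logit sensitivity
v = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)           # within-study variance

w = 1.0 / v
y_fixed = (w * y).sum() / w.sum()
Q = (w * (y - y_fixed) ** 2).sum()
C = w.sum() - (w**2).sum() / w.sum()
tau2 = max(0.0, (Q - (len(y) - 1)) / C)           # between-study variance
w_star = 1.0 / (v + tau2)
y_re = (w_star * y).sum() / w_star.sum()
pooled_sens = 1.0 / (1.0 + np.exp(-y_re))
print("pooled sensitivity (random effects): %.3f" % pooled_sens)
```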
Peer reviewed
Direct link
He, Wei; Reckase, Mark D. – Educational and Psychological Measurement, 2014
For computerized adaptive tests (CATs) to work well, they must have an item pool with sufficient numbers of good quality items. Many researchers have pointed out that, in developing item pools for CATs, not only is the item pool size important but also the distribution of item parameters and practical considerations such as content distribution…
Descriptors: Item Banks, Test Length, Computer Assisted Testing, Adaptive Testing
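A minimal sketch of the selection step an item pool has to support, assuming a two-parameter logistic model and maximum-information item selection; the pool itself is randomly generated for illustration.

```python
# Maximum-information item selection for a CAT under a 2PL model.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(6)
a = rng.lognormal(0, 0.3, 200)            # discriminations
b = rng.normal(0, 1, 200)                 # difficulties
used = np.zeros(200, dtype=bool)

def next_item(theta_hat):
    p = expit(a * (theta_hat - b))
    info = a**2 * p * (1 - p)             # 2PL Fisher information
    info[used] = -np.inf                  # never reuse an item
    return int(np.argmax(info))

item = next_item(0.0)
used[item] = True
print("first item selected:", item, "a=%.2f b=%.2f" % (a[item], b[item]))
```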
Peer reviewed
Direct link
He, Wei; Wolfe, Edward W. – Educational and Psychological Measurement, 2012
In administration of individually administered intelligence tests, items are commonly presented in a sequence of increasing difficulty, and test administration is terminated after a predetermined number of incorrect answers. This practice produces stochastically censored data, a form of nonignorable missing data. By manipulating four factors…
Descriptors: Individual Testing, Intelligence Tests, Test Items, Test Length
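The sketch below simulates the stop rule described in the abstract, assuming items ordered by increasing difficulty and a discontinue rule of three consecutive incorrect answers; the rule length and item parameters are illustrative.

```python
# Discontinue-rule simulation: items ordered by difficulty, administration
# stops after 3 consecutive incorrect answers, censoring the remaining items.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(7)
b = np.sort(rng.normal(0, 1, 30))         # increasing difficulty
theta = 0.0

responses, wrong_streak = [], 0
for bi in b:
    x = rng.binomial(1, expit(theta - bi))
    responses.append(x)
    wrong_streak = wrong_streak + 1 if x == 0 else 0
    if wrong_streak == 3:                 # discontinue rule
        break

n_admin = len(responses)
print("items administered:", n_admin, "| items censored:", 30 - n_admin)
```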
Peer reviewed
Direct link
Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2009
This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
Descriptors: Classification, Scoring, Item Response Theory, Measurement
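A minimal measurement-decision-theory sketch along the lines described in the abstract: mastery-state priors and per-item conditional correct-response probabilities are combined via Bayes' rule to classify a response pattern. The two states, priors, and probabilities are invented for illustration.

```python
# Classify an examinee into a mastery state from the item response pattern.
import numpy as np

states = ["non-master", "master"]
prior = np.array([0.5, 0.5])
# P(correct on item j | state): rows = states, columns = items
p_correct = np.array([
    [0.3, 0.4, 0.2, 0.35, 0.3],   # non-master
    [0.8, 0.9, 0.7, 0.85, 0.8],   # master
])

def classify(response_pattern):
    x = np.asarray(response_pattern)
    lik = np.prod(p_correct**x * (1 - p_correct)**(1 - x), axis=1)
    post = prior * lik
    post /= post.sum()
    return states[int(np.argmax(post))], post

print(classify([1, 1, 0, 1, 1]))   # -> ('master', posterior probabilities)
```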