Showing 1 to 15 of 548 results
Peer reviewed
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. Most previous research has focused on investigating IPE under linear testing; there is a lack of IPE research under adaptive testing. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
Peer reviewed
Cheng, Yiling – Measurement: Interdisciplinary Research and Perspectives, 2023
Computerized adaptive testing (CAT) offers an efficient and highly accurate method for estimating examinees' abilities. In this article, the free version of Concerto Software for CAT was reviewed, dividing our evaluation into three sections: software implementation, the Item Response Theory (IRT) features of CAT, and user experience. Overall,…
Descriptors: Computer Software, Computer Assisted Testing, Adaptive Testing, Item Response Theory
Peer reviewed
Chen, Fu; Lu, Chang; Cui, Ying; Gao, Yizhu – IEEE Transactions on Learning Technologies, 2023
Learning outcome modeling is a technical underpinning for the successful evaluation of learners' learning outcomes through computer-based assessments. In recent years, collaborative filtering approaches have gained popularity as a technique to model learners' item responses. However, how to model the temporal dependencies between item responses…
Descriptors: Outcomes of Education, Models, Computer Assisted Testing, Cooperation
Peer reviewed
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
Peer reviewed
Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are automatically graded without human raters. Many AES models based on various manually designed features or various architectures of deep neural networks (DNNs) have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
Peer reviewed
Han, Suhwa; Kang, Hyeon-Ah – Journal of Educational Measurement, 2023
The study presents multivariate sequential monitoring procedures for examining test-taking behaviors online. The procedures monitor examinees' responses and response times and signal aberrancy as soon as a significant change is detected in the test-taking behavior. The study in particular proposes three schemes to track different…
Descriptors: Test Wiseness, Student Behavior, Item Response Theory, Computer Assisted Testing
Jackson, Kayla – ProQuest LLC, 2023
Prior research highlights the benefits of multimode surveys and best practices for item-by-item (IBI) and matrix-type survey items. Some researchers have explored whether mode differences for online and paper surveys persist for these survey item types. However, no studies discuss measurement invariance when both item types and online modes are…
Descriptors: Test Items, Surveys, Error of Measurement, Item Response Theory
Mark L. Davison; David J. Weiss; Joseph N. DeWeese; Ozge Ersan; Gina Biancarosa; Patrick C. Kennedy – Journal of Educational and Behavioral Statistics, 2023
A tree model for diagnostic educational testing is described along with Monte Carlo simulations designed to evaluate measurement accuracy based on the model. The model is implemented in an assessment of inferential reading comprehension, the Multiple-Choice Online Causal Comprehension Assessment (MOCCA), through a sequential, multidimensional,…
Descriptors: Cognitive Processes, Diagnostic Tests, Measurement, Accuracy
Peer reviewed
Meijuan Li; Hongyun Liu; Mengfei Cai; Jianlin Yuan – Education and Information Technologies, 2024
In the human-to-human Collaborative Problem Solving (CPS) test, students' problem-solving process reflects the interdependency among partners. The high interdependency in CPS makes it very sensitive to group composition. For example, the group outcome might be driven by a highly competent group member, so it does not reflect all the individual…
Descriptors: Problem Solving, Computer Assisted Testing, Cooperative Learning, Task Analysis
Peer reviewed
Esther Ulitzsch; Janine Buchholz; Hyo Jeong Shin; Jonas Bertling; Oliver Lüdtke – Large-scale Assessments in Education, 2024
Common indicator-based approaches to identifying careless and insufficient effort responding (C/IER) in survey data scan response vectors or timing data for aberrances, such as patterns signaling straight lining, multivariate outliers, or signals that respondents rushed through the administered items. Each of these approaches is susceptible to…
Descriptors: Response Style (Tests), Attention, Achievement Tests, Foreign Countries
Peer reviewed
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
Peer reviewed
Falk, Carl F.; Feuerstahler, Leah M. – Educational and Psychological Measurement, 2022
Large-scale assessments often use a computer adaptive test (CAT) for selection of items and for scoring respondents. Such tests often assume a parametric form for the relationship between item responses and the underlying construct. Although semi- and nonparametric response functions could be used, there is scant research on their performance in a…
Descriptors: Item Response Theory, Adaptive Testing, Computer Assisted Testing, Nonparametric Statistics
Peer reviewed
PDF on ERIC
Demir, Seda – Journal of Educational Technology and Online Learning, 2022
The purpose of this research was to evaluate the effect of item pool and selection algorithms on computerized classification testing (CCT) performance in terms of some classification evaluation metrics. For this purpose, response patterns for 1000 examinees were generated using the R package, and eight item pools with 150, 300, 450, and 600 items…
Descriptors: Test Items, Item Banks, Mathematics, Computer Assisted Testing
Peer reviewed
Yongze Xu – Educational and Psychological Measurement, 2024
The questionnaire method has always been an important research method in psychology. The increasing prevalence of multidimensional trait measures in psychological research has led researchers to use longer questionnaires. However, questionnaires that are too long will inevitably reduce the quality of the completed questionnaires and the efficiency…
Descriptors: Item Response Theory, Questionnaires, Generalization, Simulation
Peer reviewed
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods