Showing 1 to 15 of 28 results
Peer reviewed
Direct link
Cheng, Yiling – Measurement: Interdisciplinary Research and Perspectives, 2023
Computerized adaptive testing (CAT) offers an efficient and highly accurate method for estimating examinees' abilities. In this article, the free version of the Concerto software for CAT is reviewed, with the evaluation divided into three sections: software implementation, the Item Response Theory (IRT) features of CAT, and user experience. Overall,…
Descriptors: Computer Software, Computer Assisted Testing, Adaptive Testing, Item Response Theory
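As a companion to the CAT entry above, here is a minimal Python sketch of the core loop such software implements: selecting the unadministered item with maximum Fisher information under a 2PL model and nudging the ability estimate after each response. The item parameters, the step-size update, and the fixed test length are illustrative assumptions and are not drawn from Concerto or the article.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Item information at ability theta under the 2PL model."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def next_item(theta_hat, a, b, administered):
    """Pick the unadministered item with maximum information at theta_hat."""
    info = fisher_information(theta_hat, a, b)
    info[list(administered)] = -np.inf
    return int(np.argmax(info))

# Illustrative item bank: discrimination a and difficulty b for 20 items.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, size=20)
b = rng.normal(0.0, 1.0, size=20)

theta_hat, administered = 0.0, set()
for _ in range(5):
    j = next_item(theta_hat, a, b, administered)
    administered.add(j)
    response = rng.random() < p_correct(0.5, a[j], b[j])  # simulate a true theta of 0.5
    # Crude step update toward the response; real CAT engines use MLE or EAP estimation.
    theta_hat += 0.5 * ((1.0 if response else 0.0) - p_correct(theta_hat, a[j], b[j]))
```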
Peer reviewed
PDF on ERIC Download full text
Ince Araci, F. Gul; Tan, Seref – International Journal of Assessment Tools in Education, 2022
Computerized Adaptive Testing (CAT) is a beneficial testing technique that decreases the number of items that need to be administered by selecting items in accordance with each individual's own ability level. After CAT applications were first constructed based on unidimensional Item Response Theory (IRT), Multidimensional CAT (MCAT) applications have…
Descriptors: Adaptive Testing, Computer Assisted Testing, Simulation, Item Response Theory
Peer reviewed
Direct link
Fuchimoto, Kazuma; Ishii, Takatoshi; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2022
Educational assessments often require uniform test forms, in which each test form has equivalent measurement accuracy but a different set of items. For uniform test assembly, an important issue is increasing the number of assembled uniform tests. Although many automatic uniform test assembly methods exist, the maximum clique algorithm…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
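To make the uniform-test-assembly idea above concrete, the following Python sketch builds a compatibility graph over candidate test forms (edges connect item-disjoint forms with similar test information) and extracts a mutually compatible set greedily. The candidate forms, information values, and tolerance are invented for illustration; the cited work uses exact maximum-clique algorithms rather than this greedy heuristic.

```python
from itertools import combinations

# Illustrative candidate test forms: each is a frozenset of item IDs with a
# hypothetical test-information value; real assembly generates these under
# IRT-based constraints.
forms = [
    (frozenset({1, 2, 3, 4}), 10.2),
    (frozenset({5, 6, 7, 8}), 10.0),
    (frozenset({2, 5, 9, 10}), 10.1),
    (frozenset({9, 10, 11, 12}), 9.9),
    (frozenset({13, 14, 15, 16}), 10.3),
]

def compatible(i, j, tol=0.5):
    """Two forms are compatible if they share no items and have similar information."""
    (items_i, info_i), (items_j, info_j) = forms[i], forms[j]
    return items_i.isdisjoint(items_j) and abs(info_i - info_j) <= tol

# Build the compatibility graph.
n = len(forms)
adj = {i: set() for i in range(n)}
for i, j in combinations(range(n), 2):
    if compatible(i, j):
        adj[i].add(j)
        adj[j].add(i)

# Greedy clique extraction: every pair in `clique` is mutually compatible,
# so the selected forms are item-disjoint uniform tests.
clique = []
for v in sorted(adj, key=lambda v: len(adj[v]), reverse=True):
    if all(v in adj[u] for u in clique):
        clique.append(v)

print("Uniform forms selected:", clique)
```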
Peer reviewed
Direct link
Uto, Masaki; Okano, Masashi – IEEE Transactions on Learning Technologies, 2021
In automated essay scoring (AES), scores are automatically assigned to essays as an alternative to grading by humans. Traditional AES typically relies on handcrafted features, whereas recent studies have proposed AES models based on deep neural networks to obviate the need for feature engineering. Those AES models generally require training on a…
Descriptors: Essays, Scoring, Writing Evaluation, Item Response Theory
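The contrast drawn above between handcrafted-feature AES and neural AES can be illustrated with a toy handcrafted-feature baseline: a few surface features fit to human scores by least squares. The features, essays, and scores below are hypothetical and are not taken from the article's models or data.

```python
import numpy as np

def handcrafted_features(essay: str) -> np.ndarray:
    """A few classic surface features used by traditional AES systems."""
    words = essay.split()
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    type_token_ratio = len(set(w.lower() for w in words)) / max(n_words, 1)
    n_sentences = max(essay.count(".") + essay.count("!") + essay.count("?"), 1)
    words_per_sentence = n_words / n_sentences
    return np.array([1.0, n_words, avg_word_len, type_token_ratio, words_per_sentence])

# Hypothetical training data: essays with human-assigned scores.
essays = [
    "The experiment was conducted carefully. Results were consistent with the hypothesis.",
    "I like it. It is good.",
    "Measurement invariance matters because comparisons across groups require equivalent scales.",
]
human_scores = np.array([4.0, 1.0, 5.0])

# Fit a linear scoring model over the handcrafted features.
X = np.vstack([handcrafted_features(e) for e in essays])
weights, *_ = np.linalg.lstsq(X, human_scores, rcond=None)

new_essay = "The model explains the data well, and the conclusions follow from the evidence."
predicted = float(handcrafted_features(new_essay) @ weights)
```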
Peer reviewed
PDF on ERIC Download full text
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal the differences in individuals' abilities, their standard errors, and the psychometric properties of the test according to the two modes of test administration (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
Direct link
Choi, Youn-Jeng; Asilkalkan, Abdullah – Measurement: Interdisciplinary Research and Perspectives, 2019
About 45 R packages to analyze data using item response theory (IRT) have been developed over the last decade. This article introduces these 45 R packages with their descriptions and features. It also describes advanced IRT models that can be estimated with R packages, as well as dichotomous and polytomous IRT models, and R packages that contain applications…
Descriptors: Item Response Theory, Data Analysis, Computer Software, Test Bias
Peer reviewed
Direct link
Wang, Jue; Engelhard, George, Jr. – Educational Measurement: Issues and Practice, 2019
In this digital ITEMS module, Dr. Jue Wang and Dr. George Engelhard Jr. describe the Rasch measurement framework for the construction and evaluation of new measures and scales. On the theoretical side, they discuss historical and philosophical perspectives on measurement with a focus on Rasch's concept of specific objectivity and…
Descriptors: Item Response Theory, Evaluation Methods, Measurement, Goodness of Fit
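For reference, the dichotomous Rasch model discussed in this module can be written in the standard form below; the notation (person ability θ_n, item difficulty δ_i) is conventional rather than quoted from the module, and the second line shows why specific objectivity holds: the comparison of two persons does not depend on which item is used.

```latex
% Dichotomous Rasch model: person ability \theta_n, item difficulty \delta_i.
P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}

% Specific objectivity: the item parameter cancels from the log-odds difference.
\log\frac{P_{ni}}{1 - P_{ni}} - \log\frac{P_{mi}}{1 - P_{mi}}
  = (\theta_n - \delta_i) - (\theta_m - \delta_i) = \theta_n - \theta_m
```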
Peer reviewed
PDF on ERIC Download full text
Aybek, Eren Can; Demirtasli, R. Nukhet – International Journal of Research in Education and Science, 2017
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. In addition, it aims to introduce simulation and live CAT software to interested researchers. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
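The nominal response model referenced in the abstract is usually written in Bock's form below, giving the probability of selecting category k of item i; the symbols (category slopes a_{ik} and intercepts c_{ik}) follow common textbook notation and are assumed rather than taken from the article.

```latex
% Bock's nominal response model for a polytomous item i with K_i categories:
P(X_i = k \mid \theta)
  = \frac{\exp(a_{ik}\theta + c_{ik})}{\sum_{m=1}^{K_i} \exp(a_{im}\theta + c_{im})},
  \qquad k = 1, \dots, K_i
```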
Peer reviewed
Direct link
O'Keeffe, Cormac – E-Learning and Digital Media, 2017
International Large Scale Assessments have been producing data about educational attainment for over 60 years. More recently, however, these assessments as tests have become digitally and computationally complex and increasingly rely on the calculative work performed by algorithms. In this article I first consider the coordination of relations…
Descriptors: Achievement Tests, Foreign Countries, Secondary School Students, International Assessment
Peer reviewed
Direct link
Ishii, Takatoshi; Songmuang, Pokpong; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2014
Educational assessments occasionally require uniform test forms for which each test form comprises a different set of items, but the forms meet equivalent test specifications (i.e., qualities indicated by test information functions based on item response theory). We propose two maximum clique algorithms (MCA) for uniform test form assembly. The…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
Peer reviewed
Direct link
Choi, Seung W.; Podrabsky, Tracy; McKinney, Natalie – Applied Psychological Measurement, 2012
Computerized adaptive testing (CAT) enables efficient and flexible measurement of latent constructs. The majority of educational and cognitive measurement constructs are based on dichotomous item response theory (IRT) models. An integral part of developing various components of a CAT system is conducting simulations using both known and empirical…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computer Software, Item Response Theory
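Simulations "using both known and empirical item parameters", as described above, typically start by generating response data from a bank with known parameters. A minimal Python sketch under a 3PL model is shown below; the parameter ranges, bank size, and simulee count are arbitrary illustrations, not values from the article or its software.

```python
import numpy as np

rng = np.random.default_rng(42)

# Known (generating) item parameters for a dichotomous 3PL bank -- illustrative values.
n_items, n_simulees = 50, 1000
a = rng.uniform(0.5, 2.0, n_items)        # discrimination
b = rng.normal(0.0, 1.0, n_items)         # difficulty
c = rng.uniform(0.0, 0.25, n_items)       # pseudo-guessing
theta = rng.normal(0.0, 1.0, n_simulees)  # true abilities

# Probability matrix under the 3PL model: P = c + (1 - c) / (1 + exp(-a(theta - b))).
logit = a[None, :] * (theta[:, None] - b[None, :])
p = c[None, :] + (1.0 - c[None, :]) / (1.0 + np.exp(-logit))

# Simulated dichotomous responses (n_simulees x n_items), ready to feed into a
# CAT engine to evaluate item-selection and stopping rules against known truth.
responses = (rng.random((n_simulees, n_items)) < p).astype(int)
```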
Peer reviewed
Direct link
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien – Applied Psychological Measurement, 2013
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Bayesian Statistics
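The key modeling move described above (treating category thresholds as random rather than fixed effects) can be shown schematically; the notation below is assumed for illustration, and the full generalized unfolding model response function is not reproduced here.

```latex
% Fixed-effects thresholds: constants shared by every respondent.
\tau_k, \qquad k = 1, \dots, K

% Random-threshold idea (RTGUM): person-specific thresholds drawn around a mean threshold.
\tau_{jk} \sim N(\tau_k, \sigma_k^2) \quad \text{for person } j
```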
Peer reviewed
PDF on ERIC Download full text
Ozyurt, Hacer; Ozyurt, Ozcan; Baki, Adnan – Turkish Online Journal of Distance Education, 2012
Assessment is one of the methods used for evaluating learning outcomes. Nowadays, the use of adaptive assessment systems that estimate students' ability levels is becoming widespread in place of traditional assessment systems. An adaptive assessment system evaluates students not only according to the marks they obtain on a test…
Descriptors: Computer System Design, Intelligent Tutoring Systems, Computer Software, Adaptive Testing
Peer reviewed
Direct link
Klinkenberg, S.; Straatemeier, M.; van der Maas, H. L. J. – Computers & Education, 2011
In this paper we present a model for computerized adaptive practice and monitoring. This model is used in the Maths Garden, a web-based monitoring system, which includes a challenging web environment for children to practice arithmetic. Using a new item response model based on the Elo (1978) rating system and an explicit scoring rule, estimates of…
Descriptors: Test Items, Reaction Time, Scoring, Probability
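A minimal Python sketch of the Elo-style updating described above: after each response, the child's rating and the item's rating move in opposite directions by an amount proportional to the difference between the observed and expected score. The K factor and the logistic expectancy are generic Elo choices; the actual Maths Garden scoring rule also incorporates response time and is not reproduced here.

```python
import math

def expected_score(ability: float, difficulty: float) -> float:
    """Elo-style expected score of a correct response (logistic in the rating gap)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def elo_update(ability: float, difficulty: float, score: float, k: float = 0.3):
    """Update the child's rating and the item's rating in opposite directions.

    `score` is the observed outcome; in Maths Garden it also reflects response
    time via an explicit scoring rule, here simplified to accuracy in [0, 1].
    """
    e = expected_score(ability, difficulty)
    ability_new = ability + k * (score - e)
    difficulty_new = difficulty - k * (score - e)
    return ability_new, difficulty_new

# Example: a child (rating 0.0) answers an easy item (rating -1.0) correctly.
ability, difficulty = elo_update(0.0, -1.0, score=1.0)
```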
Peer reviewed
Direct link
Lau, Paul Ngee Kiong; Lau, Sie Hoe; Hong, Kian Sam; Usop, Hasbee – Educational Technology & Society, 2011
The number right (NR) method, in which students pick one option as the answer, is the conventional method for scoring multiple-choice tests; it is heavily criticized for encouraging students to guess and for failing to credit partial knowledge. In addition, computer technology is increasingly used in classroom assessment. This paper investigates the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Computers, Scoring
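To illustrate the criticism above, the sketch below contrasts number-right scoring with a generic elimination-style partial-credit scheme in which correctly eliminated distractors earn credit; this scheme and the example data are illustrative only and are not the method investigated in the article.

```python
# Number-right (NR) scoring: one point per item whose single chosen option is the key.
def number_right(responses, keys):
    return sum(1 for chosen, key in zip(responses, keys) if chosen == key)

# A generic elimination-style partial-credit scheme (illustrative only): students
# cross out options they believe are wrong; each correctly eliminated distractor
# earns a point, and eliminating the key forfeits credit for that item.
def elimination_score(eliminations, keys):
    total = 0
    for crossed_out, key in zip(eliminations, keys):
        if key in crossed_out:
            continue  # key eliminated: no credit for this item
        total += len(crossed_out)  # credit for each eliminated distractor
    return total

keys = ["B", "D", "A"]
picked = ["B", "C", "A"]                        # conventional single-answer responses
crossed = [{"A", "C", "D"}, {"A"}, {"B", "C"}]  # options the student ruled out

nr = number_right(picked, keys)             # 2: full credit only for exact keys
partial = elimination_score(crossed, keys)  # 6: partial knowledge earns credit
```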
Pages: 1  |  2