Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 7
Since 2016 (last 10 years) | 17
Since 2006 (last 20 years) | 32

Descriptor
Computer Assisted Testing | 51
Item Response Theory | 51
Scoring | 51
Adaptive Testing | 27
Test Items | 20
Scores | 14
Test Construction | 14
Foreign Countries | 11
Equated Scores | 8
Models | 8
Test Reliability | 8

Author
Davey, Tim | 2
Segall, Daniel O. | 2
Stocking, Martha L. | 2
Uto, Masaki | 2
Wise, Steven L. | 2
Adams, Raymond J. | 1
Ali, Usama S. | 1
Aomi, Itsuki | 1
Aybek, Eren Can | 1
Banse, Holland | 1
Bergstrom, Betty A. | 1

Education Level
Higher Education | 4
Postsecondary Education | 4
Elementary Education | 3
Secondary Education | 3
Elementary Secondary Education | 2

Audience
Researchers | 1

Location
Australia | 1
Canada | 1
China | 1
Denmark | 1
Germany | 1
Malaysia | 1
Netherlands | 1
Philippines | 1
Taiwan | 1
United States | 1

Assessments and Surveys
Center for Epidemiologic… | 1
Graduate Record Examinations | 1
NEO Personality Inventory | 1
Test of English as a Foreign… | 1
Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are automatically graded without human raters. Many AES models based on various manually designed features or various architectures of deep neural networks (DNNs) have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
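The score-level ensembling this abstract gestures at can be illustrated with a minimal sketch: several AES models each return a raw score for an essay, and the ensemble combines them with fixed weights. The two "models" below are invented stand-ins (a length-based and a vocabulary-based toy scorer), not the authors' DNN or feature-based models.

```python
# Minimal sketch of score-level AES ensembling. Both component "models"
# are toy stand-ins that score on a 0-6 scale; real AES models would be
# trained feature-based or DNN scorers.

def length_model(essay: str) -> float:
    # Toy feature-based scorer: longer essays get (slightly) higher scores.
    return min(len(essay.split()) / 50.0, 6.0)

def vocab_model(essay: str) -> float:
    # Toy scorer rewarding lexical variety (type-token ratio on a 0-6 scale).
    words = essay.lower().split()
    return 6.0 * len(set(words)) / max(len(words), 1)

def ensemble_score(essay: str, weights=(0.5, 0.5)) -> float:
    # Weighted average of the component models' raw scores.
    scores = (length_model(essay), vocab_model(essay))
    return sum(w * s for w, s in zip(weights, scores))

essay = "The quick brown fox jumps over the lazy dog and then rests."
print(round(ensemble_score(essay), 2))
```

In practice the weights themselves can be learned from human-rated essays, which is where the single-model-versus-ensemble comparison becomes interesting.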
Casabianca, Jodi M.; Donoghue, John R.; Shin, Hyo Jeong; Chao, Szu-Fu; Choi, Ikkyu – Journal of Educational Measurement, 2023
Using item response theory to model rater effects provides an alternative to standard performance metrics for rater monitoring and diagnosis. To fit such models, the ratings data must be sufficiently connected to estimate rater effects. Due to popular rating designs used in large-scale testing scenarios,…
Descriptors: Item Response Theory, Alternative Assessment, Evaluators, Research Problems
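The connectivity requirement this abstract mentions can be checked mechanically: raters and responses form a bipartite graph (an edge whenever a rater scored a response), and rater severities are only directly comparable within one connected component. A small sketch, with an invented rating design:

```python
# Sketch of a connectivity check for a rating design. Rater effects can only
# be placed on a common scale within a connected component of the
# rater-response bipartite graph.
from collections import defaultdict, deque

def connected_components(ratings):
    """ratings: iterable of (rater, response) pairs; returns component count."""
    graph = defaultdict(set)
    for rater, resp in ratings:
        graph[("rater", rater)].add(("resp", resp))
        graph[("resp", resp)].add(("rater", rater))
    seen, components = set(), 0
    for node in graph:
        if node in seen:
            continue
        components += 1            # start a BFS from an unseen node
        queue = deque([node])
        seen.add(node)
        while queue:
            for nbr in graph[queue.popleft()]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
    return components

# Two disjoint rater pools scoring disjoint response sets -> 2 components,
# so severities across the pools are not comparable.
design = [("r1", "e1"), ("r2", "e1"), ("r3", "e2"), ("r4", "e2")]
print(connected_components(design))
```

Adding a single overlapping rating (say, rater r2 also scores e2) merges the components and makes all four raters comparable.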
Carol Eckerly; Yue Jia; Paul Jewsbury – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
Uto, Masaki; Okano, Masashi – IEEE Transactions on Learning Technologies, 2021
In automated essay scoring (AES), scores are automatically assigned to essays as an alternative to grading by humans. Traditional AES typically relies on handcrafted features, whereas recent studies have proposed AES models based on deep neural networks to obviate the need for feature engineering. Those AES models generally require training on a…
Descriptors: Essays, Scoring, Writing Evaluation, Item Response Theory
Uysal, Ibrahim; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Scoring constructed-response items can be highly difficult, time-consuming, and costly in practice. Improvements in computer technology have enabled automated scoring of constructed-response items. However, the application of automated scoring without an investigation of test equating can lead to serious problems. The goal of this study was to…
Descriptors: Computer Assisted Testing, Scoring, Item Response Theory, Test Format
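The equating concern raised here can be illustrated with the simplest case, linear equating: scores from an automated-scoring form X are placed on the human-scored form Y scale by matching means and standard deviations. The scores below are invented, and the study may well use a different (e.g., IRT-based) equating method.

```python
# Sketch of linear equating: y = mu_Y + (sigma_Y / sigma_X) * (x - mu_X).
# Assumes the two score distributions come from comparable groups.
from statistics import mean, pstdev

def linear_equate(x_scores, y_scores):
    mu_x, mu_y = mean(x_scores), mean(y_scores)
    slope = pstdev(y_scores) / pstdev(x_scores)
    return lambda x: mu_y + slope * (x - mu_x)

auto_scores = [2, 3, 3, 4, 5]   # automated ratings on form X (illustrative)
human_scores = [1, 2, 3, 4, 5]  # human ratings on form Y (illustrative)

to_human_scale = linear_equate(auto_scores, human_scores)
print(round(to_human_scale(3.4), 2))  # the form-X mean maps to the form-Y mean
```

Skipping this step and treating automated scores as interchangeable with human scores is exactly the "serious problem" the abstract warns about.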
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
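The logistic transformation mentioned in this abstract can be sketched as mapping a raw score, expressed as a proportion of the maximum, to a logit, ln(p / (1 - p)), the metric Rasch measures live on. The maximum score and clamping rule below are assumptions for illustration; the study's exact procedure may differ.

```python
# Sketch of a logit transformation of raw scores: p = raw / max,
# logit = ln(p / (1 - p)). Extreme proportions are clamped so the
# transform stays finite at 0% and 100%.
import math

def raw_to_logit(raw: float, max_score: float) -> float:
    p = raw / max_score
    p = min(max(p, 1e-6), 1 - 1e-6)  # keep extreme scores finite
    return math.log(p / (1 - p))

for raw in (10, 20, 30):
    print(raw, round(raw_to_logit(raw, 40), 2))
```

Half marks (20/40) land at 0 logits, and scores symmetric about the midpoint map to logits symmetric about zero, which is what makes the transformed scale usable alongside Rasch person measures.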
Clements, Douglas H.; Banse, Holland; Sarama, Julie; Tatsuoka, Curtis; Joswick, Candace; Hudyma, Aaron; Van Dine, Douglas W.; Tatsuoka, Kikumi K. – Mathematical Thinking and Learning: An International Journal, 2022
Researchers often develop instruments using correctness scores (and a variety of theories and techniques, such as Item Response Theory) for validation and scoring. Less frequently, observations of children's strategies are incorporated into the design, development, and application of assessments. We conducted individual interviews of 833…
Descriptors: Item Response Theory, Computer Assisted Testing, Test Items, Mathematics Tests
Seifried, Jürgen; Brandt, Steffen; Kögler, Kristina; Rausch, Andreas – Cogent Education, 2020
Problem-solving competence is an important requirement in back offices across different industries. Thus, the assessment of problem-solving competence has become an important issue in learning and instruction in vocational and professional contexts. We developed a computer-based low-stakes assessment of problem-solving competence in the domain of…
Descriptors: Foreign Countries, Vocational Education, Student Evaluation, Computer Assisted Testing
Scoular, Claire; Care, Esther – Educational Assessment, 2019
Recent educational and psychological research has highlighted shifting workplace requirements and the change required to equip the emerging workforce with skills for the 21st century. These shifts highlight the issues, and drive the importance, of new methods of assessment. This study addresses some of the issues by describing a scoring…
Descriptors: Cooperation, Problem Solving, Scoring, 21st Century Skills
Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark – International Journal of Testing, 2018
Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…
Descriptors: Computer Assisted Testing, Essay Tests, Writing Evaluation, Scoring
He, Tung-hsien – SAGE Open, 2019
This study employed a mixed-design approach and the Many-Facet Rasch Measurement (MFRM) framework to investigate whether rater bias occurred between the onscreen scoring (OSS) mode and the paper-based scoring (PBS) mode. Nine human raters analytically marked scanned scripts and paper scripts using a six-category (i.e., six-criterion) rating…
Descriptors: Computer Assisted Testing, Scoring, Item Response Theory, Essays
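The Many-Facet Rasch Measurement framework behind this study decomposes the log-odds of success additively into examinee ability, criterion difficulty, and rater severity. A minimal dichotomous sketch with invented (not estimated) parameter values:

```python
# Sketch of a three-facet Rasch model: logit = theta - delta - severity,
# where theta is examinee ability, delta is item/criterion difficulty, and
# severity is the rater's harshness. All values here are illustrative.
import math

def p_success(theta: float, delta: float, severity: float) -> float:
    """Probability of a correct / higher-category response."""
    logit = theta - delta - severity
    return 1 / (1 + math.exp(-logit))

theta, delta = 1.0, 0.2
lenient, severe = -0.5, 0.5  # negative severity = lenient rater
print(round(p_success(theta, delta, lenient), 3))
print(round(p_success(theta, delta, severe), 3))
```

A rater-by-mode bias of the kind the study tests for would show up as a severity estimate that differs systematically between onscreen and paper-based scoring for the same rater.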
Wang, Keyin – ProQuest LLC, 2017
The comparison of item-level computerized adaptive testing (CAT) and multistage adaptive testing (MST) has been researched extensively (e.g., Kim & Plake, 1993; Luecht et al., 1996; Patsula, 1999; Jodoin, 2003; Hambleton & Xing, 2006; Keng, 2008; Zheng, 2012). Various CAT and MST designs have been investigated and compared under the same…
Descriptors: Comparative Analysis, Computer Assisted Testing, Adaptive Testing, Test Items
Aybek, Eren Can; Demirtasli, R. Nukhet – International Journal of Research in Education and Science, 2017
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. It also aims to introduce simulation and live CAT software to interested researchers. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
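The core of the CAT algorithm this article frames can be sketched in a few lines: under the 2PL model, select the unused item with maximum Fisher information at the current ability estimate. The item bank below is invented for illustration.

```python
# Sketch of one step of a CAT algorithm under the 2PL IRT model:
# maximum-information item selection at the current theta estimate.
import math

def p_2pl(theta: float, a: float, b: float) -> float:
    # Probability of a correct response: discrimination a, difficulty b.
    return 1 / (1 + math.exp(-a * (theta - b)))

def info_2pl(theta: float, a: float, b: float) -> float:
    # Fisher information of a 2PL item at theta: a^2 * p * (1 - p).
    p = p_2pl(theta, a, b)
    return a * a * p * (1 - p)

bank = {  # item id -> (discrimination a, difficulty b); invented values
    "i1": (1.0, -1.0),
    "i2": (1.5, 0.0),
    "i3": (0.8, 1.2),
}

def select_item(theta: float, administered: set) -> str:
    candidates = {i: info_2pl(theta, a, b)
                  for i, (a, b) in bank.items() if i not in administered}
    return max(candidates, key=candidates.get)

print(select_item(0.0, administered=set()))
```

At theta = 0 the highly discriminating item with difficulty near 0 ("i2") wins; after it is administered and theta re-estimated, the next most informative remaining item is chosen, which is what makes the test adaptive.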
Dallas, Andrew – ProQuest LLC, 2014
This dissertation examined the overall effects of routing and scoring within a computer adaptive multi-stage framework (ca-MST). Testing in a ca-MST environment has become extremely popular in the testing industry. Testing companies enjoy its efficiency benefits compared to traditional linear testing and its quality-control features over…
Descriptors: Scoring, Computer Assisted Testing, Adaptive Testing, Item Response Theory
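The routing step that this dissertation studies can be sketched in its simplest number-correct form: after a routing module, the examinee's summed score sends them to an easier or harder second-stage module. The cut score and module names below are invented; operational ca-MST designs typically set cuts from IRT-based analyses.

```python
# Sketch of number-correct routing in a ca-MST design: a 6-item routing
# module feeds one of two second-stage modules based on a cut score.

def route(number_correct: int, cut: int = 4) -> str:
    """Return the stage-2 module for a 6-item routing module (illustrative cut)."""
    return "stage2_hard" if number_correct >= cut else "stage2_easy"

responses = [1, 1, 0, 1, 1, 0]  # item-level scores from the routing module
print(route(sum(responses)))
```

The dissertation's question is precisely how choices like this routing rule and the final scoring method interact to affect measurement quality.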
Wolf, Mikyung Kim; Guzman-Orth, Danielle; Lopez, Alexis; Castellano, Katherine; Himelfarb, Igor; Tsutagawa, Fred S. – Educational Assessment, 2016
This article investigates ways to improve the assessment of English learner students' English language proficiency given the current movement of creating next-generation English language proficiency assessments in the Common Core era. In particular, this article discusses the integration of scaffolding strategies, which are prevalently utilized as…
Descriptors: English Language Learners, Scaffolding (Teaching Technique), Language Tests, Language Proficiency