Publication Date
| Date range | Count |
| In 2026 | 0 |
| Since 2025 | 6 |
| Since 2022 (last 5 years) | 46 |
| Since 2017 (last 10 years) | 107 |
| Since 2007 (last 20 years) | 264 |
Descriptor
| Descriptor | Count |
| Computer Assisted Testing | 368 |
| Models | 368 |
| Test Items | 84 |
| Adaptive Testing | 76 |
| Foreign Countries | 75 |
| Item Response Theory | 71 |
| Test Construction | 68 |
| Student Evaluation | 56 |
| Evaluation Methods | 55 |
| Scoring | 52 |
| Comparative Analysis | 51 |
Location
| Location | Count |
| Australia | 9 |
| United Kingdom (England) | 9 |
| Germany | 8 |
| Netherlands | 6 |
| Spain | 6 |
| Israel | 5 |
| Japan | 5 |
| Asia | 4 |
| Singapore | 4 |
| South Africa | 4 |
| Taiwan | 4 |
Laws, Policies, & Programs
| Law / Program | Count |
| Every Student Succeeds Act… | 2 |
| No Child Left Behind Act 2001 | 2 |
| American Recovery and… | 1 |
| Family Educational Rights and… | 1 |
| Health Insurance Portability… | 1 |
| Individuals with Disabilities… | 1 |
Yang Zhen; Xiaoyan Zhu – Educational and Psychological Measurement, 2024
The pervasive issue of cheating in educational tests has emerged as a paramount concern within the realm of education, prompting scholars to explore diverse methodologies for identifying potential transgressors. While machine learning models have been extensively investigated for this purpose, the untapped potential of TabNet, an intricate deep…
Descriptors: Artificial Intelligence, Models, Cheating, Identification
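
A minimal sketch of the tabular-classification setup this line of work describes: a classifier trained on per-examinee features to flag potentially aberrant test-takers. The study above uses TabNet (available via the pytorch-tabnet package); here a scikit-learn GradientBoostingClassifier stands in, and the features and labels are simulated, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-examinee features: mean response time, score gain over a
# prior attempt, and answer similarity with neighbouring examinees.
X = rng.normal(size=(n, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=1.0, size=n) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```
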
Yan Jin; Jason Fan – Language Assessment Quarterly, 2023
In language assessment, AI technology has been incorporated in task design, assessment delivery, automated scoring of performance-based tasks, score reporting, and provision of feedback. AI technology is also used for collecting and analyzing performance data in language assessment validation. Research has been conducted to investigate the…
Descriptors: Language Tests, Artificial Intelligence, Computer Assisted Testing, Test Format
Carol Eckerly; Yue Jia; Paul Jewsbury – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
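
For reference, a minimal sketch of the 2PL item response theory model that entries like this one build on: the probability of a correct response and the item information at a given ability level. The parameter values below are illustrative, not values from the report.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)
print(p_correct(theta, a=1.2, b=0.0))
print(item_information(theta, a=1.2, b=0.0))
```
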
Nixi Wang – ProQuest LLC, 2022
Measurement errors attributable to cultural issues are complex and challenging for educational assessments. We need assessment tests sensitive to the cultural heterogeneity of populations, and psychometric methods appropriate to address fairness and equity concerns. Built on the research of culturally responsive assessment, this dissertation…
Descriptors: Culturally Relevant Education, Testing, Equal Education, Validity
Fu Chen; Chang Lu; Ying Cui – Education and Information Technologies, 2024
Successful computer-based assessments for learning greatly rely on an effective learner modeling approach to analyze learner data and evaluate learner behaviors. In addition to explicit learning performance (i.e., product data), the process data logged by computer-based assessments provide a treasure trove of information about how learners solve…
Descriptors: Computer Assisted Testing, Problem Solving, Learning Analytics, Learning Processes
Wang, Wenhao; Kingston, Neal M.; Davis, Marcia H.; Tiemann, Gail C.; Tonks, Stephen; Hock, Michael – Educational Measurement: Issues and Practice, 2021
Through the use of item response theory, adaptive tests present students with questions tailored to their proficiency level, making them more efficient than fixed-length tests. Although the adaptive algorithm is straightforward, developing a multidimensional computer adaptive test (MCAT) measure is complex. Evidence-centered design…
Descriptors: Evidence Based Practice, Reading Motivation, Adaptive Testing, Computer Assisted Testing
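
A minimal sketch of the straightforward adaptive step referred to above: at each point the test selects the unadministered item with maximum Fisher information at the current ability estimate. Item parameters are simulated, and the ability update after each item is elided.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.5, 2.0, size=50)   # simulated item discriminations
b = rng.normal(size=50)              # simulated item difficulties

def info(theta, a_i, b_i):
    p = 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))
    return a_i ** 2 * p * (1.0 - p)

administered = set()
theta_hat = 0.0                      # current ability estimate
for step in range(5):
    candidates = [i for i in range(len(a)) if i not in administered]
    best = max(candidates, key=lambda i: info(theta_hat, a[i], b[i]))
    administered.add(best)
    # ...administer item `best`, then re-estimate theta_hat (e.g., EAP/MLE)...
    print(f"step {step + 1}: item {best}, difficulty {b[best]:.2f}")
```
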
Selcuk Acar; Peter Organisciak; Denis Dumas – Journal of Creative Behavior, 2025
In this three-study investigation, we applied various approaches to score drawings created in response to both Form A and Form B of the Torrance Tests of Creative Thinking-Figural (broadly TTCT-F) as well as the Multi-Trial Creative Ideation task (MTCI). We focused on TTCT-F in Study 1, and utilizing a random forest classifier, we achieved 79% and…
Descriptors: Scoring, Computer Assisted Testing, Models, Correlation
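
A minimal sketch of the classifier-based scoring approach the study reports (a random forest over features extracted from drawings). The feature matrix and labels below are random placeholders, so the cross-validated accuracy is only illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))       # placeholder for extracted drawing features
y = rng.integers(0, 2, size=300)     # placeholder originality labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```
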
Chen, Fu; Lu, Chang; Cui, Ying; Gao, Yizhu – IEEE Transactions on Learning Technologies, 2023
Learning outcome modeling is a technical underpinning for the successful evaluation of learners' learning outcomes through computer-based assessments. In recent years, collaborative filtering approaches have gained popularity as a technique to model learners' item responses. However, how to model the temporal dependencies between item responses…
Descriptors: Outcomes of Education, Models, Computer Assisted Testing, Cooperation
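
A minimal sketch of collaborative filtering applied to a learner-by-item response matrix, the technique referenced above: logistic matrix factorization fitted by gradient steps. The response data, dimensions, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_learners, n_items, k = 100, 40, 5
R = rng.integers(0, 2, size=(n_learners, n_items)).astype(float)  # 0/1 responses

U = 0.1 * rng.normal(size=(n_learners, k))   # learner factors
V = 0.1 * rng.normal(size=(n_items, k))      # item factors
lr, reg = 0.05, 0.01
for _ in range(200):
    P = 1.0 / (1.0 + np.exp(-U @ V.T))       # predicted probability of a correct response
    G = P - R                                # gradient of the logistic loss
    U -= lr * (G @ V + reg * U)
    V -= lr * (G.T @ U + reg * V)

print("training accuracy:", float(((P > 0.5) == R).mean()))
```
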
Student Approaches to Generating Mathematical Examples: Comparing E-Assessment and Paper-Based Tasks
George Kinnear; Paola Iannone; Ben Davies – Educational Studies in Mathematics, 2025
Example-generation tasks have been suggested as an effective way to both promote students' learning of mathematics and assess students' understanding of concepts. E-assessment offers the potential to use example-generation tasks with large groups of students, but there has been little research on this approach so far. Across two studies, we…
Descriptors: Mathematics Skills, Learning Strategies, Skill Development, Student Evaluation
Mo Zhang; Paul Deane; Andrew Hoang; Hongwen Guo; Chen Li – Educational Measurement: Issues and Practice, 2025
In this paper, we describe two empirical studies that demonstrate the application and modeling of keystroke logs in writing assessments. We illustrate two different approaches of modeling differences in writing processes: analysis of mean differences in handcrafted theory-driven features and use of large language models to identify stable personal…
Descriptors: Writing Tests, Computer Assisted Testing, Keyboarding (Data Entry), Writing Processes
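
A minimal sketch of the handcrafted, theory-driven feature route for keystroke logs described above: pause and burst statistics computed from (timestamp, key) events. The event log and the 2-second pause threshold are invented for illustration.

```python
import numpy as np

events = [(0, "T"), (180, "h"), (320, "e"), (2400, " "),
          (2550, "c"), (2700, "a"), (2860, "t")]          # (timestamp_ms, key)
times = np.array([t for t, _ in events], dtype=float)
gaps = np.diff(times)                # inter-keystroke intervals in ms

PAUSE_MS = 2000                      # assumed threshold for a "long pause"
bursts, count = [], 1
for g in gaps:
    if g > PAUSE_MS:
        bursts.append(count)
        count = 1
    else:
        count += 1
bursts.append(count)

features = {"mean_iki_ms": float(gaps.mean()),
            "long_pauses": int((gaps > PAUSE_MS).sum()),
            "longest_burst": max(bursts)}
print(features)
```
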
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges for their broad-scale adoption: A technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
Tan, Hongye; Wang, Chong; Duan, Qinglong; Lu, Yu; Zhang, Hu; Li, Ru – Interactive Learning Environments, 2023
Automatic short answer grading (ASAG) is a challenging task that aims to predict a score for a given student response. Previous work on ASAG mainly uses non-neural or neural methods. However, the former depends on handcrafted features and is limited by its inflexibility and high cost, and the latter ignores global word co-occurrence in a corpus and…
Descriptors: Automation, Grading, Computer Assisted Testing, Graphs
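
A minimal sketch of the global word co-occurrence structure this entry points to: co-occurrence counts within a sliding window, stored as weighted graph edges. The corpus and window size are toy stand-ins, not the article's data.

```python
from collections import Counter

corpus = ["the cell membrane controls transport",
          "transport across the membrane is selective"]
window = 3                            # words within this distance co-occur

edges = Counter()
for doc in corpus:
    words = doc.split()
    for i in range(len(words)):
        for j in range(i + 1, min(i + window, len(words))):
            edges[tuple(sorted((words[i], words[j])))] += 1

for (w1, w2), n in edges.most_common(5):
    print(f"{w1} -- {w2}: {n}")
```
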
Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are automatically graded without human raters. Many AES models based on various manually designed features or various architectures of deep neural networks (DNNs) have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
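
A minimal sketch of combining several scoring models rather than relying on one, in the spirit of the ensemble idea above: average the predictions of two regressors. The base models and essay features are stand-ins, not the architectures the article evaluates.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))                               # placeholder essay features
y = 2 * X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500)  # placeholder human scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = [Ridge().fit(X_tr, y_tr),
          RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)]

# Ensemble prediction: simple average of the base models' predicted scores.
pred = np.mean([m.predict(X_te) for m in models], axis=0)
print("RMSE:", float(np.sqrt(np.mean((pred - y_te) ** 2))))
```
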
Aditya Shah; Ajay Devmane; Mehul Ranka; Prathamesh Churi – Education and Information Technologies, 2024
Online learning has grown with advances in technology and its flexibility. Online examinations measure students' knowledge and skills. Traditional question papers suffer from inconsistent difficulty levels, arbitrary question allocation, and poor grading. The suggested model calibrates question paper difficulty based on student performance to…
Descriptors: Computer Assisted Testing, Difficulty Level, Grading, Test Construction
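
A minimal sketch of calibrating item difficulty from observed performance: the classical difficulty index is the proportion of examinees answering an item correctly, which can then guide how questions are allocated to papers. The response matrix and the easy/hard cut-offs below are simulated assumptions, not the article's model.

```python
import numpy as np

rng = np.random.default_rng(5)
responses = rng.integers(0, 2, size=(200, 12))   # 200 students x 12 items, scored 0/1

difficulty = responses.mean(axis=0)              # classical p-value (higher = easier)
labels = np.where(difficulty > 0.7, "easy",
                  np.where(difficulty < 0.3, "hard", "medium"))
for i, (p, lab) in enumerate(zip(difficulty, labels), start=1):
    print(f"item {i:2d}: p = {p:.2f} ({lab})")
```
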
Patel, Nirmal; Sharma, Aditya; Shah, Tirth; Lomas, Derek – Journal of Educational Data Mining, 2021
Process Analysis is an emerging approach to discover meaningful knowledge from temporal educational data. The study presented in this paper shows how we used Process Analysis methods on the National Assessment of Educational Progress (NAEP) test data for modeling and predicting student test-taking behavior. Our process-oriented data exploration…
Descriptors: Learning Analytics, National Competency Tests, Evaluation Methods, Prediction
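
A minimal sketch of one process-analysis step on test-taking log data: tabulating transition frequencies between logged actions, a common starting point for process-oriented data exploration. The action sequences are invented, not NAEP data.

```python
from collections import Counter

sequences = [["read", "answer", "review", "answer", "submit"],
             ["read", "skip", "read", "answer", "submit"]]

transitions = Counter()
for seq in sequences:
    transitions.update(zip(seq, seq[1:]))      # count action-to-action transitions

for (src, dst), n in transitions.most_common():
    print(f"{src} -> {dst}: {n}")
```
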
