Publication Date
In 2025: 119
Since 2024: 444
Since 2021 (last 5 years): 1617
Since 2016 (last 10 years): 2945
Since 2006 (last 20 years): 4843
Audience
Practitioners: 181
Researchers: 145
Teachers: 120
Policymakers: 37
Administrators: 36
Students: 15
Counselors: 9
Parents: 4
Media Staff: 3
Support Staff: 3
Location
Australia: 166
United Kingdom: 152
Turkey: 124
China: 114
Germany: 107
Canada: 105
Spain: 91
Taiwan: 88
Netherlands: 72
Iran: 68
United States: 67
More…
What Works Clearinghouse Rating
Meets WWC Standards without Reservations: 4
Meets WWC Standards with or without Reservations: 4
Does not meet standards: 5
Selcuk Acar; Peter Organisciak; Denis Dumas – Journal of Creative Behavior, 2025
In this three-study investigation, we applied various approaches to score drawings created in response to both Form A and Form B of the Torrance Tests of Creative Thinking-Figural (broadly TTCT-F) as well as the Multi-Trial Creative Ideation task (MTCI). We focused on TTCT-F in Study 1, and utilizing a random forest classifier, we achieved 79% and…
Descriptors: Scoring, Computer Assisted Testing, Models, Correlation
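The abstract names a random forest classifier reaching roughly 79% scoring accuracy. As a hypothetical sketch only (the authors' drawing features and rubric are not reproduced here), this is how such a classifier might be trained and evaluated with scikit-learn:

```python
# Hypothetical sketch: training a random forest to score drawing responses.
# Feature matrix X (one row per drawing) and labels y are placeholder data;
# the study's actual features and scoring rubric are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))        # placeholder drawing features
y = rng.integers(0, 3, size=500)      # placeholder rubric categories

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```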
Harold Doran; Testsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
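The generalized objective function itself is not given in the snippet; the sketch below shows only the classic special case it generalizes, maximum Fisher information item selection under a 2PL IRT model, with made-up item parameters:

```python
# Illustrative sketch: maximum-information item selection under a 2PL IRT model.
# This is the standard special case, not the generalized objective function
# proposed in the article; item parameters below are invented.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

a = np.array([1.2, 0.8, 1.5, 1.0])   # discrimination (hypothetical)
b = np.array([-0.5, 0.0, 0.7, 1.2])  # difficulty (hypothetical)
administered = {0}                   # items already given
theta_hat = 0.3                      # current ability estimate

info = fisher_information(theta_hat, a, b)
info[list(administered)] = -np.inf   # exclude used items
next_item = int(np.argmax(info))
print("next item:", next_item)
```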
Peter Baldwin; Victoria Yaneva; Kai North; Le An Ha; Yiyun Zhou; Alex J. Mechaber; Brian E. Clauser – Journal of Educational Measurement, 2025
Recent developments in the use of large-language models have led to substantial improvements in the accuracy of content-based automated scoring of free-text responses. The reported accuracy levels suggest that automated systems could have widespread applicability in assessment. However, before they are used in operational testing, other aspects of…
Descriptors: Artificial Intelligence, Scoring, Computational Linguistics, Accuracy
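The snippet does not describe how the scoring systems work internally; as a generic stand-in for content-based automated scoring, the sketch below rates a free-text response by embedding similarity to a reference answer, assuming the sentence-transformers package, an arbitrary public model, and an arbitrary threshold:

```python
# Generic sketch of content-based scoring of a free-text response by
# semantic similarity to a reference answer; not the system studied in
# the article. Assumes the sentence-transformers package is installed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # arbitrary model choice

reference = "Beta blockers reduce heart rate and myocardial oxygen demand."
response = "The drug slows the heart and lowers how much oxygen the heart needs."

embs = model.encode([reference, response], convert_to_tensor=True)
similarity = util.cos_sim(embs[0], embs[1]).item()

# A simple (hypothetical) threshold rule standing in for a trained scorer.
score = 1 if similarity >= 0.6 else 0
print(f"similarity={similarity:.2f}, score={score}")
```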
Chioma Udeozor; Fernando Russo Abegão; Jarka Glassey – British Journal of Educational Technology, 2024
Digital games (DGs) have the potential to immerse learners in simulated real-world environments that foster contextualised and active learning experiences. These also offer opportunities for performance assessments by providing an environment for students to carry out tasks requiring the application of knowledge and skills learned in the…
Descriptors: Educational Technology, Computer Assisted Testing, Game Based Learning, Test Construction
Karyssa A. Courey; Frederick L. Oswald; Steven A. Culpepper – Practical Assessment, Research & Evaluation, 2024
Historically, organizational researchers have fully embraced frequentist statistics and null hypothesis significance testing (NHST). Bayesian statistics is an underused alternative paradigm offering numerous benefits for organizational researchers and practitioners: e.g., accumulating direct evidence for the null hypothesis (vs. 'fail to reject…
Descriptors: Bayesian Statistics, Statistical Distributions, Researchers, Institutional Research
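One benefit the abstract cites, quantifying evidence for the null hypothesis, can be illustrated with a Bayes factor for a point null on a proportion; the data and prior below are invented and not from the article:

```python
# Minimal sketch of one benefit mentioned in the abstract: quantifying
# evidence *for* a null hypothesis with a Bayes factor, which NHST cannot do.
# Example: is an item/coin fair (p = 0.5)? Data are made up.
from math import comb
from scipy.stats import beta as beta_dist
from scipy.integrate import quad

k, n = 52, 100  # hypothetical successes out of n trials

# Marginal likelihood under H0: p fixed at 0.5
m0 = comb(n, k) * 0.5**k * 0.5**(n - k)

# Marginal likelihood under H1: p ~ Beta(1, 1) (uniform prior)
def integrand(p):
    return comb(n, k) * p**k * (1 - p)**(n - k) * beta_dist.pdf(p, 1, 1)

m1, _ = quad(integrand, 0, 1)

bf01 = m0 / m1  # Bayes factor in favour of the null
print(f"BF01 = {bf01:.2f}  (>1 favours the null)")
```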
Pauline Frizelle; Ana Buckley; Tricia Biancone; Anna Ceroni; Darren Dahly; Paul Fletcher; Dorothy V. M. Bishop; Cristina McKean – Journal of Child Language, 2024
This study reports on the feasibility of using the Test of Complex Syntax-Electronic (TECS-E), as a self-directed app, to measure sentence comprehension in children aged 4 to 5½ years old; how testing apps might be adapted for effective independent use; and agreement levels between face-to-face supported computerized and independent computerized…
Descriptors: Language Processing, Computer Software, Language Tests, Syntax
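The agreement analysis mentioned here could take several forms; the sketch below uses one common index, Cohen's kappa, on invented item-level decisions from the two administration modes, which may differ from the study's own statistic:

```python
# Sketch of an agreement analysis between supported and independent
# app-based administration, using Cohen's kappa on invented pass/fail
# decisions; the study's actual agreement statistic may differ.
from sklearn.metrics import cohen_kappa_score

supported   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # face-to-face supported session
independent = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]  # self-directed app session

kappa = cohen_kappa_score(supported, independent)
print(f"Cohen's kappa = {kappa:.2f}")
```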
Chen, Fu; Lu, Chang; Cui, Ying; Gao, Yizhu – IEEE Transactions on Learning Technologies, 2023
Learning outcome modeling is a technical underpinning for the successful evaluation of learners' learning outcomes through computer-based assessments. In recent years, collaborative filtering approaches have gained popularity as a technique to model learners' item responses. However, how to model the temporal dependencies between item responses…
Descriptors: Outcomes of Education, Models, Computer Assisted Testing, Cooperation
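As a minimal illustration of collaborative filtering applied to learner-by-item response data (without the temporal modeling the article adds), a plain matrix-factorization sketch on made-up responses:

```python
# Minimal matrix-factorization sketch of collaborative filtering for
# learner-by-item response data (1 = correct, 0 = incorrect, NaN = unseen).
# It ignores the temporal dependencies the article focuses on.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[1, 0, 1, np.nan],
              [0, 0, np.nan, 1],
              [1, np.nan, 1, 1]], dtype=float)   # toy response matrix

n_learners, n_items, k = R.shape[0], R.shape[1], 2
P = 0.1 * rng.normal(size=(n_learners, k))   # learner factors
Q = 0.1 * rng.normal(size=(n_items, k))      # item factors

lr, reg = 0.1, 0.01
for _ in range(200):                         # simple gradient updates
    for u in range(n_learners):
        for i in range(n_items):
            if np.isnan(R[u, i]):
                continue
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])

print("predicted response matrix:\n", np.round(P @ Q.T, 2))
```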
Henderson, Michael; Chung, Jennifer; Awdry, Rebecca; Ashford, Cliff; Bryant, Mike; Mundy, Matthew; Ryan, Kris – International Journal for Educational Integrity, 2023
Discussions around assessment integrity often focus on the exam conditions and the motivations and values of those who cheated in comparison with those who did not. We argue that discourse needs to move away from a binary representation of cheating. Instead, we propose that the conversation may be more productive and more impactful by focusing on…
Descriptors: College Students, Computer Assisted Testing, Cheating, Ambiguity (Semantics)
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Student Approaches to Generating Mathematical Examples: Comparing E-Assessment and Paper-Based Tasks
George Kinnear; Paola Iannone; Ben Davies – Educational Studies in Mathematics, 2025
Example-generation tasks have been suggested as an effective way to both promote students' learning of mathematics and assess students' understanding of concepts. E-assessment offers the potential to use example-generation tasks with large groups of students, but there has been little research on this approach so far. Across two studies, we…
Descriptors: Mathematics Skills, Learning Strategies, Skill Development, Student Evaluation
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
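For contrast with the deeper approaches the abstract alludes to, a sketch of the kind of "shallow metric" difficulty baseline it criticizes: surface features of a question fed to a linear regressor, with invented difficulty values:

```python
# Sketch of a "shallow metric" difficulty baseline: surface features of a
# question fed to a linear regressor. Questions and difficulty values are
# invented; this is not the approach proposed in the article.
from sklearn.linear_model import LinearRegression

def shallow_features(question: str) -> list[float]:
    words = question.split()
    return [len(words),                                  # question length
            sum(len(w) for w in words) / len(words),     # mean word length
            question.count(",") + question.count(";")]   # clause markers

questions = [
    "What is the capital of France?",
    "Which inference about the narrator is best supported by paragraph three?",
    "Why does the author compare the reef to a city, and what does this suggest?",
]
difficulty = [0.2, 0.6, 0.8]   # hypothetical empirical difficulty values

model = LinearRegression().fit([shallow_features(q) for q in questions], difficulty)
print(model.predict([shallow_features("Who wrote the report mentioned in line 4?")]))
```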
Guher Gorgun; Okan Bulut – Education and Information Technologies, 2024
In light of the widespread adoption of technology-enhanced learning and assessment platforms, there is a growing demand for innovative, high-quality, and diverse assessment questions. Automatic Question Generation (AQG) has emerged as a valuable solution, enabling educators and assessment developers to efficiently produce a large volume of test…
Descriptors: Computer Assisted Testing, Test Construction, Test Items, Automation
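AQG systems in this literature are typically neural or LLM-based; purely as a toy illustration of the task's input-output shape, a template-based sketch with hypothetical facts and templates:

```python
# Toy sketch of template-based automatic question generation (AQG) from
# structured facts; it does not attempt to reproduce the neural or LLM-based
# systems discussed in the literature.
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str

TEMPLATES = {
    "capital_of": "What is the capital of {subject}?",
    "invented_by": "Who invented the {subject}?",
}

def generate_question(fact: Fact) -> tuple[str, str]:
    """Return a (question, answer) pair from a fact and a relation template."""
    question = TEMPLATES[fact.relation].format(subject=fact.subject)
    return question, fact.obj

facts = [Fact("France", "capital_of", "Paris"),
         Fact("telephone", "invented_by", "Alexander Graham Bell")]
for f in facts:
    print(generate_question(f))
```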
Yang Zhen; Xiaoyan Zhu – Educational and Psychological Measurement, 2024
The pervasive issue of cheating in educational tests has emerged as a paramount concern within the realm of education, prompting scholars to explore diverse methodologies for identifying potential transgressors. While machine learning models have been extensively investigated for this purpose, the untapped potential of TabNet, an intricate deep…
Descriptors: Artificial Intelligence, Models, Cheating, Identification
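TabNet is available as the open-source pytorch-tabnet package; below is a sketch of fitting its classifier to tabular test-taking features, with random stand-in data rather than the article's features or labels:

```python
# Sketch of fitting a TabNet classifier to tabular test-taking features
# (e.g., response times, score patterns) to flag possible cheating, using
# the pytorch-tabnet package; features and labels are random stand-ins.
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype(np.float32)   # placeholder features
y = rng.integers(0, 2, size=1000)                    # 1 = flagged, 0 = not

X_train, X_valid = X[:800], X[800:]
y_train, y_valid = y[:800], y[800:]

clf = TabNetClassifier(seed=0)
clf.fit(X_train, y_train,
        eval_set=[(X_valid, y_valid)],
        max_epochs=50, patience=10)
print(clf.predict(X_valid[:5]))
```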
Jian Zhao; Elaine Chapman; Peyman G. P. Sabet – Education Research and Perspectives, 2024
The launch of ChatGPT and the rapid proliferation of generative AI (GenAI) have brought transformative changes to education, particularly in the field of assessment. This has prompted a fundamental rethinking of traditional assessment practices, presenting both opportunities and challenges in evaluating student learning. While numerous studies…
Descriptors: Literature Reviews, Artificial Intelligence, Evaluation Methods, Student Evaluation
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
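One classical DIF check this line of work builds on is logistic-regression DIF (Swaminathan and Rogers); the sketch below runs it on simulated data and is not the estimation method the article itself evaluates:

```python
# Illustrative sketch of logistic-regression DIF: test whether group
# membership predicts an item response beyond the matching total score,
# via a likelihood-ratio test. Data are simulated with uniform DIF.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)            # 0 = reference, 1 = focal
total = rng.normal(size=n)                    # matching variable (total score)
logit = 0.8 * total - 0.5 * group             # focal group disadvantaged
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_reduced = sm.add_constant(np.column_stack([total]))
X_full = sm.add_constant(np.column_stack([total, group]))

fit_reduced = sm.Logit(y, X_reduced).fit(disp=0)
fit_full = sm.Logit(y, X_full).fit(disp=0)

lr_stat = 2 * (fit_full.llf - fit_reduced.llf)
p_value = chi2.sf(lr_stat, df=1)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")
```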