Showing 1 to 15 of 22 results
Peer reviewed
Direct link
Said Al Faraby; Adiwijaya Adiwijaya; Ade Romadhony – International Journal of Artificial Intelligence in Education, 2024
Questioning plays a vital role in education, directing knowledge construction and assessing students' understanding. However, creating high-level questions requires significant creativity and effort. Automatic question generation is expected to facilitate generating questions that are not only fluent and relevant but also educationally valuable.…
Descriptors: Test Items, Automation, Computer Software, Input Output Analysis
Peer reviewed
Direct link
Harold Doran; Tetsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
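The generalized objective function in the article above is not reproduced in the snippet; as a rough illustration of the standard baseline such algorithms extend, maximum-information item selection under a 2PL model (an assumption here, not the authors' method) can be sketched as:

```python
import math

def item_information(theta, a, b):
    # Fisher information of a 2PL item at ability theta:
    # I(theta) = a^2 * P * (1 - P), where P is the probability
    # of a correct response given discrimination a and difficulty b.
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta, item_bank, administered):
    # Classic CAT selection rule: pick the unadministered item with
    # maximum information at the current ability estimate.
    return max(
        (i for i in range(len(item_bank)) if i not in administered),
        key=lambda i: item_information(theta, *item_bank[i]),
    )

bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 2.0)]  # (a, b) pairs
print(select_next_item(0.5, bank, administered={0}))  # -> 2
```

Objective functions like the one the article proposes generalize this rule so the same selector can also serve constraints such as content balancing and exposure control.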
Peer reviewed
Direct link
Kuan-Yu Jin; Wai-Lok Siu – Journal of Educational Measurement, 2025
Educational tests often have a cluster of items linked by a common stimulus ("testlet"). In such a design, the dependencies induced among items are called "testlet effects." In particular, the directional testlet effect (DTE) refers to a recursive influence whereby responses to earlier items can positively or negatively affect…
Descriptors: Models, Test Items, Educational Assessment, Scores
Peer reviewed
Direct link
Lei Guo; Wenjie Zhou; Xiao Li – Journal of Educational and Behavioral Statistics, 2024
The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model, for tests using testlets consisting of MC items. The MC-CDT model uses examinees' original responses to MC items instead of dichotomously scored…
Descriptors: Multiple Choice Tests, Diagnostic Tests, Accuracy, Computer Software
Peer reviewed
Direct link
Ö. Emre C. Alagöz; Thorsten Meiser – Educational and Psychological Measurement, 2024
To improve the validity of self-report measures, researchers should control for response style (RS) effects, which can be achieved with IRTree models. A traditional IRTree model considers a response as a combination of distinct decision-making processes, where the substantive trait affects the decision on response direction, while decisions about…
Descriptors: Item Response Theory, Validity, Self Evaluation (Individuals), Decision Making
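The decomposition of a response into sequential decisions described above can be illustrated with a minimal IRTree sketch for a 4-category item (a direction node followed by an extremity node). The logistic parameterization and parameter names here are assumptions for illustration, not the authors' model:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def irtree_category_probs(theta_trait, theta_ers, b_dir, b_ext):
    # Node 1: response direction (agree vs. disagree), driven by the
    # substantive trait. Node 2: extreme vs. moderate, driven by an
    # extreme-response-style (ERS) trait. Categories:
    # 1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree.
    p_agree = logistic(theta_trait - b_dir)
    p_extreme = logistic(theta_ers - b_ext)
    return {
        1: (1 - p_agree) * p_extreme,
        2: (1 - p_agree) * (1 - p_extreme),
        3: p_agree * (1 - p_extreme),
        4: p_agree * p_extreme,
    }

probs = irtree_category_probs(theta_trait=0.8, theta_ers=-0.3, b_dir=0.0, b_ext=0.5)
assert abs(sum(probs.values()) - 1.0) < 1e-12  # category probabilities sum to 1
```

Because each node has its own latent trait, response-style variance is separated from the substantive trait, which is the validity argument the snippet alludes to.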
Peer reviewed
Direct link
Hongfei Ye; Jian Xu; Danqing Huang; Meng Xie; Jinming Guo; Junrui Yang; Haiwei Bao; Mingzhi Zhang; Ce Zheng – Discover Education, 2025
This study evaluates the performance of large language models (LLMs) on the Chinese Postgraduate Medical Entrance Examination (CPGMEE), as well as the hallucinations LLMs produce, and investigates their implications for medical education. We curated 10 trials of mock CPGMEE to evaluate the performances of 4 LLMs (GPT-4.0, ChatGPT, QWen 2.1 and Ernie 4.0).…
Descriptors: College Entrance Examinations, Foreign Countries, Computational Linguistics, Graduate Medical Education
Bryan R. Drost; Char Shryock – Phi Delta Kappan, 2025
Creating assessment questions aligned to standards is a time-consuming task for teachers, but large language models such as ChatGPT can help. Bryan Drost & Char Shryock describe a three-step process for using ChatGPT to create assessments: 1) Ask ChatGPT to break standards into measurable targets. 2) Determine how much time to spend on each…
Descriptors: Artificial Intelligence, Computer Software, Technology Integration, Teaching Methods
Peer reviewed
Direct link
Jila Niknejad; Margaret Bayer – International Journal of Mathematical Education in Science and Technology, 2025
In Spring 2020, the need to redesign online assessments to preserve integrity became a priority for many educators. Many of us found methods to proctor examinations using Zoom and proctoring software. Such examinations pose their own issues. To reduce the technical difficulties and cost, many Zoom-proctored examination sessions were shortened;…
Descriptors: Mathematics Instruction, Mathematics Tests, Computer Assisted Testing, Computer Software
Peer reviewed
PDF on ERIC: Download full text
Musa Adekunle Ayanwale; Mdutshekelwa Ndlovu – Journal of Pedagogical Research, 2024
The COVID-19 pandemic has had a significant impact on high-stakes testing, including the national benchmark tests in South Africa. Current linear testing formats have been criticized for their limitations, leading to a shift towards Computerized Adaptive Testing (CAT). Assessments with CAT are more precise and take less time. Evaluation of CAT…
Descriptors: Adaptive Testing, Benchmarking, National Competency Tests, Computer Assisted Testing
Peer reviewed
PDF on ERIC: Download full text
Owen Henkel; Hannah Horne-Robinson; Maria Dyshel; Greg Thompson; Ralph Abboud; Nabil Al Nahin Ch; Baptiste Moreau-Pernet; Kirk Vanacore – Journal of Learning Analytics, 2025
This paper introduces AMMORE, a new dataset of 53,000 math open-response question-answer pairs from Rori, a mathematics learning platform used by middle and high school students in several African countries. Using this dataset, we conducted two experiments to evaluate the use of large language models (LLM) for grading particularly challenging…
Descriptors: Learning Analytics, Learning Management Systems, Mathematics Instruction, Middle School Students
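Grading open-response math answers at scale typically starts with cheap answer normalization before escalating hard cases to an LLM. The following normalization-based exact-match grader is a minimal sketch of that first stage (an assumption for illustration, not the AMMORE pipeline):

```python
def normalize(ans: str) -> str:
    # Canonicalize a short math answer: trim, lowercase, drop spaces,
    # and collapse numerically equal forms ("0.50" == "0.5", "2.0" == "2").
    s = ans.strip().lower().replace(" ", "")
    try:
        num = float(s)
        return repr(int(num)) if num == int(num) else repr(num)
    except ValueError:
        return s  # non-numeric answers compared as-is

def exact_match(student: str, key: str) -> bool:
    return normalize(student) == normalize(key)

assert exact_match(" 0.50", "0.5")
assert not exact_match("1/2", "0.5")  # symbolic equivalence needs a richer
                                      # checker (e.g., a CAS or an LLM grader)
```

Responses the normalizer cannot match are exactly the "particularly challenging" cases where LLM grading becomes worth its cost.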
Peer reviewed
Direct link
Kunal Sareen – Innovations in Education and Teaching International, 2024
This study examines the proficiency of ChatGPT, an AI language model, in answering questions on the Situational Judgement Test (SJT), a widely used assessment tool for evaluating the fundamental competencies of medical graduates in the UK. A total of 252 SJT questions from the "Oxford Assess and Progress: Situational Judgement" Test…
Descriptors: Ethics, Decision Making, Artificial Intelligence, Computer Software
Peer reviewed
Direct link
Kyung-Mi O. – Language Testing in Asia, 2024
This study examines the efficacy of artificial intelligence (AI) in creating parallel test items compared to human-made ones. Two test forms were developed: one consisting of 20 existing human-made items and another with 20 new items generated with ChatGPT assistance. Expert reviews confirmed the content parallelism of the two test forms.…
Descriptors: Comparative Analysis, Artificial Intelligence, Computer Software, Test Items
Peer reviewed
Direct link
Roger Young; Emily Courtney; Alexander Kah; Mariah Wilkerson; Yi-Hsin Chen – Teaching of Psychology, 2025
Background: Multiple-choice item (MCI) assessments are burdensome for instructors to develop. Artificial intelligence (AI, e.g., ChatGPT) can streamline the process without sacrificing quality. The quality of AI-generated and human expert-written MCIs is comparable. However, whether the quality of AI-generated MCIs is equally good across various domain-…
Descriptors: Item Response Theory, Multiple Choice Tests, Psychology, Textbooks
Peer reviewed
PDF on ERIC: Download full text
Marli Crabtree; Kenneth L. Thompson; Ellen M. Robertson – HAPS Educator, 2024
Research has suggested that changing one's answer on multiple-choice examinations is more likely to lead to positive academic outcomes. This study aimed to further understand the relationship between changing answer selections and item attributes, student performance, and time within a population of 158 first-year medical students enrolled in a…
Descriptors: Anatomy, Science Tests, Medical Students, Medical Education
Peer reviewed
PDF on ERIC: Download full text
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal differences in individuals' abilities, their standard errors, and the psychometric properties of the test across the two administration modes (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory