Showing all 7 results
Peer reviewed
Samah AlKhuzaey; Floriana Grasso; Terry R. Payne; Valentina Tamma – International Journal of Artificial Intelligence in Education, 2024
Designing and constructing pedagogical tests that contain items (i.e. questions) which measure various types of skills for different levels of students equitably is a challenging task. Teachers and item writers alike need to ensure that the quality of assessment materials is consistent, if student evaluations are to be objective and effective.…
Descriptors: Test Items, Test Construction, Difficulty Level, Prediction
Peer reviewed
Roger Young; Emily Courtney; Alexander Kah; Mariah Wilkerson; Yi-Hsin Chen – Teaching of Psychology, 2025
Background: Multiple-choice item (MCI) assessments are burdensome for instructors to develop. Artificial intelligence (AI, e.g., ChatGPT) can streamline the process without sacrificing quality, and the quality of AI-generated MCIs is comparable to that of items written by human experts. However, whether the quality of AI-generated MCIs is equally good across various domain-…
Descriptors: Item Response Theory, Multiple Choice Tests, Psychology, Textbooks
Peer reviewed
MacKenzie D. Sidwell; Landon W. Bonner; Kayla Bates-Brantley; Shengtian Wu – Intervention in School and Clinic, 2024
Oral reading fluency probes are essential for reading assessment, intervention, and progress monitoring. Due to the limited options for choosing oral reading fluency probes, it is important to utilize all available resources such as generative artificial intelligence (AI) like ChatGPT to create oral reading fluency probes. The purpose of this…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Oral Reading
Peer reviewed
Alammary, Ali – IEEE Transactions on Learning Technologies, 2021
Developing effective assessments is a critical component of quality instruction. Assessments are effective when they are well aligned with the learning outcomes, can confirm that all intended learning outcomes are attained, and yield grades that accurately reflect the level of student achievement. Developing effective assessments is not…
Descriptors: Outcomes of Education, Alignment (Education), Student Evaluation, Data Analysis
Beghetto, Ronald A. – ECNU Review of Education, 2019
Purpose: This article, based on an invited talk, aims to explore the relationship among large-scale assessments, creativity, and personalized learning. Design/Approach/Methods: Starting with working definitions of large-scale assessments, creativity, and personalized learning, this article identifies the paradox of combining these three…
Descriptors: Measurement, Creativity, Problem Solving, Artificial Intelligence
Peer reviewed
Yunjiu, Luo; Wei, Wei; Zheng, Ying – SAGE Open, 2022
Artificial intelligence (AI) technologies have the potential to reduce the workload for second language (L2) teachers and test developers. We propose two AI distractor-generating methods for creating Chinese vocabulary items: semantic similarity and visual similarity. Semantic similarity refers to antonyms and synonyms, while visual similarity…
Descriptors: Chinese, Vocabulary Development, Artificial Intelligence, Undergraduate Students
Bejar, Isaac I.; Yocom, Peter – 1986
This report explores an approach to item development and psychometric modeling that explicitly incorporates knowledge about the mental models examinees use to solve items, both into a psychometric model that characterizes test performance and into the item development process. The paper focuses on…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Computer Science, Construct Validity