Gardner, John; O'Leary, Michael; Yuan, Li – Journal of Computer Assisted Learning, 2021
Artificial Intelligence is at the heart of modern society with computers now capable of making process decisions in many spheres of human activity. In education, there has been intensive growth in systems that make formal and informal learning an anytime, anywhere activity for billions of people through online open educational resources and…
Descriptors: Artificial Intelligence, Educational Assessment, Formative Evaluation, Summative Evaluation

Conejo, Ricardo; Guzmán, Eduardo; Trella, Monica – International Journal of Artificial Intelligence in Education, 2016
This article describes the evolution and current state of the domain-independent Siette assessment environment. Siette supports different assessment methods--including classical test theory, item response theory, and computer adaptive testing--and integrates them with multidimensional student models used by intelligent educational systems.…
Descriptors: Automation, Student Evaluation, Intelligent Tutoring Systems, Item Banks
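
The Siette abstract above names item response theory and computer adaptive testing among the supported assessment methods. As a point of reference only, the sketch below shows a minimal, generic form of 2PL-based adaptive item selection in Python; it is not Siette's implementation, and the item bank, parameter values, and function names are hypothetical.

import math

# Minimal, generic sketch of adaptive item selection under a two-parameter
# logistic (2PL) IRT model. Illustrative only; not Siette's actual code.

def prob_correct(theta, a, b):
    # 2PL probability of a correct response for ability theta,
    # discrimination a, and difficulty b.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of a 2PL item at ability theta.
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, item_bank, administered):
    # Choose the unadministered item with maximum information at theta.
    candidates = [item for item in item_bank if item["id"] not in administered]
    return max(candidates, key=lambda item: item_information(theta, item["a"], item["b"]))

# Hypothetical three-item bank; parameters are made up for illustration.
bank = [
    {"id": 1, "a": 1.2, "b": -0.5},
    {"id": 2, "a": 0.8, "b": 0.0},
    {"id": 3, "a": 1.5, "b": 0.7},
]
print(select_next_item(theta=0.3, item_bank=bank, administered={1}))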

Bennett, Randy Elliot; Steffen, Manfred; Singley, Mark Kevin; Morley, Mary; Jacquemin, Daniel – Journal of Educational Measurement, 1997
Scoring accuracy and item functioning were studied for an open-ended response type test in which correct answers can take many different surface forms. Results with 1,864 graduate school applicants showed automated scoring to approximate the accuracy of multiple-choice scoring. Items functioned similarly to other item types being considered. (SLD)
Descriptors: Adaptive Testing, Automation, College Applicants, Computer Assisted Testing
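
The Bennett et al. entry describes automated scoring of an open-ended response type in which correct answers can take many different surface forms. The sketch below illustrates one generic way such equivalence might be checked, using symbolic comparison with the sympy library; it is an assumption about the general technique, not the scoring engine evaluated in the study, and the answer strings are invented examples.

from sympy import simplify, sympify

def is_equivalent(response, key):
    # Credit a response if it is algebraically equivalent to the answer key.
    try:
        return simplify(sympify(response) - sympify(key)) == 0
    except Exception:
        # Responses that cannot be parsed are scored as incorrect.
        return False

# "2*(x + 1)", "2*x + 2", and "x + x + 2" are different surface forms of the
# same correct answer; "2*x + 1" is not.
for answer in ["2*(x + 1)", "2*x + 2", "x + x + 2", "2*x + 1"]:
    print(answer, is_equivalent(answer, "2*x + 2"))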