Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 5 |
Since 2016 (last 10 years) | 7 |
Since 2006 (last 20 years) | 10 |
Descriptor
Computer Assisted Testing | 23 |
Models | 23 |
Test Construction | 8 |
Adaptive Testing | 6 |
Test Items | 6 |
Accuracy | 4 |
Comparative Analysis | 4 |
Item Response Theory | 4 |
Scoring | 4 |
Student Evaluation | 4 |
Algorithms | 3 |
Source
Grantee Submission | 3 |
International Educational… | 3 |
Online Submission | 2 |
AERA Online Paper Repository | 1 |
International Association for… | 1 |
Pearson | 1 |
Author
Doewes, Afrizal | 2 |
Saxena, Akrati | 2 |
Albacete, Patricia | 1 |
Allen, Laura K. | 1 |
Allen, Nancy L. | 1 |
Ashish Gurung | 1 |
Benson, Jeri | 1 |
Bizot, Elizabeth B. | 1 |
Boretz, Harold F. | 1 |
Botarleanu, Robert-Mihai | 1 |
Brooks, Kit | 1 |
Publication Type
Speeches/Meeting Papers | 23 |
Reports - Research | 12 |
Reports - Evaluative | 8 |
Opinion Papers | 2 |
Reports - Descriptive | 2 |
Information Analyses | 1 |
Numerical/Quantitative Data | 1 |
Education Level
Secondary Education | 3 |
Junior High Schools | 2 |
Middle Schools | 2 |
Adult Education | 1 |
Elementary Secondary Education | 1 |
High Schools | 1 |
Higher Education | 1 |
Audience
Researchers | 1 |
Location
Arkansas | 1 |
Laws, Policies, & Programs
No Child Left Behind Act 2001 | 1 |
Assessments and Surveys
California Achievement Tests | 1 |
Cattell Culture Fair… | 1 |
Coopersmith Self Esteem… | 1 |
Medical College Admission Test | 1 |
National Assessment of… | 1 |
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Doewes, Afrizal; Kurdhi, Nughthoh Arfawi; Saxena, Akrati – International Educational Data Mining Society, 2023
Automated Essay Scoring (AES) tools aim to improve the efficiency and consistency of essay scoring by using machine learning algorithms. In the existing research on this topic, most researchers agree that human-automated score agreement remains the benchmark for assessing the accuracy of machine-generated scores. To measure the performance of…
Descriptors: Essays, Writing Evaluation, Evaluators, Accuracy
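A brief illustration of the agreement benchmark this abstract refers to: the specific statistic is not named here, but quadratic weighted kappa (QWK) is the human-automated agreement measure most AES studies report. A minimal sketch, assuming scikit-learn and invented example scores:

```python
# Hypothetical illustration: quadratic weighted kappa (QWK) between human
# and machine essay scores. The paper's exact metric is not specified in
# the abstract; QWK is simply the agreement measure most AES work reports.
from sklearn.metrics import cohen_kappa_score

human_scores   = [2, 3, 4, 1, 3, 2, 4, 0]   # invented rubric scores (0-4)
machine_scores = [2, 3, 3, 1, 4, 2, 4, 1]

qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
print(f"QWK agreement: {qwk:.3f}")
```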
Zhang, Mengxue; Heffernan, Neil; Lan, Andrew – International Educational Data Mining Society, 2023
Automated scoring of student responses to open-ended questions, including short-answer questions, has great potential to scale to a large number of responses. Recent approaches for automated scoring rely on supervised learning, i.e., training classifiers or fine-tuning language models on a small number of responses with human-provided score…
Descriptors: Scoring, Computer Assisted Testing, Mathematics Instruction, Mathematics Tests
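As a rough illustration of the supervised-scoring setup the abstract describes (training classifiers on a small number of human-scored responses), the sketch below uses a TF-IDF plus logistic-regression baseline; the paper itself works with fine-tuned language models, and all data here is invented.

```python
# Minimal baseline sketch of supervised response scoring on a small set of
# human-scored answers. Not the authors' model; only shows the general setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

answers = ["the slope is rise over run",
           "slope equals change in y over change in x",
           "i do not know",
           "it is the steepness of the line"]
scores  = [1, 1, 0, 1]                      # human-provided scores

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(answers, scores)
print(model.predict(["rise divided by run"]))   # predicted score for a new answer
```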
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary being manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software
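One simple signal a summary scorer of this kind might draw on is lexical overlap between the summary and its reference text; the sketch below computes TF-IDF cosine similarity and is only an illustrative feature, not the authors' model.

```python
# Illustrative feature only: cosine similarity between a summary and its
# reference text, one signal an automated summary scorer might use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
summary   = "Plants use light to make glucose, storing energy chemically."

tfidf = TfidfVectorizer().fit([reference, summary])
vecs = tfidf.transform([reference, summary])
print(cosine_similarity(vecs[0], vecs[1])[0, 0])   # overlap score in [0, 1]
```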
Schack, Edna O.; Dueber, David; Thomas, Jonathan Norris; Fisher, Molly H.; Jong, Cindy – AERA Online Paper Repository, 2019
Scoring of teachers' noticing responses is typically burdened with rater bias and reliance upon interrater consensus. The authors sought to make the scoring process more objective, equitable, and generalizable. The development process began with a description of response characteristics for each professional noticing component disconnected from…
Descriptors: Models, Teacher Evaluation, Observation, Bias
Doewes, Afrizal; Saxena, Akrati; Pei, Yulong; Pechenizkiy, Mykola – International Educational Data Mining Society, 2022
In Automated Essay Scoring (AES) systems, many previous works have studied group fairness using the demographic features of essay writers. However, individual fairness also plays an important role in fair evaluation and has not yet been explored. Introduced by Dwork et al., the fundamental concept of individual fairness is "similar people…
Descriptors: Scoring, Essays, Writing Evaluation, Comparative Analysis
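The Dwork et al. notion the abstract cites, that similar people should be treated similarly, can be phrased as a pairwise consistency check: essays that are close under some similarity metric should receive close scores. A hypothetical sketch, with the distance function, threshold, and score tolerance all assumed:

```python
# Hypothetical individual-fairness check in the Dwork et al. sense: count
# pairs of essays that are similar (distance < eps) yet receive scores that
# differ by more than delta. Metric, eps, and delta are assumptions.
from itertools import combinations

def individual_fairness_violations(essays, distance, scores, eps=0.1, delta=1.0):
    violations = 0
    for i, j in combinations(range(len(essays)), 2):
        if distance(essays[i], essays[j]) < eps and abs(scores[i] - scores[j]) > delta:
            violations += 1
    return violations
```

Here `essays` could be any representation (for example, embedding vectors) and `distance` any metric over that representation.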
Albacete, Patricia; Silliman, Scott; Jordan, Pamela – Grantee Submission, 2017
Intelligent tutoring systems (ITS), like human tutors, try to adapt to students' knowledge levels so that instruction is tailored to their needs. One aspect of this adaptation relies on the ability to have an understanding of the student's initial knowledge so as to build on it, avoiding teaching what the student already knows and focusing on…
Descriptors: Intelligent Tutoring Systems, Knowledge Level, Multiple Choice Tests, Computer Assisted Testing
Shin, Chingwei David; Chien, Yuehmei; Way, Walter Denny – Pearson, 2012
Content balancing is one of the most important components of computerized adaptive testing (CAT), especially in K-12 large-scale tests, where a complex constraint structure is required to cover a broad spectrum of content. The purpose of this study is to compare the weighted penalty model (WPM) and the weighted deviation method (WDM) under…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Test Content, Models
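For readers unfamiliar with content balancing, the sketch below shows a generic penalty-based item-selection rule that trades item information against over-representation of a content area; it is not the weighted penalty model or weighted deviation method the study compares, and the weights and targets are illustrative.

```python
# Generic sketch of penalty-based content balancing in CAT item selection,
# not the WPM or WDM the study evaluates. Assumes every item's content area
# appears in the target proportions.
def select_item(items, administered, targets, penalty_weight=0.5):
    """items: dicts with 'info' (item information at the current ability) and
    'area' (content area); targets: desired proportion of items per area."""
    n = max(len(administered), 1)
    counts = {a: sum(1 for it in administered if it["area"] == a) for a in targets}

    def utility(item):
        overexposure = counts[item["area"]] / n - targets[item["area"]]
        return item["info"] - penalty_weight * max(overexposure, 0.0)

    return max(items, key=utility)
```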
Schifter, Catherine C.; Carey, Martha – International Association for Development of the Information Society, 2014
The No Child Left Behind (NCLB) legislation spawned a plethora of standardized testing services for all the high-stakes testing required by the law. We argue that one-size-fits-all assessments disadvantage students in the USA who are English Language Learners, as well as students with limited economic resources, special needs, and not reading on…
Descriptors: Standardized Tests, Models, Evaluation Methods, Educational Legislation
Nafukho, Fredrick M.; Graham, Carroll M.; Brooks, Kit – Online Submission, 2008
This study was designed to determine the degree of use and level of client satisfaction with professional development and educational services, and to identify suggestions for improving those services. Results from a mixed methodology approach indicated moderate to high levels of satisfaction in two program areas and moderate to high levels of…
Descriptors: Participant Satisfaction, Professional Development, Human Resources, Technical Support
Yan, Duanli; Lewis, Charles; Stocking, Martha – 1998
It is unrealistic to suppose that standard item response theory (IRT) models will be appropriate for all new and currently considered computer-based tests. In addition to developing new models, researchers will need to give some attention to the possibility of constructing and analyzing new tests without the aid of strong models. Computerized…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Response Theory
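For context, one of the "standard IRT models" the abstract refers to is the two-parameter logistic (2PL) model; a minimal sketch of its response probability:

```python
# The two-parameter logistic (2PL) IRT model: probability of a correct
# response given ability theta, item discrimination a, and difficulty b.
import math

def p_correct_2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

print(p_correct_2pl(theta=0.5, a=1.2, b=0.0))   # about 0.65 for this item
```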
Murphy, Patricia A. – 1985
The videodisc, audiodisc, overlapping instruction, mapping, adaptive testing strategies, intricate instructional branching, and complex media selection models are common expectations for quality courseware today. A two-team development structure, consisting of an instructional development team and a test development team, is proposed. This paper…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Courseware, Models
Hiscox, Michael D. – 1981
This paper argues that the most important role the intelligent videodisc can fulfill is to provide a mechanism for effectively integrating testing and instruction. This integration will produce at least four important benefits: (1) increased learning by the student, (2) more interesting instructional materials, (3) gains in the efficiency of…
Descriptors: Computer Assisted Testing, Diagnostic Teaching, Individual Differences, Instructional Design
Brusilovsky, Peter; Miller, Philip – 1999
This paper provides a technology-based review of World Wide Web-based testing technologies. It suggests an evaluation framework that could be used by practitioners in Web-based education to understand and compare features available in various Web-based testing systems. In order to compare existing options, the life cycle of a question in Web-based…
Descriptors: Computer Assisted Testing, Distance Education, Educational Technology, Evaluation Criteria
Bizot, Elizabeth B.; Goldman, Steven H. – 1994
A study was conducted to evaluate the effects of choice of item response theory (IRT) model, parameter calibration group, starting ability estimate, and stopping criterion on the conversion of an 80-item vocabulary test to computer adaptive format. Three parameter calibration groups were tested: (1) a group of 1,000 high school seniors, (2) a…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
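The moving parts this study varies (IRT model, starting ability estimate, stopping criterion) can be seen in a bare-bones adaptive testing loop. The sketch below assumes a 2PL model, maximum-information item selection, and a crude grid-based EAP ability estimate; none of this is the study's actual configuration.

```python
# Minimal CAT loop sketch: start from theta0, pick the most informative item,
# re-estimate ability, and stop when the standard error falls below se_stop.
import math

def p(theta, a, b):                       # 2PL response probability
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):                    # item information at theta
    pr = p(theta, a, b)
    return a * a * pr * (1.0 - pr)

def eap(responses, grid):                 # grid-based EAP with a flat prior
    posts = []
    for th in grid:
        like = 1.0
        for (a, b), u in responses:
            pr = p(th, a, b)
            like *= pr if u else (1.0 - pr)
        posts.append(like)
    z = sum(posts)
    theta = sum(th * w for th, w in zip(grid, posts)) / z
    var = sum((th - theta) ** 2 * w for th, w in zip(grid, posts)) / z
    return theta, math.sqrt(var)

def run_cat(pool, answer, theta0=0.0, se_stop=0.3, max_items=30):
    """pool: list of (a, b) item parameters; answer: callable returning 0/1."""
    grid = [i / 10.0 for i in range(-40, 41)]
    theta, se, responses = theta0, float("inf"), []
    while pool and se > se_stop and len(responses) < max_items:
        item = max(pool, key=lambda ab: info(theta, *ab))   # max information
        pool.remove(item)
        responses.append((item, answer(item)))              # administer item
        theta, se = eap(responses, grid)
    return theta, se
```

A conversion study like this one would calibrate each item's a and b on a calibration sample and then vary the starting estimate theta0 and the stopping threshold se_stop, which is what the abstract describes.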