Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 0
  Since 2006 (last 20 years): 7
Descriptor
  Computer Assisted Testing: 9
  Instructional Effectiveness: 9
  Test Items: 9
  Educational Technology: 5
  Student Evaluation: 5
  Adaptive Testing: 4
  Computer Software: 4
  Computer System Design: 3
  Difficulty Level: 3
  Evaluation Methods: 3
  Internet: 3
Source
  British Journal of…: 2
  Computers & Education: 2
  International Journal of…: 1
  International Journal of…: 1
  Journal of Computer Assisted…: 1
  Journal of Research in…: 1
Publication Type
  Journal Articles: 8
  Reports - Evaluative: 5
  Reports - Research: 3
  Reports - Descriptive: 1
  Speeches/Meeting Papers: 1
Education Level
  Elementary Secondary Education: 4
  Elementary Education: 3
  Higher Education: 3
  Postsecondary Education: 2
  Secondary Education: 2
  Early Childhood Education: 1
  Grade 5: 1
  High Schools: 1
  Junior High Schools: 1
  Middle Schools: 1
DeBoer, George E.; Quellmalz, Edys S.; Davenport, Jodi L.; Timms, Michael J.; Herrmann-Abell, Cari F.; Buckley, Barbara C.; Jordan, Kevin A.; Huang, Chun-Wei; Flanagan, Jean C. – Journal of Research in Science Teaching, 2014
Online testing holds much promise for assessing students' complex science knowledge and inquiry skills. In the current study, we examined the comparative effectiveness of assessment tasks and test items presented in online modules that used either a static, active, or interactive modality. A total of 1,836 students from the classrooms of 22 middle…
Descriptors: Computer Assisted Testing, Test Items, Interaction, Middle School Students
Klinkenberg, S.; Straatemeier, M.; van der Maas, H. L. J. – Computers & Education, 2011
In this paper we present a model for computerized adaptive practice and monitoring. This model is used in the Maths Garden, a web-based monitoring system, which includes a challenging web environment for children to practice arithmetic. Using a new item response model based on the Elo (1978) rating system and an explicit scoring rule, estimates of…
Descriptors: Test Items, Reaction Time, Scoring, Probability
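The Elo-style update behind a system like the one this abstract describes can be sketched roughly as follows. This is an illustrative simplification: the K-factor value and the use of a plain right/wrong score are assumptions, and the paper's actual scoring rule also incorporates response time, which is omitted here.

```python
import math

def elo_update(ability, difficulty, correct, k=0.4):
    """One Elo-style update after a student answers an item.

    ability    -- current estimate of the student's ability
    difficulty -- current estimate of the item's difficulty
    correct    -- 1 if the answer was right, 0 if wrong
    k          -- step size (illustrative value, not from the paper)
    """
    # Expected probability of a correct answer under the logistic model
    expected = 1.0 / (1.0 + math.exp(difficulty - ability))
    # Student and item estimates move in opposite directions
    ability_new = ability + k * (correct - expected)
    difficulty_new = difficulty - k * (correct - expected)
    return ability_new, difficulty_new
```

After a correct answer the ability estimate rises and the item's difficulty estimate falls; an incorrect answer does the reverse, so both estimates are refined continuously as children practice.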
Sainsbury, Marian; Benton, Tom – British Journal of Educational Technology, 2011
Computer-based testing, or e-assessment, has the potential to deliver immediate results for the benefit of schools. This paper describes a project that aimed to exploit this potential by designing e-assessments where the results were intended for use by teachers in planning the next steps in teaching and learning: low-stakes, formative assessment.…
Descriptors: Test Results, Test Items, Student Evaluation, Early Reading
Chatzopoulou, D. I.; Economides, A. A. – Journal of Computer Assisted Learning, 2010
This paper presents Programming Adaptive Testing (PAT), a Web-based adaptive testing system for assessing students' programming knowledge. PAT was used by 73 students in two high school programming classes. Its question bank comprises 443 questions, each classified into one of three difficulty levels. In PAT, the levels of…
Descriptors: Student Evaluation, Prior Learning, Programming, High School Students
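A three-level adaptive scheme of the kind this abstract outlines could work along these lines. The promotion/demotion rule and the toy question bank below are hypothetical illustrations, not PAT's published algorithm or data.

```python
import random

# Hypothetical question bank: each difficulty level maps to a pool of
# question ids (stand-ins for the real bank's classified questions).
BANK = {1: ["q1", "q2"], 2: ["q3", "q4"], 3: ["q5", "q6"]}

def next_level(level, correct):
    """Move up one level after a correct answer and down one after an
    incorrect answer, clamped to the three difficulty levels."""
    return min(3, level + 1) if correct else max(1, level - 1)

def pick_question(level):
    """Draw the next question from the pool at the current level."""
    return random.choice(BANK[level])
```

The point of such a rule is that each student quickly converges on questions near their own skill level instead of working through the bank in a fixed order.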
Jordan, Sally; Mitchell, Tom – British Journal of Educational Technology, 2009
A natural language based system has been used to author and mark short-answer free-text assessment tasks. Students attempt the questions online and are given tailored and relatively detailed feedback on incorrect and incomplete responses, and have the opportunity to repeat the task immediately so as to learn from the feedback provided. The answer…
Descriptors: Feedback (Response), Test Items, Natural Language Processing, Teaching Methods
Koong, Chorng-Shiuh; Wu, Chi-Ying – Computers & Education, 2010
The theory of multiple intelligences, in both its hypothesis and its implementation, has risen to prominence among instructional methodologies. Meanwhile, pedagogical theories and concepts need more alternative and interactive assessments to demonstrate their value (Kinugasa, Yamashita, Hayashi, Tominaga, & Yamasaki, 2005). In general,…
Descriptors: Multiple Intelligences, Test Items, Grading, Programming
Chang, Wen-Chih; Yang, Hsuan-Che; Shih, Timothy K.; Chao, Louis R. – International Journal of Distance Education Technologies, 2009
E-learning provides a convenient and efficient way to learn. Formative assessment not only guides students in instruction and learning and diagnoses skill or knowledge gaps, but also measures progress and supports evaluation. An efficient and convenient formative assessment system is therefore a key component of e-learning. However, most e-learning…
Descriptors: Electronic Learning, Student Evaluation, Formative Evaluation, Educational Objectives

Clariana, Roy B. – International Journal of Instructional Media, 2004
This investigation considers the instructional effects of color as an overarching context variable when learning from computer displays. Its purpose is to examine the posttest retrieval effects of color as a local, extra-item, non-verbal lesson context variable for constructed-response versus multiple-choice posttest…
Descriptors: Instructional Effectiveness, Graduate Students, Color, Computer System Design
Siskind, Theresa G.; And Others – 1992
The instructional validity of computer-administered tests was studied, focusing on whether differences in test scores and item behavior are a function of instructional mode (computer versus non-computer). In the first of three studies, performance test scores for approximately 400 high school students in 1990-91 for tasks accomplished with the…
Descriptors: Comparative Testing, Comprehension, Computer Assisted Instruction, Computer Assisted Testing