| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 10 |
| Methods Research | 10 |
| Comparative Analysis | 4 |
| Comparative Testing | 4 |
| Test Format | 4 |
| Test Items | 4 |
| Educational Technology | 3 |
| Educational Testing | 3 |
| Elementary School Students | 3 |
| Printed Materials | 3 |
| Student Evaluation | 3 |
| Source | Records |
| --- | --- |
| Journal of Technology, Learning, and Assessment | 3 |
| Journal of Educational Computing Research | 2 |
| Applied Psychological Measurement | 1 |
| Educational and Psychological Measurement | 1 |
| International Journal of Educology | 1 |
| Journal of Educational Measurement | 1 |
| National Center for Research on Evaluation, Standards, and Student Testing (CRESST) | 1 |
| Author | Records |
| --- | --- |
| Allen, Nancy | 1 |
| Anderson, Paul S. | 1 |
| Ariel, Adelaide | 1 |
| Bennett, Randy Elliott | 1 |
| Chuang, San-hui | 1 |
| Chung, Gregory K. W. K. | 1 |
| Dexter, Sara L. | 1 |
| Doering, Aaron | 1 |
| Drake, Samuel | 1 |
| Enders, Craig K. | 1 |
| Green, Sylvia | 1 |
| Publication Type | Records |
| --- | --- |
| Journal Articles | 9 |
| Reports - Research | 7 |
| Reports - Evaluative | 3 |
| Education Level | Records |
| --- | --- |
| Elementary Education | 2 |
| Elementary Secondary Education | 2 |
| Higher Education | 2 |
| Grade 8 | 1 |
| Postsecondary Education | 1 |
| Location | Records |
| --- | --- |
| Netherlands | 1 |
| Assessments and Surveys | Records |
| --- | --- |
| Graduate Record Examinations | 1 |
Ariel, Adelaide; Veldkamp, Bernard P.; van der Linden, Wim J. – Journal of Educational Measurement, 2004
Preventing items from being over- or underexposed is one of the main problems in computerized adaptive testing. Though the problem of overexposed items can be solved using probabilistic item-exposure control methods, such methods are unable to deal with the problem of underexposed items. Using a system of rotating item pools,…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Test Construction
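The probabilistic exposure-control method the abstract alludes to is commonly implemented in the Sympson-Hetter style: each item carries an exposure parameter k in [0, 1], and a selected item is administered only with probability k. A minimal sketch under that assumption; the candidate ordering, item ids, and k values below are invented for illustration and are not the article's rotating-pool method:

```python
import random

def select_item(candidates, exposure_k, rng=random):
    """Sympson-Hetter-style probabilistic exposure control (a sketch).

    `candidates` is assumed to be a list of item ids ordered from most
    to least informative at the current ability estimate; `exposure_k`
    maps each item id to its exposure-control parameter in [0, 1].
    """
    for item in candidates:
        # Administer the selected item only with probability k;
        # otherwise skip it for this examinee and try the next-best item.
        if rng.random() <= exposure_k.get(item, 1.0):
            return item
    # Fall back to the least informative candidate if every draw failed.
    return candidates[-1]

# Hypothetical usage: item "q7" is over-exposed, so its k is low.
print(select_item(["q7", "q12", "q3"], {"q7": 0.2, "q12": 0.9, "q3": 1.0}))
```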
O'Neil, Harold F.; Chuang, San-hui; Chung, Gregory K. W. K. – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2004
Collaborative problem-solving skills are considered necessary skills for success in today's world of work and school. Cooperative learning refers to learning environments in which small groups of people work together to achieve a common goal, and problem solving is defined as "cognitive processing directed at achieving a common goal when no…
Descriptors: Computer Assisted Testing, Cooperative Learning, Problem Solving, Skill Analysis
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential
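As background to the abstract above: LSA reduces a term-document matrix with a truncated SVD and compares documents by cosine similarity in the reduced space. A minimal sketch using scikit-learn; the toy essays and the choice of two dimensions are invented for illustration, not the article's setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for graded reference essays plus one new essay.
essays = [
    "photosynthesis converts light energy into chemical energy",
    "plants use sunlight to make sugar from carbon dioxide",
    "the stock market fell sharply on Tuesday",
    "leaves capture sunlight and produce glucose",
]

# LSA = weighted term-document matrix followed by a truncated SVD.
tfidf = TfidfVectorizer().fit_transform(essays)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Cosine similarity in the reduced space approximates semantic similarity.
print(cosine_similarity(lsa[:1], lsa[1:]))  # essay 0 vs. the other three
```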
Riedel, Eric; Dexter, Sara L.; Scharber, Cassandra; Doering, Aaron – Journal of Educational Computing Research, 2006
Research on computer-based writing evaluation has only recently focused on the potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy…
Descriptors: Preservice Teacher Education, Writing Evaluation, Case Studies, Formative Evaluation
Nietfeld, John L.; Enders, Craig K.; Schraw, Gregory – Educational and Psychological Measurement, 2006
Researchers studying monitoring accuracy currently use two different indexes to estimate accuracy: relative accuracy and absolute accuracy. Using Monte Carlo procedures, the authors compared the distributional properties of two measures of monitoring accuracy that fit within these categories. They manipulated the accuracy of judgments (i.e., chance…
Descriptors: Monte Carlo Methods, Test Items, Computation, Metacognition
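Relative and absolute accuracy are commonly operationalized as, respectively, a Goodman-Kruskal gamma between item-level confidence judgments and correctness, and the mean squared deviation between the two. A sketch under that assumption; the judgment data are invented, and the article's exact indexes may differ:

```python
from itertools import combinations

judgments = [0.9, 0.6, 0.8, 0.3, 0.7]   # per-item confidence (0-1)
performance = [1, 0, 1, 0, 1]            # per-item correctness

# Relative accuracy: Goodman-Kruskal gamma over item pairs (do the
# confidence rankings track which items were answered correctly?).
concordant = discordant = 0
for (j1, p1), (j2, p2) in combinations(zip(judgments, performance), 2):
    s = (j1 - j2) * (p1 - p2)
    if s > 0:
        concordant += 1
    elif s < 0:
        discordant += 1
gamma = (concordant - discordant) / (concordant + discordant)

# Absolute accuracy: mean squared deviation between confidence and outcome.
absolute = sum((j - p) ** 2 for j, p in zip(judgments, performance)) / len(judgments)

print(f"relative (gamma) = {gamma:.2f}, absolute = {absolute:.2f}")
```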
Anderson, Paul S. – International Journal of Educology, 1988 (peer reviewed)
Seven formats of educational testing were compared according to student preferences/perceptions of how well each test method evaluates learning. Formats compared include true/false, multiple-choice, matching, multi-digit testing (MDT), fill-in-the-blank, short answer, and essay. Subjects were 1,440 university students. Results indicate that tests…
Descriptors: Achievement Tests, College Students, Comparative Analysis, Computer Assisted Testing
van den Bergh, Huub – Applied Psychological Measurement, 1990 (peer reviewed)
In this study, 590 third graders from 12 Dutch schools took 32 tests indicating 16 semantic Structure-of-Intellect (SI) abilities and 1 of 4 reading comprehension tests, involving either multiple-choice or open-ended items. Results indicate that item type for reading comprehension is congeneric with respect to SI abilities measured. (TJH)
Descriptors: Comparative Testing, Computer Assisted Testing, Construct Validity, Elementary Education
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
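The abstract does not name the DIF statistic used; one standard choice for flagging DIF between two groups (here, the two delivery media) is the Mantel-Haenszel common odds ratio computed within total-score strata. A minimal sketch with invented response data:

```python
from collections import defaultdict

# responses: (group, total_score, correct_on_studied_item)
# "cbt" = computer-based, "pbt" = paper-based; the data are made up.
responses = [
    ("cbt", 3, 1), ("cbt", 3, 0), ("pbt", 3, 1), ("pbt", 3, 1),
    ("cbt", 4, 1), ("cbt", 4, 1), ("pbt", 4, 1), ("pbt", 4, 0),
]

# Build a 2x2 table (group x correct) within each total-score stratum.
tables = defaultdict(lambda: [[0, 0], [0, 0]])
for group, score, correct in responses:
    row = 0 if group == "cbt" else 1
    tables[score][row][1 - correct] += 1  # col 0 = correct, col 1 = wrong

# Mantel-Haenszel common odds ratio across strata; a value near 1.0
# means the item behaves the same in both delivery media.
num = den = 0.0
for (a, b), (c, d) in tables.values():
    n = a + b + c + d
    num += a * d / n
    den += b * c / n
print(f"MH odds ratio = {num / den:.2f}")
```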
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question-answering strategies. In this project, 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
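A facility value is simply the proportion of examinees answering an item correctly, so mode effects show up as differences in per-item facility between the on-line and paper versions. A toy illustration with invented counts (only the 104-pupil sample size comes from the abstract):

```python
# Facility value = proportion of examinees answering an item correctly.
# Hypothetical per-item correct counts for the two delivery modes.
on_line = {"q1": 80, "q2": 55}
paper = {"q1": 86, "q2": 57}
n_pupils = 104  # matched samples, as in the abstract

for item in on_line:
    f_cbt = on_line[item] / n_pupils
    f_pbt = paper[item] / n_pupils
    print(f"{item}: on-line {f_cbt:.2f}, paper {f_pbt:.2f}, diff {f_cbt - f_pbt:+.2f}")
```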
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness
