Chung, Gregory K. W. K.; Herl, Howard E.; Klein, Davina C. D.; O'Neil, Harold F., Jr.; Schacter, John – 1997
This report examines issues in the scale-up of assessment software from the Center for Research on Evaluation, Standards, and Student Testing (CRESST). "Scale-up" is used in a metaphorical sense, meaning adding new assessment tools to CRESST's assessment software. During the past several years, CRESST has been developing and evaluating a…
Descriptors: Computer Assisted Testing, Computer Software, Concept Mapping, Educational Assessment
Lee, Yong-Won – 2001
An essay test is now an integral part of the computer-based Test of English as a Foreign Language (TOEFL-CBT). This paper provides a brief overview of the current TOEFL-CBT essay test, describes the operational procedures for essay scoring, including the Online Scoring Network (OSN) of the Educational Testing Service (ETS), and discusses major…
Descriptors: Computer Assisted Testing, English (Second Language), Essay Tests, Interrater Reliability
Peer reviewed
Haller, Otto; Edgington, Eugene S. – Perceptual and Motor Skills, 1982
Current scoring procedures depend on unrealistic assumptions about subjects' performance on the rod-and-frame test. A procedure is presented which corrects for constant error, is sensitive to response strategy and consistency, and examines qualitative and quantitative aspects of performance and individual differences in laterality bias as defined…
Descriptors: Computer Assisted Testing, Cues, Error of Measurement, Individual Differences
Peer reviewed
Bennett, Randy Elliot; Steffen, Manfred; Singley, Mark Kevin; Morley, Mary; Jacquemin, Daniel – Journal of Educational Measurement, 1997
Scoring accuracy and item functioning were studied for an open-ended response type test in which correct answers can take many different surface forms. Results with 1,864 graduate school applicants showed automated scoring to approximate the accuracy of multiple-choice scoring. Items functioned similarly to other item types being considered. (SLD)
Descriptors: Adaptive Testing, Automation, College Applicants, Computer Assisted Testing
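The scoring problem this abstract describes, where a correct answer can take many different surface forms, is commonly handled by reducing each response to a canonical form before comparison. The sketch below illustrates that general technique for mathematical expressions using SymPy; it is an assumption-laden illustration, not the scoring engine the study evaluated.

# Symbolic-equivalence scoring: a response earns credit if its
# difference from the key simplifies to zero, so "2*(x + 1)" and
# "2*x + 2" score identically despite different surface forms.
from sympy import simplify, sympify, SympifyError

def score_response(response: str, key: str) -> int:
    """Return 1 if the response is algebraically equivalent to the key."""
    try:
        return int(simplify(sympify(response) - sympify(key)) == 0)
    except SympifyError:
        return 0  # unparseable responses earn no credit

print(score_response("2*(x + 1)", "2*x + 2"))  # 1
print(score_response("2*x + 1", "2*x + 2"))    # 0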
Peer reviewed
James, Cindy L. – Assessing Writing, 2006
How do scores from writing samples generated by computerized essay scorers compare to those generated by "untrained" human scorers, and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample…
Descriptors: Writing (Composition), Predictive Validity, Scoring, Validity
Potenza, Maria T.; Stocking, Martha L. – 1994
A multiple choice test item is identified as flawed if it has no single best answer. In spite of extensive quality control procedures, the administration of flawed items to test-takers is inevitable. Common strategies for dealing with flawed items in conventional testing, grounded in the principle of fairness to test-takers, are reexamined in the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Multiple Choice Tests, Scoring
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
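To make the routing idea concrete, here is a minimal Python sketch of a two-stage design: everyone takes the same stage-one test, and the score determines which difficulty level is administered at stage two. The cutoff proportions and module contents are hypothetical, not values from the Schnipke and Reese study.

# Two-stage routing with hypothetical cutoffs and modules.
STAGE_TWO = {
    "easy":   ["E01", "E02", "E03"],
    "medium": ["M01", "M02", "M03"],
    "hard":   ["H01", "H02", "H03"],
}

def route(stage_one_score: int, max_score: int) -> list[str]:
    """Return the stage-two module for a given stage-one score."""
    p = stage_one_score / max_score  # proportion correct at stage one
    if p < 0.40:
        return STAGE_TWO["easy"]
    if p < 0.70:
        return STAGE_TWO["medium"]
    return STAGE_TWO["hard"]

print(route(16, 20))  # proportion 0.8 -> hard module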
Stricker, Lawrence J.; Alderton, David L. – 1991
The usefulness of response latency data for biographical inventory items was assessed for improving the inventory's validity. Focus was on assessing whether weighting item scores on the basis of their latencies improves the predictive validity of the inventory's total score. A total of 120 items from the Armed Services Applicant Profile (ASAP)…
Descriptors: Adults, Biographical Inventories, Computer Assisted Testing, Males
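One way to operationalize latency weighting is to scale each keyed item score by a weight that decreases with standardized response time, on the premise that faster endorsements are more diagnostic. The logistic weighting function below is purely an illustrative assumption, not necessarily the scheme Stricker and Alderton tested.

# Latency-weighted total score: each keyed item score is multiplied
# by a logistic weight of its standardized latency, so fast responses
# count nearly fully and slow ones less.
import math
import statistics

def latency_weighted_total(scores: list[int], latencies: list[float]) -> float:
    mean = statistics.mean(latencies)
    sd = statistics.stdev(latencies)
    total = 0.0
    for score, rt in zip(scores, latencies):
        z = (rt - mean) / sd
        total += score * (1.0 / (1.0 + math.exp(z)))  # fast -> ~1, slow -> ~0
    return total

print(latency_weighted_total([1, 1, 0, 1], [2.1, 5.8, 3.0, 1.2]))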
De Ayala, R. J.; And Others – 1990
Computerized adaptive testing procedures (CATPs) based on the graded response method (GRM) of F. Samejima (1969) and the partial credit model (PCM) of G. Masters (1982) were developed and compared. Both programs used maximum likelihood estimation of ability, and item selection was conducted on the basis of information. Two simulated data sets, one…
Descriptors: Ability Identification, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
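The core loop of such a procedure is to estimate ability, then administer the unanswered item with maximum information at that estimate. The sketch below implements the information computation and selection step for Masters' (1982) partial credit model; the item bank is hypothetical, and the study's programs additionally estimated ability by maximum likelihood.

import math

def pcm_probs(theta: float, deltas: list[float]) -> list[float]:
    """Category probabilities under Masters' partial credit model.
    deltas[j] is the step difficulty for moving from category j to j+1."""
    cum, s = [0.0], 0.0  # cumulative sums of (theta - delta_j); category 0 gets the empty sum
    for d in deltas:
        s += theta - d
        cum.append(s)
    exps = [math.exp(c) for c in cum]
    total = sum(exps)
    return [e / total for e in exps]

def pcm_information(theta: float, deltas: list[float]) -> float:
    """Fisher information = variance of the item score under the PCM."""
    probs = pcm_probs(theta, deltas)
    mean = sum(k * p for k, p in enumerate(probs))
    return sum((k - mean) ** 2 * p for k, p in enumerate(probs))

def select_next_item(theta_hat: float, bank: dict[str, list[float]]) -> str:
    """Pick the item with maximum information at the current ability estimate."""
    return max(bank, key=lambda i: pcm_information(theta_hat, bank[i]))

# Hypothetical three-item bank of step difficulties.
bank = {"item1": [-1.0, 0.0], "item2": [0.0, 1.0], "item3": [-0.5, 0.5]}
print(select_next_item(0.2, bank))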
Merrill, Beverly; Peterson, Sarah – 1986
When the Mesa, Arizona, Public Schools initiated an ambitious writing instruction program in 1978, two assessments based on student writing samples were developed. The first is based on a ninth-grade proficiency test. If the student does not pass the test, high school remediation is provided. After 1987, students must pass this test in order to…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Graduation Requirements, Holistic Evaluation
Vale, C. David – 1985
Specifying a computerized adaptive test, like specifying computer-assisted instruction, is easier, and can be done by personnel who are not proficient in computer programming, when an authoring language is provided. The Minnesota Computerized Adaptive Testing Language (MCATL) is an authoring language specifically designed for…
Descriptors: Adaptive Testing, Authoring Aids (Programing), Branching, Computer Assisted Instruction
Peer reviewed
Sandene, Brent; Horkay, Nancy; Bennett, Randy Elliot; Allen, Nancy; Braswell, James; Kaplan, Bruce; Oranje, Andreas – National Center for Education Statistics, 2005
This publication presents the reports from two studies, Math Online (MOL) and Writing Online (WOL), part of the National Assessment of Educational Progress (NAEP) Technology-Based Assessment (TBA) project. Funded by the National Center for Education Statistics (NCES), the Technology-Based Assessment project is intended to explore the use of new…
Descriptors: Grade 8, Statistical Analysis, Scoring, Familiarity
O'Neil, Harold F., Jr.; Schacter, John – 1997
This document reviews several theoretical frameworks of problem-solving, provides a definition of the construct, suggests ways of measuring the construct, focuses on issues for assessment, and provides specifications for the computer-based assessment of problem solving. As defined in the model of the Center for Research on Evaluation, Standards,…
Descriptors: Computer Assisted Testing, Computer Software, Criteria, Educational Assessment
Martinez, Michael E.; And Others – 1990
Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…
Descriptors: Automation, Computer Assisted Testing, Educational Technology, Multiple Choice Tests
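Automatic scoring of figural response items typically reduces to geometric checks on the examinee's marks. The sketch below shows one plausible check, whether a marked point falls inside a keyed rectangular region; the item geometry and key are hypothetical, not taken from the paper's procedures.

# Scoring a figural response item: the examinee marks a point on a
# figure, and credit is given if the mark falls inside the keyed
# region. Regions are axis-aligned rectangles (x1, y1, x2, y2);
# the key below is hypothetical.
KEY_REGION = (120, 40, 180, 90)  # correct region in screen coordinates

def score_mark(x: float, y: float, region=KEY_REGION) -> int:
    x1, y1, x2, y2 = region
    return int(x1 <= x <= x2 and y1 <= y <= y2)

print(score_mark(150, 65))  # 1: mark inside the keyed region
print(score_mark(10, 10))   # 0: mark outside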
Peer reviewed
Aiken, Lewis R. – Educational and Psychological Measurement, 1996
This article describes a set of 11 menu-driven procedures, written in BASICA for MS-DOS-based microcomputers, for constructing several types of rating scales, attitude scales, and checklists, and for scoring responses to the constructed instruments. The uses of the program are described in detail. (SLD)
Descriptors: Attitude Measures, Check Lists, Computer Assisted Testing, Computer Software
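The original procedures were written in BASICA; a modern re-sketch of the scoring step for a Likert-type attitude scale might look like the following, where the reverse-keyed item positions are an illustrative assumption rather than anything taken from Aiken's program.

# Scoring a Likert-type attitude scale: responses run 1..n_points,
# and negatively worded items are reverse-keyed before summing.
REVERSED = {1, 3}  # zero-based indices of negatively worded items (assumed)

def scale_score(responses: list[int], n_points: int = 5) -> int:
    total = 0
    for i, r in enumerate(responses):
        total += (n_points + 1 - r) if i in REVERSED else r
    return total

print(scale_score([4, 2, 5, 1, 3]))  # 4 + 4 + 5 + 5 + 3 = 21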