Showing 436 to 450 of 635 results
Scharber, Cassandra; Dexter, Sara; Riedel, Eric – Journal of Technology, Learning, and Assessment, 2008
The purpose of this research is to analyze preservice teachers' use of and reactions to an automated essay scorer used within an online, case-based learning environment called ETIPS. Data analyzed include post-assignment surveys, a user log of students' actions within the cases, instructor-assigned scores on final essays, and interviews with four…
Descriptors: Test Scoring Machines, Essays, Student Experience, Automation
Peer reviewed
Ramakishnan, Sadhu Balasundaram; Ramadoss, Balakrishnan – International Journal on E-Learning, 2009
Over the past several decades, a wider range of assessment strategies has gained prominence in classrooms, including complex assessment items such as individual or group projects, student journals and other creative writing tasks, graphic/artistic representations of knowledge, clinical interviews, student presentations and performances, peer- and…
Descriptors: Evaluation Problems, Web Based Instruction, Program Effectiveness, Internet
Peer reviewed
Walker, N. William; Myrick, Carolyn Cobb – Journal of School Psychology, 1985
Ethical considerations in the use of computers in psychological testing and assessment are discussed. Existing ethics codes and standards that provide guidance to users of computerized test interpretation and report-writing programs are reviewed, and guidelines are suggested. Areas of appropriate use of computers in testing and assessment are explored.…
Descriptors: Accountability, Computer Assisted Testing, Confidentiality, Ethics
Peer reviewed
McMeen, George R.; Thorman, Joseph H. – Clearing House, 1981
Citing the importance to students of immediate feedback on multiple-choice test items, the authors discuss four electronic or mechanical devices for rapid test scoring. (SJL)
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Feedback, Multiple Choice Tests
Peer reviewed
Davey, Tim; And Others – Journal of Educational Measurement, 1997
The development and scoring of a recently introduced computer-based writing skills test is described. The test asks the examinee to edit a writing passage presented on a computer screen. Scoring difficulties are addressed through the combined use of option weighting and the sequential probability ratio test. (SLD)
Descriptors: Computer Assisted Testing, Educational Innovation, Probability, Scoring
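
The sequential probability ratio test (SPRT) mentioned in the Davey et al. abstract is a standard stopping rule for pass/fail classification testing. A minimal Python sketch follows; the mastery and non-mastery response probabilities (p1, p0) and the error rates are illustrative assumptions, not values from the article:

    import math

    def sprt(responses, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
        """Classify an examinee as pass/fail, or continue testing.
        p1/p0: probability a master/non-master answers an item correctly
        (illustrative values; the article does not report its parameters)."""
        upper = math.log((1 - beta) / alpha)   # decide "pass" (master)
        lower = math.log(beta / (1 - alpha))   # decide "fail" (non-master)
        llr = 0.0                              # running log-likelihood ratio
        for correct in responses:
            llr += math.log(p1 / p0) if correct else math.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "pass"
            if llr <= lower:
                return "fail"
        return "continue"  # evidence still inconclusive; present another item

    print(sprt([1, 1, 1, 1, 0, 1, 1, 1, 1, 1]))  # -> "pass" on the tenth item
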
Peer reviewed
Graham, Charles R.; Tripp, Tonya; Wentworth, Nancy – Journal of Educational Computing Research, 2009
This study explores the efforts at Brigham Young University to improve preservice candidates' technology integration using the Teacher Work Sample (TWS) as an assessment tool. Baseline data analyzed from 95 TWSs indicated that students were predominantly using technology for productivity and information-presentation purposes even though…
Descriptors: Field Instruction, Work Sample Tests, Technology Integration, Educational Technology
Peer reviewed
Hung, Pi-Hsia; Lin, Yu-Fen; Hwang, Gwo-Jen – Educational Technology & Society, 2010
Ubiquitous computing and mobile technologies provide a new perspective for designing innovative outdoor learning experiences. The purpose of this study is to propose a formative assessment design for integrating PDAs into ecology observations. Three learning activities were conducted in this study. An action research approach was applied to…
Descriptors: Foreign Countries, Feedback (Response), Action Research, Observation
Kump, Ann – 1992
Directions are given for scoring typing tests taken on a typewriter or on a computer using special software. The speed score (gross words per minute) is obtained by determining the total number of strokes typed, and dividing by 25. The accuracy score is obtained by comparing the examinee's test paper to the appropriate scoring key and counting the…
Descriptors: Computer Assisted Testing, Employment Qualifications, Guidelines, Job Applicants
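
The divide-by-25 rule in the Kump abstract is consistent with the common five-strokes-per-word convention applied to a five-minute test (5 strokes per word x 5 minutes = 25); that reading, and the word-by-word error count, are assumptions made for this Python sketch:

    def gross_wpm(total_strokes, minutes=5, strokes_per_word=5):
        # strokes / (5 * 5) reproduces the "divide by 25" rule for a 5-minute test
        return total_strokes / (strokes_per_word * minutes)

    def count_errors(typed, key):
        """Compare the examinee's text to the scoring key word by word.
        The abstract's exact error-counting rule is truncated, so this
        simple comparison stands in for it."""
        return sum(t != k for t, k in zip(typed.split(), key.split()))

    print(gross_wpm(1500))                                            # -> 60.0 gwpm
    print(count_errors("the quick brwn fox", "the quick brown fox"))  # -> 1
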
Anderson, Richard Ivan – 1980
Features of a probabilistic testing system that has been implemented on the "cerl" PLATO computer system are described. The key feature of the system is the manner in which an examinee responds to each test item; the examinee distributes probabilities among the alternatives of each item by positioning a small square on or within an…
Descriptors: Computer Assisted Testing, Data Collection, Feedback, Probability
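
The Anderson abstract does not say how the distributed probabilities are turned into a score; a logarithmic proper scoring rule is one standard choice for probabilistic tests, shown here purely as an illustration:

    import math

    def log_score(probs, keyed_index, floor=0.01):
        """Score one item as the log of the probability the examinee
        placed on the keyed alternative. A proper rule like this rewards
        reporting honest probabilities; the floor avoids log(0)."""
        return math.log(max(probs[keyed_index], floor))

    # Examinee puts 70% on option A, 20% on B, 10% on C; the key is A.
    print(log_score([0.7, 0.2, 0.1], keyed_index=0))  # about -0.357
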
Peer reviewed
Zatz, Joel L. – American Journal of Pharmaceutical Education, 1982
A method for the computer grading of pharmaceutical calculations exams, in which students convert their answers into scientific notation and enter their solutions on a mark-sense form, is described. A table listing student identification numbers, exam grades, and which problems were missed is then generated and posted. (Author/MLW)
Descriptors: Computation, Computer Assisted Testing, Computer Programs, Grading
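
A sketch of how such mark-sense answers in scientific notation might be machine-graded; the (mantissa, exponent) encoding and the 1% tolerance are assumptions for illustration, not details from the Zatz article:

    def correct(student, key, rel_tol=0.01):
        """student and key are (mantissa, exponent) pairs, e.g. 2.5e-3 -> (2.5, -3)."""
        s_val = student[0] * 10 ** student[1]
        k_val = key[0] * 10 ** key[1]
        return abs(s_val - k_val) <= rel_tol * abs(k_val)

    answers = {"Q1": (2.5, -3), "Q2": (1.0, 2)}
    key     = {"Q1": (2.5, -3), "Q2": (9.9, 1)}
    missed  = [q for q in key if not correct(answers[q], key[q])]
    grade   = 100 * (len(key) - len(missed)) / len(key)
    print(grade, missed)  # the posted table would list ID, grade, and missed items
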
Peer reviewed
Stocking, Martha L. – Journal of Educational and Behavioral Statistics, 1996
An alternative method for scoring adaptive tests, based on number-correct scores, is explored and compared with a method that relies more directly on item response theory. Using the number-correct score with necessary adjustment for intentional differences in adaptive test difficulty is a statistically viable scoring method. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Response Theory
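
A common way to make number-correct scores comparable across adaptive forms of differing difficulty, as the Stocking abstract describes, is to invert each form's test characteristic curve. The 2PL model and the made-up item parameters below are assumptions, not necessarily the article's method:

    import math

    def p_correct(theta, a, b):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))  # 2PL item response

    def expected_score(theta, items):
        # test characteristic curve: expected number correct at ability theta
        return sum(p_correct(theta, a, b) for a, b in items)

    def theta_from_number_correct(score, items, lo=-4.0, hi=4.0):
        """Bisection: find the ability whose expected score matches the raw score."""
        for _ in range(60):
            mid = (lo + hi) / 2
            if expected_score(mid, items) < score:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    easy_form = [(1.0, -1.0), (1.2, -0.5), (0.9, 0.0)]  # (a, b) item parameters
    hard_form = [(1.0,  0.5), (1.2,  1.0), (0.9, 1.5)]
    # The same raw score of 2 maps to different abilities on the two forms --
    # that difference is the adjustment for intentional differences in difficulty.
    print(theta_from_number_correct(2, easy_form))
    print(theta_from_number_correct(2, hard_form))
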
Peer reviewed
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis – Applied Psychological Measurement, 2000
Describes three open-ended response types that could broaden the conception of mathematical problem solving used in computerized admissions tests: (1) mathematical expression (ME); (2) generating examples (GE); and (3) graphical modeling (GM). Illustrates how combining ME, GE, and GM can form extended constructed response problems. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Constructed Response, Mathematics Tests
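
Scoring a mathematical-expression (ME) response generally means testing whether the examinee's expression is equivalent to the key rather than character-identical to it. A sketch using sympy's symbolic simplification; the library choice is an illustration, not the authors' implementation:

    from sympy import simplify
    from sympy.parsing.sympy_parser import parse_expr

    def equivalent(response, key):
        """True if the two expressions simplify to the same thing."""
        return simplify(parse_expr(response) - parse_expr(key)) == 0

    print(equivalent("2*x + 2*y", "2*(x + y)"))   # True
    print(equivalent("x**2 - 1", "(x-1)*(x+1)"))  # True
    print(equivalent("x + 1", "x - 1"))           # False
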
Peer reviewed
McHenry, Bill; Griffith, Leonard; McHenry, Jim – T.H.E. Journal, 2004
Imagine administering an online standardized test to an entire class of 11th-grade students when, halfway through the exam, the server holding the test hits a snag and throws everyone offline. Imagine another scenario in which an elementary school has very few computers so teachers must bus their students to the local high school for a timed test.…
Descriptors: Computer Assisted Testing, Risk, Evaluation Methods, Federal Legislation
Peer reviewed
PDF on ERIC
Hu, Xiangen, Ed.; Barnes, Tiffany, Ed.; Hershkovitz, Arnon, Ed.; Paquette, Luc, Ed. – International Educational Data Mining Society, 2017
The 10th International Conference on Educational Data Mining (EDM 2017) is held under the auspices of the International Educational Data Mining Society at the Optics Valley Kingdom Plaza Hotel in Wuhan, Hubei Province, China. This year's conference features two invited talks by: Dr. Jie Tang, Associate Professor with the Department of Computer…
Descriptors: Data Analysis, Data Collection, Graphs, Data Use
Kaplan, Randy M.; Bennett, Randy Elliot – 1994
This study explores the potential for using a computer-based scoring procedure for the formulating-hypotheses (F-H) item. This item type presents a situation and asks the examinee to generate explanations for it. Each explanation is judged right or wrong, and the number of creditable explanations is summed to produce an item score. Scores were…
Descriptors: Automation, Computer Assisted Testing, Correlation, Higher Education