Showing all 12 results
Peer reviewed
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Peer reviewed
Kuo, Bor-Chen; Liao, Chen-Huei; Pai, Kai-Chih; Shih, Shu-Chuan; Li, Cheng-Hsuan; Mok, Magdalena Mo Ching – Educational Psychology, 2020
The current study explores students' collaboration and problem solving (CPS) abilities using a human-to-agent (H-A) computer-based collaborative problem solving assessment. Five CPS assessment units with 76 conversation-based items were constructed using the PISA 2015 CPS framework. In the experiment, 53,855 ninth and tenth graders in Taiwan were…
Descriptors: Computer Assisted Testing, Cooperative Learning, Problem Solving, Item Response Theory
Peer reviewed
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats--the figural response (FR) and constructed response (CR) formats used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Peer reviewed
Howard, Keith E.; Anderson, Kenneth A. – Middle Grades Research Journal, 2010
Stereotype threat research has demonstrated how presenting situational cues in a testing environment, such as raising the salience of negative stereotypes, can adversely affect test performance (Perry, Steele, & Hilliard, 2003; Steele & Aronson, 1995) and expectancy (Cadinu, Maass, Frigerio, Impagliazzo, & Latinotti, 2003; Stangor,…
Descriptors: Cues, Stereotypes, Standardized Tests, Foreign Countries
Atkinson, George F.; Doadt, Edward – Assessment in Higher Education, 1980
Some perceived difficulties with conventional multiple choice tests are mentioned, and a modified form of examination is proposed. It uses a computer program to award partial marks for partially correct answers and full marks for correct answers, and to check for widespread misunderstanding of an item or subject. (MSE)
Descriptors: Achievement Tests, Computer Assisted Testing, Higher Education, Multiple Choice Tests
Dunkleberger, Gary; Heikkinen, Henry – Curriculum Review, 1983
Discusses frustrations teachers encounter when faced with mastery-based instruction, and how microcomputers can be effective in mastery testing, freeing teachers from all but instructional duties. Other unexpected benefits of microcomputer use relating to record keeping, teacher role, and multiple choice tests are outlined. (MBR)
Descriptors: Achievement Tests, Classroom Environment, Computer Assisted Testing, Computer Programs
Davis, Michael G.; Lunderville, Dan – Computing Teacher, 1983
Describes computer-assisted instruction and testing system used at University of Wisconsin which gives students opportunity to take individualized quizzes via interactive terminal, assigns individual students remedial work based on quiz performance, and accumulates student performance data for immediate use by instructor. Conversions to several…
Descriptors: Achievement Tests, Computer Assisted Instruction, Computer Assisted Testing, Computer Managed Instruction
Peer reviewed
Anderson, Paul S. – International Journal of Educology, 1988
Seven formats of educational testing were compared according to student preferences/perceptions of how well each test method evaluates learning. Formats compared include true/false, multiple-choice, matching, multi-digit testing (MDT), fill-in-the-blank, short answer, and essay. Subjects were 1,440 university students. Results indicate that tests…
Descriptors: Achievement Tests, College Students, Comparative Analysis, Computer Assisted Testing
Peer reviewed
Schneider, Thorsten – Electronic Journal of e-Learning, 2004
Selecting elite students out of a huge collective is a difficult task. The main problem is to provide automated processes to reduce human work. ECMS (Educational Contest Management System) is an online tool approach to help--fully or partly automated--with the task of selecting such elite students out of a mass of candidates. International tests…
Descriptors: Management Systems, Selective Admission, Admission Criteria, Automation
Thompson, Janet G.; Weiss, David J. – 1980
The relative validity of adaptive and conventional testing strategies using non-test variables as one set of external criteria was investigated. A total of 101 college students completed both a variable-length stradaptive test and a peaked conventional test; a second group of 131 college students completed a variable-length Bayesian adaptive test…
Descriptors: Achievement Tests, Adaptive Testing, College Entrance Examinations, Computer Assisted Testing
Anderson, Paul S.; Kanzler, Eileen M. – 1985
Test scores were compared for two types of objective achievement tests--multiple choice tests and the recently developed Multi-Digit Test (MDT) procedure. MDT is an approximation of the fill-in-the-blank technique. Students select their answers from long lists of alphabetized terms, with each answer corresponding to a number from 001 to 999. The…
Descriptors: Achievement Tests, Cloze Procedure, Comparative Testing, Computer Assisted Testing