Showing 1 to 15 of 53 results
Peer reviewed
Wang, Hei-Chia; Maslim, Martinus; Kan, Chia-Hao – Education and Information Technologies, 2023
Distance learning frees the learning process from spatial constraints. Each mode of distance learning, synchronous and asynchronous, has disadvantages. In synchronous learning, students face network bandwidth and noise concerns, whereas in asynchronous learning they have fewer opportunities for engagement, such as asking questions.…
Descriptors: Automation, Artificial Intelligence, Computer Assisted Testing, Asynchronous Communication
Peer reviewed
Semere Kiros Bitew; Amir Hadifar; Lucas Sterckx; Johannes Deleu; Chris Develder; Thomas Demeester – IEEE Transactions on Learning Technologies, 2024
Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Construction, Test Items
Peer reviewed
Orchard, Ryan K. – Journal of Educational Technology Systems, 2019
Learning management systems (LMS) allow online multiple-choice assessments ("tests") to be configured in a variety of ways, including the ability to allow multiple attempts, with options for which attempts count and how. These options are usually chosen according to the instinct of the instructor; however, LMS…
Descriptors: Integrated Learning Systems, Data Use, Electronic Learning, Assignments
Peer reviewed
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
Peer reviewed
PDF on ERIC
Toker, Deniz – TESL-EJ, 2019
The central purpose of this paper is to examine validity problems arising from the multiple-choice items and technical passages in the Test of English as a Foreign Language Internet-based Test (TOEFL iBT) reading section, primarily concentrating on construct-irrelevant variance (Messick, 1989). My personal TOEFL iBT experience, along with my…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
Peer reviewed
Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven – Applied Measurement in Education, 2013
This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…
Descriptors: Computer Assisted Testing, Item Response Theory, Test Construction, Models
Peer reviewed
Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández – Journal of Science Education and Technology, 2013
Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, "Eyegrade," a system for automatic grading of multiple…
Descriptors: Multiple Choice Tests, Grading, Computer Assisted Testing, Man Machine Systems
Peer reviewed
PDF on ERIC
Rybanov, Alexander Aleksandrovich – Turkish Online Journal of Distance Education, 2013
A set of criteria is offered for assessing the efficiency of the process by which answers to multiple-choice test items are formed. To increase the accuracy of computer-assisted testing results, it is suggested that the dynamics of forming the final answer be assessed using two factors: a loss-of-time factor and a correct-choice factor. The model…
Descriptors: Evaluation Criteria, Efficiency, Multiple Choice Tests, Test Items
Peer reviewed
PDF on ERIC
Martin, Dona L.; Itter, Diane – Australian Journal of Teacher Education, 2014
When our focus is on assessment, educators should work to value the nature of assessment. This paper presents a new approach to multiple-choice competency testing in mathematics education. The instrument discussed here reflects student competence, encourages self-regulatory learning behaviours, and links content with current curriculum documents and…
Descriptors: Foreign Countries, Teacher Education, Multiple Choice Tests, Mathematics Education
Peer reviewed
Alsubait, Tahani; Parsia, Bijan; Sattler, Uli – Research in Learning Technology, 2012
Different computational models for generating analogies of the form "A is to B as C is to D" have been proposed over the past 35 years. However, analogy generation is a challenging problem that requires further research. In this article, we present a new approach for generating analogies in Multiple Choice Question (MCQ) format that can be used…
Descriptors: Computer Assisted Testing, Programming, Computer Software, Computer Software Evaluation
Schifter, Catherine C.; Carey, Martha – International Association for Development of the Information Society, 2014
The No Child Left Behind (NCLB) legislation spawned a plethora of standardized testing services for all the high-stakes testing required by the law. We argue that one-size-fits-all assessments disadvantage students in the USA who are English Language Learners, as well as students with limited economic resources, special needs, and those not reading on…
Descriptors: Standardized Tests, Models, Evaluation Methods, Educational Legislation
Peer reviewed
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats--the figural response (FR) and constructed response (CR) formats used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Peer reviewed
Gekara, Victor Oyaro; Bloor, Michael; Sampson, Helen – Journal of Vocational Education and Training, 2011
Vocational education and training (VET) concerns the cultivation and development of specific skills and competencies, in addition to broad underpinning knowledge relating to paid employment. VET assessment is, therefore, designed to determine the extent to which a trainee has effectively acquired the knowledge, skills, and competencies required by…
Descriptors: Marine Education, Occupational Safety and Health, Computer Assisted Testing, Vocational Education
Peer reviewed
Vannest, Kimberly J.; Parker, Richard; Dyer, Nicole – Journal of Special Education, 2011
This article presents procedures and results from a 2-year project developing science key vocabulary (KV) short tests suitable for progress monitoring Grade 5 science in Texas public schools using computer-generated, -administered, and -scored assessments. KV items included KV definitions and important usages in a multiple-choice cloze format. A…
Descriptors: Grade 5, Low Achievement, Vocabulary, Science Tests
Nese, Joseph F. T.; Anderson, Daniel; Hoelscher, Kyle; Tindal, Gerald; Alonzo, Julie – Behavioral Research and Teaching, 2011
Curriculum-based measurement (CBM) is designed to measure students' academic status and growth so the effectiveness of instruction may be evaluated. In the most popular forms of reading CBM, the student's oral reading fluency is assessed. This behavior is difficult to sample in a computer-based format, a limitation that may be a function of the…
Descriptors: Curriculum Based Assessment, Silent Reading, Reading Fluency, Vocabulary