Showing all 15 results
Peer reviewed
Direct link
Stefan O'Grady – International Journal of Listening, 2025
Language assessment is increasingly computer-mediated. This development presents opportunities in the form of new task formats, and equally a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research that demonstrates enhanced construct…
Descriptors: Language Tests, Lecture Method, Listening Comprehension Tests, Multiple Choice Tests
Peer reviewed
Direct link
Boote, Stacy K.; Boote, David N.; Williamson, Steven – Cogent Education, 2021
Several decades of research suggested differences in test performance across paper-based and computer-based assessments; these differences have been largely ameliorated through attention to test presentation equivalence, though no studies to date have focused on graph comprehension items. Test items requiring graph comprehension are increasingly common but may be…
Descriptors: Graduate Students, Masters Programs, Business Administration Education, Graphs
Peer reviewed
PDF on ERIC Download full text
Sahin, Muhittin; Aydin, Furkan; Sulak, Sema; Müftüoglu, Cennet Terzi; Tepgeç, Mustafa; Yilmaz, Gizem Karaoglan; Yilmaz, Ramazan; Yurdugül, Halil – International Association for Development of the Information Society, 2021
The use of technology for teaching and learning has created a paradigm shift in learning environments and learning processes, and this shift has also affected assessment. In addition, online environments provide more opportunities to assess learners. In this study, the Adaptive Mastery Testing (AMT)…
Descriptors: Teaching Methods, Learning Processes, Adaptive Testing, Computer Assisted Testing
Peer reviewed
Direct link
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification
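The classification logic this entry alludes to can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' procedure: it assumes an adaptive test has produced an ability estimate with a standard error, and assigns a category only once the confidence band clears a cut score, which is one common way adaptive designs reduce misclassification.

```python
# Hypothetical sketch of confidence-based classification in a CAT,
# not the procedure used in the study: testing continues until the
# confidence band around the ability estimate clears the cut score.

def classify(theta_hat: float, se: float, cut: float = 0.0, z: float = 1.96):
    """Return a category once the CI around theta_hat clears `cut`."""
    if theta_hat - z * se > cut:
        return "above cut"
    if theta_hat + z * se < cut:
        return "below cut"
    return None  # undecided: administer another item

print(classify(theta_hat=0.9, se=0.3))  # 'above cut' (0.9 - 0.588 > 0)
print(classify(theta_hat=0.2, se=0.4))  # None -> keep testing
```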
Peer reviewed
PDF on ERIC Download full text
Gutl, Christian; Lankmayr, Klaus; Weinhofer, Joachim; Hofler, Margit – Electronic Journal of e-Learning, 2011
Research on the automated creation of test items for assessment purposes has become increasingly important in recent years. Automatic question creation makes it possible to support personalized and self-directed learning activities by preparing appropriate, individualized test items quite easily, with relatively little effort, or even fully…
Descriptors: Test Items, Semantics, Multilingualism, Language Processing
Peer reviewed
PDF on ERIC Download full text
Bulut, Okan; Kan, Adnan – Eurasian Journal of Educational Research, 2012
Problem Statement: Computerized adaptive testing (CAT) is a sophisticated and efficient way of delivering examinations. In CAT, items for each examinee are selected from an item bank based on the examinee's responses to the items. In this way, the difficulty level of the test is adjusted based on the examinee's ability level. Instead of…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Entrance Examinations, Graduate Students
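As a concrete gloss on the selection-and-adjustment loop this abstract describes, here is a minimal sketch under a Rasch (1PL) model. The item bank, the step-size update, and the fixed test length are illustrative assumptions, not details of the study.

```python
import math
import random

def p_correct(theta: float, b: float) -> float:
    """Rasch (1PL) probability of answering an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta: float, bank: list, used: set) -> int:
    """Select the unused item whose difficulty best matches the current estimate."""
    return min((i for i in range(len(bank)) if i not in used),
               key=lambda i: abs(bank[i] - theta))

bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]   # assumed difficulties
true_theta, theta, used = 0.7, 0.0, set()
for _ in range(5):                                # fixed length, for brevity
    i = next_item(theta, bank, used)
    used.add(i)
    correct = random.random() < p_correct(true_theta, bank[i])
    theta += 0.5 if correct else -0.5             # crude step update, not MLE
print(f"final estimate: {theta:.1f}")
```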
Peer reviewed
Direct link
Moore, David Richard – Journal of Interactive Online Learning, 2006
Instructional strategies for successfully teaching concepts are found throughout the instructional design literature. These strategies primarily consist of presenting learners with definitions, examples, and non-examples. While examples are important presentation instruments, theorists suggest that examples should not be reused in the assessment…
Descriptors: Instructional Design, Concept Formation, Test Items, Student Evaluation
Peer reviewed
Direct link
Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine comparability of an online version to the original paper-pencil version of Defining Issues Test 2 (DIT2). This study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions.…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory
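For readers unfamiliar with the IRT side of such comparability analyses, the model family typically assumed (an illustration; the abstract does not specify the model the authors fit) is the two-parameter logistic, in which the probability of a correct response to item i depends on ability through a discrimination parameter a_i and a difficulty parameter b_i:

```latex
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}
```

Roughly speaking, comparability of the paper-pencil and online versions then amounts to checking that the estimated item parameters do not shift between administration modes.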
PDF pending restoration
Plake, Barbara S.; And Others – 1994
In self-adapted testing (SAT), examinees select the difficulty level of items administered. This study investigated three variations of prior information provided when taking an SAT: (1) no information (examinees selected item difficulty levels without prior information); (2) view (examinees inspected a typical item from each difficulty level…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Difficulty Level
Peer reviewed
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis; Rock, Donald A.; Singley, Mark K.; Katz, Irvin R.; Nhouyvanisvong, Adisack – Journal of Educational Measurement, 1999
Evaluated a computer-delivered response type for measuring quantitative skill, the "Generating Examples" (GE) response type, which presents under-determined problems that can have many right answers. Results from 257 graduate students and applicants indicate that GE scores are reasonably reliable, but only moderately related to Graduate…
Descriptors: College Applicants, Computer Assisted Testing, Graduate Students, Graduate Study
Bennett, Randy Elliot; Rock, Donald A. – 1993
Formulating-Hypotheses (F-H) items present a situation and ask the examinee to generate as many explanations for it as possible. This study examined the generalizability, validity, and examinee perceptions of a computer-delivered version of the task. Eight F-H questions were administered to 192 graduate students. Half of the items restricted…
Descriptors: Computer Assisted Testing, Difficulty Level, Generalizability Theory, Graduate Students
Peer reviewed
Clariana, Roy B. – International Journal of Instructional Media, 2004
This investigation considers the instructional effects of color as an over-arching context variable when learning from computer displays. The purpose of this investigation is to examine the posttest retrieval effects of color as a local, extra-item non-verbal lesson context variable for constructed-response versus multiple-choice posttest…
Descriptors: Instructional Effectiveness, Graduate Students, Color, Computer System Design
Powell, Z. Emily – 1992
Little research exists on the psychological impacts of computerized adaptive testing (CAT) and how it may affect test performance. Three CAT procedures were examined, in which items were selected to match students' achievement levels, from the item pool at random, or according to student choice of item difficulty levels. Twenty-four graduate…
Descriptors: Academic Achievement, Adaptive Testing, Comparative Testing, Computer Assisted Testing
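The three item-selection procedures this abstract names (and the computerized-adaptive versus self-adapted contrast in the Roos et al. and Wise et al. entries below) can be read as interchangeable selection policies. A hypothetical sketch, with the difficulty levels and the matching rule assumed for illustration:

```python
import random

# Three interchangeable item-selection policies, as a hypothetical
# illustration of the procedures the abstract names.

def match_ability(levels, theta_hat):
    """Adaptive: pick the difficulty closest to the current ability estimate."""
    return min(levels, key=lambda b: abs(b - theta_hat))

def pick_random(levels, _theta_hat=None):
    """Random: draw a difficulty level from the pool at random."""
    return random.choice(levels)

def examinee_choice(levels, _theta_hat=None):
    """Self-adapted: the examinee chooses the difficulty level."""
    return float(input(f"Choose a difficulty from {sorted(levels)}: "))

levels = [-1.0, 0.0, 1.0]
print(match_ability(levels, theta_hat=0.4))  # -> 0.0
```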
Roos, Linda L.; And Others – 1992
Computerized adaptive (CA) testing uses an algorithm to match examinee ability to item difficulty, while self-adapted (SA) testing allows the examinee to choose the difficulty of his or her items. Research comparing SA and CA testing has shown that examinees experience lower anxiety and improved performance with SA testing. All previous research…
Descriptors: Ability Identification, Adaptive Testing, Algebra, Algorithms
Peer reviewed
Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT)--students choose the difficulty level of their test items--was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level