Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 16 |
| Since 2022 (last 5 years) | 64 |
| Since 2017 (last 10 years) | 155 |
| Since 2007 (last 20 years) | 250 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 362 |
| Multiple Choice Tests | 362 |
| Foreign Countries | 109 |
| Test Items | 109 |
| Test Construction | 83 |
| Student Evaluation | 68 |
| Higher Education | 65 |
| Test Format | 64 |
| College Students | 57 |
| Scores | 54 |
| Comparative Analysis | 45 |
Author
| Author | Records |
| --- | --- |
| Anderson, Paul S. | 6 |
| Clariana, Roy B. | 4 |
| Wise, Steven L. | 4 |
| Alonzo, Julie | 3 |
| Anderson, Daniel | 3 |
| Bridgeman, Brent | 3 |
| Davison, Mark L. | 3 |
| Kosh, Audra E. | 3 |
| Nese, Joseph F. T. | 3 |
| Park, Jooyong | 3 |
| Seipel, Ben | 3 |
Location
| Location | Records |
| --- | --- |
| United Kingdom | 14 |
| Australia | 9 |
| Canada | 9 |
| Turkey | 9 |
| Germany | 5 |
| Spain | 4 |
| Taiwan | 4 |
| Texas | 4 |
| Arizona | 3 |
| Europe | 3 |
| Indonesia | 3 |
Laws, Policies, & Programs
| No Child Left Behind Act 2001 | 2 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Does not meet standards | 1 |
Kosh, Audra E.; Greene, Jeffrey A.; Murphy, P. Karen; Burdick, Hal; Firetto, Carla M.; Elmore, Jeff – Educational Measurement: Issues and Practice, 2018
We explored the feasibility of using automated scoring to assess upper-elementary students' reading ability through analysis of transcripts of students' small-group discussions about texts. Participants included 35 fourth-grade students across two classrooms that engaged in a literacy intervention called Quality Talk. During the course of one…
Descriptors: Computer Assisted Testing, Small Group Instruction, Group Discussion, Student Evaluation
Kosh, Audra E.; Greene, Jeffrey A.; Murphy, P. Karen; Burdick, Hal; Firetto, Carla M.; Elmore, Jeff – Grantee Submission, 2018
We explored the feasibility of using automated scoring to assess upper-elementary students' reading ability through analysis of transcripts of students' small-group discussions about texts. Participants included 35 fourth-grade students across two classrooms that engaged in a literacy intervention called Quality Talk. During the course of one…
Descriptors: Computer Assisted Testing, Small Group Instruction, Group Discussion, Student Evaluation
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven – Applied Measurement in Education, 2013
This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…
Descriptors: Computer Assisted Testing, Item Response Theory, Test Construction, Models
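For orientation only: the entry above refers to the three-parameter logistic (3PL) model and its testlet extension (Wang, Bradlow, & Wainer, 2002). The standard 3PL item response function shown below is general IRT background rather than the authors' exact specification; the testlet variant additionally includes a person-by-testlet effect in the exponent.

$$P_i(\theta_j) \;=\; c_i + (1 - c_i)\,\frac{1}{1 + \exp\!\bigl[-a_i(\theta_j - b_i)\bigr]}$$

Here $a_i$, $b_i$, and $c_i$ are the discrimination, difficulty, and pseudo-guessing parameters of item $i$, and $\theta_j$ is the ability of examinee $j$; in the testlet model the exponent is commonly written as $-a_i(\theta_j - b_i - \gamma_{jd(i)})$, where $\gamma_{jd(i)}$ captures the effect of testlet $d(i)$ on person $j$.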
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André – Applied Measurement in Education, 2016
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Descriptors: Psychometrics, Multiple Choice Tests, Test Items, Item Analysis
Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia – European Educational Research Journal, 2017
Combinations of different item formats are found quite often in large-scale assessments, and dimensionality analyses often indicate that such tests are multidimensional with respect to task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…
Descriptors: Foreign Countries, Computer Literacy, Information Literacy, International Assessment
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Iwamoto, Darren H.; Hargis, Jace; Taitano, Erik Jon; Vuong, Ky – Turkish Online Journal of Distance Education, 2017
Lower than expected high-stakes examination scores were being observed in a first-year general psychology class. This research sought an alternate approach that would assist students in preparing for high-stakes examinations. The purpose of this study was to measure the effectiveness of an alternate teaching approach based on the testing effect to…
Descriptors: Educational Technology, High Stakes Tests, College Freshmen, Introductory Courses
Kiliçkaya, Ferit – Teaching English with Technology, 2017
This study aimed to determine EFL (English as a Foreign Language) teachers' perceptions and experience regarding their use of "GradeCam Go!" to grade multiple choice tests. The results of the study indicated that the participants overwhelmingly valued "GradeCam Go!" due to its features such as grading printed forms for…
Descriptors: English (Second Language), Second Language Instruction, Teacher Attitudes, Multiple Choice Tests
Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández – Journal of Science Education and Technology, 2013
Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, "Eyegrade," a system for automatic grading of multiple…
Descriptors: Multiple Choice Tests, Grading, Computer Assisted Testing, Man Machine Systems
Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent – International Journal of Testing, 2017
This study uses the feature sets employed by two automated scoring engines to determine if a "linguistic profile" could be formulated that would help identify items that are likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1200 students where demographic information…
Descriptors: Computer Assisted Testing, Scoring, Hypothesis Testing, Essays
Çetinavci, Ugur Recep; Öztürk, Ismet – Online Submission, 2017
Pragmatic competence is among the explicitly acknowledged sub-competences that make up communicative competence in any language (Bachman & Palmer, 1996; Council of Europe, 2001). Within the notion of pragmatic competence itself, "implicature (implied meanings)" comes to the fore as one of its five main areas (Levinson, 1983).…
Descriptors: Test Construction, Computer Assisted Testing, Communicative Competence (Languages), Second Language Instruction
Rybanov, Alexander Aleksandrovich – Turkish Online Journal of Distance Education, 2013
A set of criteria is offered for assessing the efficiency of the process by which answers to multiple-choice test items are formed. To increase the accuracy of computer-assisted testing results, it is suggested that the dynamics of forming the final answer be assessed using two factors: a loss-of-time factor and a correct-choice factor. The model…
Descriptors: Evaluation Criteria, Efficiency, Multiple Choice Tests, Test Items
Lin, Sheau-Wen; Liu, Yu; Chen, Shin-Feng; Wang, Jing-Ru; Kao, Huey-Lien – International Journal of Science and Mathematics Education, 2015
The purpose of this study was to develop a computer-based assessment for elementary school students' listening comprehension of science talk within an inquiry-oriented environment. The development procedure had 3 steps: a literature review to define the framework of the test, collecting and identifying key constructs of science talk, and…
Descriptors: Listening Comprehension, Science Education, Computer Assisted Testing, Test Construction
Kahraman, Nilüfer – Eurasian Journal of Educational Research, 2014
Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered due to the challenges encountered in working with complicated data sets in which local…
Descriptors: Item Response Theory, Licensing Examinations (Professions), Performance Based Assessment, Computer Simulation

