Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 79 |
| Since 2007 (last 20 years) | 240 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 300 |
| Statistical Analysis | 300 |
| Scores | 101 |
| Foreign Countries | 99 |
| Comparative Analysis | 76 |
| Test Items | 65 |
| Correlation | 63 |
| Language Tests | 60 |
| Second Language Learning | 60 |
| English (Second Language) | 59 |
| College Students | 48 |
Author
| Author | Count |
| --- | --- |
| Alonzo, Julie | 4 |
| Tindal, Gerald | 4 |
| Weiss, David J. | 4 |
| Anderson, Daniel | 3 |
| Attali, Yigal | 3 |
| Chang, Hua-Hua | 3 |
| Choi, Seung W. | 3 |
| Lai, Cheng-Fei | 3 |
| Nese, Joseph F. T. | 3 |
| Tatsuoka, Kikumi | 3 |
| Alexander, Melody W. | 2 |
Education Level
| Level | Count |
| --- | --- |
| Higher Education | 108 |
| Postsecondary Education | 80 |
| Elementary Education | 41 |
| Secondary Education | 38 |
| Grade 4 | 22 |
| Middle Schools | 22 |
| Grade 5 | 21 |
| Grade 6 | 20 |
| Grade 7 | 19 |
| Early Childhood Education | 18 |
| Grade 3 | 18 |
Location
| Location | Count |
| --- | --- |
| Australia | 11 |
| China | 9 |
| Turkey | 9 |
| Texas | 8 |
| Germany | 7 |
| Iran | 7 |
| Taiwan | 7 |
| Canada | 6 |
| United Kingdom | 6 |
| California | 5 |
| United States | 5 |
Laws, Policies, & Programs
| Law / Program | Count |
| --- | --- |
| Elementary and Secondary… | 1 |
| Race to the Top | 1 |
Cho, Sun-Joo; Goodwin, Amanda; Salas, Jorge; Mueller, Sophia – Grantee Submission, 2025
This study incorporates a random forest (RF) approach to probe complex interactions and nonlinearity among predictors into an item response model with the goal of using a hybrid approach to outperform either an RF or explanatory item response model (EIRM) only in explaining item responses. In the specified model, called EIRM-RF, predicted values…
Descriptors: Item Response Theory, Artificial Intelligence, Statistical Analysis, Predictor Variables
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English
Ganzfried, Sam; Yusuf, Farzana – Education Sciences, 2018
A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are used based on rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were…
Descriptors: Weighted Scores, Test Construction, Student Evaluation, Multiple Choice Tests
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Assessment for Effective Intervention, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Cohen, Yoav; Levi, Effi; Ben-Simon, Anat – Applied Measurement in Education, 2018
In the current study, two pools of 250 essays, all written as a response to the same prompt, were rated by two groups of raters (14 or 15 raters per group), thereby providing an approximation to the essay's true score. An automated essay scoring (AES) system was trained on the datasets and then scored the essays using a cross-validation scheme. By…
Descriptors: Test Validity, Automation, Scoring, Computer Assisted Testing
Öz, Hüseyin; Özturan, Tuba – Journal of Language and Linguistic Studies, 2018
This article reports the findings of a study that sought to investigate whether computer-based vs. paper-based test-delivery mode has an impact on the reliability and validity of an achievement test for a pedagogical content knowledge course in an English teacher education program. A total of 97 university students enrolled in the English as a…
Descriptors: Computer Assisted Testing, Testing, Test Format, Teaching Methods
Barabadi, Elyas; Khajavy, Gholam Hassan; Kamrood, Ali Mehri – International Journal of Instruction, 2018
The current study reports the results of a project aimed at assessing L2 listening comprehension by drawing on two approaches to dynamic assessment: interventionist and interactionist. The former approach was actualized by providing two graduated hints which were fixed and standardized for all test takers while the latter was actualized by asking…
Descriptors: Intervention, Interaction, Listening Comprehension, Listening Comprehension Tests
Pujayanto, Pujayanto; Budiharti, Rini; Adhitama, Egy; Nuraini, Niken Rizky Amalia; Putri, Hanung Vernanda – Physics Education, 2018
This research proposes the development of a web-based assessment system to identify students' misconception. The system, named WAS (web-based assessment system), can identify students' misconception profile on linear kinematics automatically after the student has finished the test. The test instrument was developed and validated. Items were…
Descriptors: Misconceptions, Physics, Science Instruction, Databases
Ting, Mu Yu – EURASIA Journal of Mathematics, Science & Technology Education, 2017
Using the capabilities of expert knowledge structures, the researcher prepared test questions on the university calculus topic of "finding the area by integration." The quiz is divided into two types of multiple choice items (one out of four and one out of many). After the calculus course was taught and tested, the results revealed that…
Descriptors: Calculus, Mathematics Instruction, College Mathematics, Multiple Choice Tests
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Mao, Liyang; Liu, Ou Lydia; Roohr, Katrina; Belur, Vinetha; Mulholland, Matthew; Lee, Hee-Sun; Pallant, Amy – Educational Assessment, 2018
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students' construction and revision of scientific arguments. The assessment is built upon automated scoring of students' arguments and provides feedback to students and teachers.…
Descriptors: Computer Assisted Testing, Science Tests, Scoring, Automation
Ozsoy, Seyma Nur; Kilmen, Sevilay – International Journal of Assessment Tools in Education, 2023
In this study, Kernel test equating methods were compared under NEAT and NEC designs. In NEAT design, Kernel post-stratification and chain equating methods taking into account optimal and large bandwidths were compared. In the NEC design, gender and/or computer/tablet use was considered as a covariate, and Kernel test equating methods were…
Descriptors: Equated Scores, Testing, Test Items, Statistical Analysis
von Davier, Matthias – Quality Assurance in Education: An International Perspective, 2018
Purpose: Surveys that include skill measures may suffer from additional sources of error compared to those containing questionnaires alone. Examples are distractions such as noise or interruptions of testing sessions, as well as fatigue or lack of motivation to succeed. This paper aims to provide a review of statistical tools based on latent…
Descriptors: Statistical Analysis, Surveys, International Assessment, Error Patterns
He, Lianzhen; Min, Shangchao – Language Assessment Quarterly, 2017
The first aim of this study was to develop a computer adaptive EFL test (CALT) that assesses test takers' listening and reading proficiency in English with dichotomous items and polytomous testlets. We reported in detail on the development of the CALT, including item banking, determination of suitable item response theory (IRT) models for item…
Descriptors: Computer Assisted Testing, Adaptive Testing, English (Second Language), Second Language Learning
