Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 7
Since 2006 (last 20 years): 11
Descriptor
Computer Assisted Testing: 23
Difficulty Level: 23
Scoring: 23
Test Items: 12
Adaptive Testing: 8
Test Construction: 8
Item Response Theory: 7
Foreign Countries: 5
Scores: 5
Test Format: 5
Higher Education: 4
Publication Type
Journal Articles: 11
Reports - Research: 11
Reports - Evaluative: 7
Speeches/Meeting Papers: 5
Reports - Descriptive: 3
Dissertations/Theses -…: 2
Tests/Questionnaires: 1
Education Level
Elementary Education: 4
Elementary Secondary Education: 2
Secondary Education: 2
Grade 5: 1
Grade 6: 1
High Schools: 1
Higher Education: 1
Postsecondary Education: 1
Assessments and Surveys
ACTFL Oral Proficiency…: 1
Test of English as a Foreign…: 1
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
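The abstract above contrasts ways of turning a multiple-response selection into a raw score. The Python sketch below shows three generic scoring rules of that kind (all-or-nothing, per-option partial credit, and a plus/minus rule floored at zero); these are illustrative conventions, not necessarily the specific methods Betts et al. compare.

```python
# Illustrative raw-score rules for a multiple-response item.
# Generic examples only; not the exact methods from Betts et al. (2022).

def score_all_or_nothing(selected: set[str], keyed: set[str]) -> float:
    """Full credit only when the selection matches the key exactly."""
    return 1.0 if selected == keyed else 0.0

def score_partial_credit(selected: set[str], keyed: set[str],
                         options: set[str]) -> float:
    """Credit for each option handled correctly (selected if keyed,
    left blank if not), divided by the number of options."""
    correct_decisions = sum((opt in selected) == (opt in keyed) for opt in options)
    return correct_decisions / len(options)

def score_plus_minus(selected: set[str], keyed: set[str]) -> float:
    """Credit for keyed selections minus a penalty for unkeyed ones,
    floored at zero and scaled to [0, 1]."""
    hits = len(selected & keyed)
    false_alarms = len(selected - keyed)
    return max(hits - false_alarms, 0) / len(keyed)

# Example: key is {A, C}; examinee selects {A, B}.
options = {"A", "B", "C", "D"}
print(score_all_or_nothing({"A", "B"}, {"A", "C"}))            # 0.0
print(score_partial_credit({"A", "B"}, {"A", "C"}, options))   # 0.5
print(score_plus_minus({"A", "B"}, {"A", "C"}))                # 0.0
```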
Parker, Mark A. J.; Hedgeland, Holly; Jordan, Sally E.; Braithwaite, Nicholas St. J. – European Journal of Science and Mathematics Education, 2023
The study covers the development and testing of the alternative mechanics survey (AMS), a modified force concept inventory (FCI), which used automatically marked free-response questions. Data were collected over a period of three academic years from 611 participants who were taking physics classes at high school and university level. A total of…
Descriptors: Test Construction, Scientific Concepts, Physics, Test Reliability
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
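One of the procedures mentioned above is a logistic transformation of AES raw scores. A minimal sketch of such a transformation, assuming the raw score is first rescaled to a proportion of the maximum possible score (the exact rescaling used in the study may differ):

```python
import math

def raw_to_logit(raw: float, max_score: float) -> float:
    """Convert a raw score to a logit: ln(p / (1 - p)),
    where p is the score as a proportion of the maximum."""
    p = raw / max_score
    # Clamp away from 0 and 1 so the logit stays finite for extreme scores.
    p = min(max(p, 0.005), 0.995)
    return math.log(p / (1 - p))

print(raw_to_logit(18, 24))   # ~ +1.10 logits
print(raw_to_logit(6, 24))    # ~ -1.10 logits
```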
Reddick, Rachel – International Educational Data Mining Society, 2019
One significant challenge in the field of measuring ability is measuring the current ability of a learner while they are learning. Many forms of inference become computationally complex in the presence of time-dependent learner ability, and are not feasible to implement in an online context. In this paper, we demonstrate an approach which can…
Descriptors: Measurement Techniques, Mathematics, Assignments, Learning
Miyamoto, Mayu – ProQuest LLC, 2019
Despite an emphasis on oral communication in most foreign language classrooms, the resource-intensive nature (i.e., time and manpower) of speaking tests hinders regular oral assessments. A possible solution is the development of a (semi-) automated scoring system. When it is used in conjunction with human raters, the consistency of computers can…
Descriptors: Second Language Learning, Speech Communication, Oral Language, Foreign Countries
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, by changing the amount and order of the content presented, the test taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
Brinkhuis, Matthieu J. S.; Savi, Alexander O.; Hofman, Abe D.; Coomans, Frederik; van der Maas, Han L. J.; Maris, Gunter – Journal of Learning Analytics, 2018
With the advent of computers in education, and the ample availability of online learning and practice environments, enormous amounts of data on learning become available. The purpose of this paper is to present a decade of experience with analyzing and improving an online practice environment for math, which has thus far recorded over a billion…
Descriptors: Data Analysis, Mathematics Instruction, Accuracy, Reaction Time
Mitchell, Alison M.; Truckenmiller, Adrea; Petscher, Yaacov – Communique, 2015
As part of the Race to the Top initiative, the United States Department of Education made nearly 1 billion dollars available in State Educational Technology grants with the goal of ramping up school technology. One result of this effort is that states, districts, and schools across the country are using computerized assessments to measure their…
Descriptors: Computer Assisted Testing, Educational Technology, Testing, Efficiency
Thompson, Carrie A. – ProQuest LLC, 2013
The Missionary Training Center (MTC), affiliated with the Church of Jesus Christ of Latter-day Saints, needs a reliable and cost effective way to measure the oral language proficiency of missionaries learning Spanish. The MTC needed to measure incoming missionaries' Spanish language proficiency for training and classroom assignment as well as to…
Descriptors: Religious Cultural Groups, Second Language Learning, Second Language Instruction, Interviews
Klinkenberg, S.; Straatemeier, M.; van der Maas, H. L. J. – Computers & Education, 2011
In this paper we present a model for computerized adaptive practice and monitoring. This model is used in the Maths Garden, a web-based monitoring system, which includes a challenging web environment for children to practice arithmetic. Using a new item response model based on the Elo (1978) rating system and an explicit scoring rule, estimates of…
Descriptors: Test Items, Reaction Time, Scoring, Probability
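The abstract above refers to an Elo (1978) rating update combined with an explicit scoring rule. A minimal sketch of the basic Elo-style update for adaptive practice follows; the actual Maths Garden model also folds response time into the score, and the K step size here is an assumed illustrative value, not taken from the paper.

```python
import math

K = 0.4  # step size (assumed value for illustration)

def expected_score(ability: float, difficulty: float) -> float:
    """Probability of success under a logistic (Rasch-like) model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def elo_update(ability: float, difficulty: float, score: float):
    """Move the person's ability and the item's difficulty in opposite
    directions by K times the prediction error."""
    e = expected_score(ability, difficulty)
    return ability + K * (score - e), difficulty - K * (score - e)

theta, beta = 0.0, 0.5                              # current estimates
theta, beta = elo_update(theta, beta, score=1.0)    # child answers correctly
print(round(theta, 3), round(beta, 3))              # ability rises, difficulty falls
```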
Hung, Pi-Hsia; Lin, Yu-Fen; Hwang, Gwo-Jen – Educational Technology & Society, 2010
Ubiquitous computing and mobile technologies provide a new perspective for designing innovative outdoor learning experiences. The purpose of this study is to propose a formative assessment design for integrating PDAs into ecology observations. Three learning activities were conducted in this study. An action research approach was applied to…
Descriptors: Foreign Countries, Feedback (Response), Action Research, Observation
Stocking, Martha L. – Journal of Educational and Behavioral Statistics, 1996
An alternative method for scoring adaptive tests, based on number-correct scores, is explored and compared with a method that relies more directly on item response theory. Using the number-correct score with necessary adjustment for intentional differences in adaptive test difficulty is a statistically viable scoring method. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Response Theory
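A common way to make number-correct scoring of an adaptive test adjust for differences in test difficulty is to invert the test characteristic curve of the items each examinee actually saw. The sketch below illustrates that general idea under an assumed 2PL model with a bisection search; it is not Stocking's exact procedure.

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def tcc(theta: float, items: list[tuple[float, float]]) -> float:
    """Expected number correct (test characteristic curve) over the
    items actually administered."""
    return sum(p_correct(theta, a, b) for a, b in items)

def theta_from_number_correct(nc: int, items, lo=-4.0, hi=4.0) -> float:
    """Bisection search for the theta whose expected score equals nc."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if tcc(mid, items) < nc:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Items this examinee happened to see: (discrimination a, difficulty b).
items = [(1.2, -0.5), (0.9, 0.0), (1.1, 0.4), (1.0, 0.8), (1.3, 1.2)]
print(round(theta_from_number_correct(3, items), 2))
```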
Stocking, Martha L. – 1994
Modern applications of computerized adaptive testing (CAT) are typically grounded in item response theory (IRT; Lord, 1980). While the IRT foundations of adaptive testing provide a number of approaches to adaptive test scoring that may seem natural and efficient to psychometricians, these approaches may be more demanding for test takers, test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Equated Scores
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
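A minimal sketch of the routing step in a two-stage design, assuming simple number-correct cutoffs on the stage-one test; the cutoff values and module labels are illustrative, not taken from the study.

```python
# Minimum stage-one score required to be routed to each stage-two module.
ROUTING_CUTOFFS = [(0, "easy"), (8, "medium"), (14, "hard")]

def route(stage_one_score: int) -> str:
    """Return the stage-two module for a given stage-one score."""
    module = ROUTING_CUTOFFS[0][1]
    for min_score, label in ROUTING_CUTOFFS:
        if stage_one_score >= min_score:
            module = label
    return module

for score in (5, 10, 16):
    print(score, "->", route(score))   # 5 -> easy, 10 -> medium, 16 -> hard
```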
Harris, Dickie A.; Penell, Roger J. – 1977
This study used a series of simulations to answer questions about the efficacy of adaptive testing raised by empirical studies. The first study showed that for reasonably high entry points, parameters estimated from paper-and-pencil test protocols cross-validated remarkably well to groups actually tested at a computer terminal. This suggested that…
Descriptors: Adaptive Testing, Computer Assisted Testing, Cost Effectiveness, Difficulty Level