Publication Date
In 2025 | 0 |
Since 2024 | 2 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 8 |
Since 2006 (last 20 years) | 8 |
Source
Grantee Submission | 3 |
AERA Online Paper Repository | 1 |
Applied Measurement in… | 1 |
Educational and Psychological… | 1 |
International Baltic… | 1 |
International Educational… | 1 |
Mathematics Education… | 1 |
Research-publishing.net | 1 |
Author
Wise, Steven L. | 4 |
Anderson, Paul S. | 3 |
Bergstrom, Betty | 2 |
Hyers, Albert D. | 2 |
Ito, Kyoko | 2 |
Roos, Linda L. | 2 |
Schnipke, Deborah L. | 2 |
Sykes, Robert C. | 2 |
Ackerman, Terry A. | 1 |
Adjei, Seth A. | 1 |
Sales, Adam C. | 1 |
Publication Type
Speeches/Meeting Papers | 48 |
Reports - Research | 31 |
Reports - Evaluative | 14 |
Journal Articles | 2 |
Opinion Papers | 2 |
Guides - Non-Classroom | 1 |
Reports - Descriptive | 1 |
Education Level
Middle Schools | 3 |
Elementary Education | 2 |
Intermediate Grades | 2 |
Grade 4 | 1 |
Grade 5 | 1 |
Grade 6 | 1 |
Higher Education | 1 |
Junior High Schools | 1 |
Postsecondary Education | 1 |
Secondary Education | 1 |
Audience
Researchers | 6 |
Practitioners | 1 |
Teachers | 1 |
Assessments and Surveys
National Assessment of… | 2 |
ACT Assessment | 1 |
Graduate Record Examinations | 1 |
Raven Progressive Matrices | 1 |
Test Anxiety Inventory | 1 |

Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open and close-ended activities. The growth and scalability of Computer Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize close-ended activities (i.e. Multiple-Choice Questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Reddick, Rachel – International Educational Data Mining Society, 2019
One significant challenge in the field of measuring ability is measuring the current ability of a learner while they are learning. Many forms of inference become computationally complex in the presence of time-dependent learner ability, and are not feasible to implement in an online context. In this paper, we demonstrate an approach which can…
Descriptors: Measurement Techniques, Mathematics, Assignments, Learning
Juškaite, Loreta – International Baltic Symposium on Science and Technology Education, 2019
New research results on the online testing method in the Latvian education system for learning process assessment are presented. Data mining is a very important field in education because it helps to analyse the data gathered in various studies and to implement changes in the education system according to the learning methods of…
Descriptors: Foreign Countries, Information Retrieval, Data Analysis, Data Use
Zesch, Torsten; Horbach, Andrea; Goggin, Melanie; Wrede-Jackes, Jennifer – Research-publishing.net, 2018
We present a tool for the creation and curation of C-tests. C-tests are an established tool in language proficiency testing and language learning. They require examinees to complete a text in which the second half of every second word is replaced by a gap. We support teachers and test designers in creating such tests through a web-based system…
Descriptors: Language Tests, Language Proficiency, Second Language Learning, Second Language Instruction
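The gapping rule the abstract describes (the second half of every second word is replaced by a gap) is mechanical enough to sketch directly. The helper below is a minimal illustration of that rule, not the authors' web-based tool; the function name and the choice to round the visible half up are assumptions.

```python
def make_ctest(text: str) -> str:
    """Gap the second half of every second word, C-test style."""
    words = text.split()
    out = []
    for i, w in enumerate(words):
        if i % 2 == 1 and len(w) > 1:
            gap = len(w) // 2           # second half of the word becomes blanks
            keep = len(w) - gap         # first half (rounded up) stays visible
            out.append(w[:keep] + "_" * gap)
        else:
            out.append(w)
    return " ".join(out)

print(make_ctest("Testing language proficiency with gapped words is common"))
# prints: Testing lang____ proficiency wi__ gapped wor__ is com___
```

Real C-test tools typically also leave the first and last sentences of the text intact as context; that refinement is omitted here for brevity.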
Coupland, Mary; Solina, Danica; Cave, Gregory E. – Mathematics Education Research Group of Australasia, 2017
In this paper, we report on developments in the Mastery Learning (ML) curriculum and assessment model that has been successfully implemented in a metropolitan university for teaching first-year mathematics. Initial responses to ML were positive; however, we ask whether the nature of the ML tests encourages a focus on shallow learning of…
Descriptors: Foreign Countries, Mastery Learning, College Freshmen, Engineering Education
Molnar, Gyongyver; Hodi, Agnes; Magyar, Andrea – AERA Online Paper Repository, 2016
Vocabulary knowledge assessment methods and instruments have gone through a significant evolution. Computer-based tests offer more opportunities than their paper-and-pencil counterparts; however, most digital vocabulary assessments are linear, and adaptive solutions in this domain are scarce. The aims of this study were to compare the effectiveness…
Descriptors: Adaptive Testing, Vocabulary Skills, Computer Assisted Testing, Student Evaluation
Adjei, Seth A.; Botelho, Anthony F.; Heffernan, Neil T. – Grantee Submission, 2016
Prerequisite skill structures have been closely studied in past years leading to many data-intensive methods aimed at refining such structures. While many of these proposed methods have yielded success, defining and refining hierarchies of skill relationships are often difficult tasks. The relationship between skills in a graph could either be…
Descriptors: Prediction, Learning Analytics, Attribution Theory, Prerequisites
Lau, C. Allen; Wang, Tianyou – 1999
A study was conducted to extend the sequential probability ratio testing (SPRT) procedure with the polytomous model under some practical constraints in computerized classification testing (CCT), such as methods to control item exposure rate, and to study the effects of other variables, including item information algorithms, test difficulties, item…
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
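For readers unfamiliar with the SPRT procedure named in the abstract, the core idea is a running log-likelihood ratio between two ability hypotheses, compared against Wald's stopping bounds after each item. The sketch below is a generic dichotomous (Rasch-model) version, not the polytomous extension the study investigates; the cut points, error rates, and function names are illustrative assumptions.

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, difficulties, theta0=-0.5, theta1=0.5,
                  alpha=0.05, beta=0.05):
    """Return 'pass', 'fail', or 'continue' after the observed responses."""
    upper = math.log((1 - beta) / alpha)   # cross it: classify as above theta1
    lower = math.log(beta / (1 - alpha))   # cross it: classify as below theta0
    llr = 0.0
    for x, b in zip(responses, difficulties):
        p1, p0 = rasch_p(theta1, b), rasch_p(theta0, b)
        # Add this item's contribution to the log-likelihood ratio.
        llr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "continue"

print(sprt_classify([1] * 10, [0.0] * 10))  # prints "pass"
```

The practical constraints the study examines (item-exposure control, bank characteristics) would sit on top of this loop in the item-selection step, which is not shown here.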
Zhu, Daming; Fan, Meichu – 1999
The convention for selecting starting points (that is, initial items) on a computerized adaptive test (CAT) is to choose as starting points items of medium difficulty for all examinees. Selecting a starting point based on prior information about an individual's ability was first suggested many years ago, but has been believed unimportant provided…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
Linacre, John Michael – 1988
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Cutting Scores
Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? Theoretically, deliberately answering items incorrectly will lower the examinee ability estimate and easy test items will be administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing
Thorndike, Robert L. – 1983
In educational testing, one is concerned to get as much information as possible about a given examinee from each minute of testing time. Maximum information is obtained when the difficulty of each test exercise matches the estimated ability level of the examinee. The goal of adaptive testing is to accomplish this. Adaptive patterns are reviewed…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Latent Trait Theory
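The matched-difficulty principle above has a standard quantitative form: under the Rasch (1PL) model, the Fisher information an item contributes is I(theta) = p(1 - p), which peaks when item difficulty b equals the examinee's ability theta. A minimal numeric illustration (not from the paper itself):

```python
import math

def item_information(theta: float, b: float) -> float:
    """Rasch-model item information: p(1-p), maximal when b == theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))  # probability of a correct answer
    return p * (1.0 - p)

theta = 0.0
for b in (-2.0, -1.0, 0.0, 1.0, 2.0):
    # Information falls off symmetrically as |theta - b| grows.
    print(b, round(item_information(theta, b), 3))
```

This is why an adaptive test that keeps selecting items near the current ability estimate extracts more information per minute of testing time than a fixed-form test.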
Lee, Jo Ann; And Others – 1984
The difficulty of test items administered by paper and pencil was compared with the difficulty of the same items administered by computer. The study was conducted to determine if an interaction exists between mode of test administration and ability. An arithmetic reasoning test was constructed for this study. All examinees had taken the Armed…
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Difficulty Level
Making Use of Response Times in Standardized Tests: Are Accuracy and Speed Measuring the Same Thing?
Scrams, David J.; Schnipke, Deborah L. – 1997
Response accuracy and response speed provide separate measures of performance. Psychometricians have tended to focus on accuracy with the goal of characterizing examinees on the basis of their ability to respond correctly to items from a given content domain. With the advent of computerized testing, response times can now be recorded unobtrusively…
Descriptors: Computer Assisted Testing, Difficulty Level, Item Response Theory, Psychometrics