Stenger, Rachel; Olson, Kristen; Smyth, Jolene D. – Field Methods, 2023
Questionnaire designers use readability measures to ensure that questions can be understood by the target population. The most common is the Flesch-Kincaid Grade Level, but other formulas exist. This article compares six readability measures across 150 questions in a self-administered questionnaire, finding notable variation in…
Descriptors: Readability, Readability Formulas, Computer Assisted Testing, Evaluation Methods
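As context for the comparison above, the Flesch-Kincaid Grade Level is a fixed linear formula over average sentence length and average syllables per word. A minimal Python sketch; the syllable counter here is a rough heuristic, not the tokenizer any particular study uses:

import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, trim a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

print(flesch_kincaid_grade("During the past month, how often did you feel tired?"))

Because each formula weights sentence length and word complexity differently, the same survey question can land several grade levels apart across measures, which is the variation the article documents.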
Hai Li; Wanli Xing; Chenglu Li; Wangda Zhu; Simon Woodhead – Journal of Learning Analytics, 2025
Knowledge tracing (KT) evaluates a student's knowledge state (KS) from their historical problem-solving records by predicting the binary correctness of the next answer. Although widely applied to closed-ended questions, KT lacks a detailed option tracing (OT) method for assessing multiple-choice questions (MCQs). This paper introduces…
Descriptors: Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing, Problem Solving
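The option-tracing model the paper introduces is not reproduced here; for orientation, classic Bayesian Knowledge Tracing (BKT) is the baseline that predicts binary correctness with a two-state update. A minimal sketch, with illustrative slip/guess/learn parameters:

def bkt_update(p_known: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2,
               learn: float = 0.15) -> float:
    """One BKT step: posterior mastery after observing one response,
    plus the chance of learning on this step."""
    if correct:
        num = p_known * (1 - slip)
        den = num + (1 - p_known) * guess
    else:
        num = p_known * slip
        den = num + (1 - p_known) * (1 - guess)
    posterior = num / den
    return posterior + (1 - posterior) * learn

def predict_correct(p_known: float, slip: float = 0.1, guess: float = 0.2) -> float:
    """Probability the next answer is correct given the mastery estimate."""
    return p_known * (1 - slip) + (1 - p_known) * guess

p = 0.3  # prior probability the skill is mastered
for observed in [True, True, False, True]:
    print(f"P(correct next) = {predict_correct(p):.3f}")
    p = bkt_update(p, observed)

BKT only sees right/wrong, which is exactly the limitation for MCQs: every wrong option collapses into a single "incorrect" signal, and option tracing aims to recover that lost information.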
Yurtcu, Meltem; Güzeller, Cem Oktay – Participatory Educational Research, 2021
Items suited to each examinee's own ability level, delivered with the support of computer programs rather than paper-and-pencil tests, may help students reach more accurate results. Computer adaptive tests (CAT), developed on the basis of certain assumptions to this end, aim to create an optimum test for every person taking the exam. It…
Descriptors: Bibliometrics, Computer Assisted Testing, Computer Software, Test Construction
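The adaptation at the core of a CAT is item selection: each next item is typically chosen to maximize Fisher information at the current ability estimate. A minimal sketch under a two-parameter logistic (2PL) IRT model, with an invented item pool:

import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item response function: discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1 - p)

# Hypothetical item pool as (a, b) pairs.
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2)]

theta_hat = 0.3  # current ability estimate
best = max(pool, key=lambda item: information(theta_hat, *item))
print("next item:", best)

After each response the ability estimate is updated and the selection repeats, so every examinee walks a different path through the pool.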
Miller, Ronald Mellado; Andrade, Maureen Snow – Research & Practice in Assessment, 2020
Technology use is increasing in higher education, particularly for test administration. In this study, Capaldi's (1994) sequential theory, which postulates that the specific order of reinforcements and nonreinforcements influences persistence in the face of difficulty or failure, was applied to online multiple choice testing situations in regard…
Descriptors: Computer Assisted Testing, Higher Education, Multiple Choice Tests, Test Format
Ganzfried, Sam; Yusuf, Farzana – Education Sciences, 2018
A problem faced by many instructors is designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are assigned based on a rough approximation of question difficulty and length. For example, for a recent class taught by the author, there were…
Descriptors: Weighted Scores, Test Construction, Student Evaluation, Multiple Choice Tests
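The paper's own weighting scheme is not reproduced here; as a naive baseline for the problem it studies, one can scale instructor-rated difficulty and length into point values that sum to the exam total. All ratings below are invented for illustration:

# Hypothetical per-question (difficulty, length) ratings on a 1-5 scale.
ratings = [(2, 1), (3, 2), (5, 4), (4, 3)]
exam_total = 100

# Naive weight: points proportional to difficulty + length.
raw = [difficulty + length for difficulty, length in ratings]
weights = [exam_total * r / sum(raw) for r in raw]

for i, w in enumerate(weights, 1):
    print(f"Q{i}: {w:.1f} points")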
Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D. – International Journal of Environmental and Science Education, 2016
Computer-based monitoring systems that rely on tests may be the most effective means of knowledge evaluation. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. The analysis of the existing test methods enabled us to conclude that tests with selected…
Descriptors: Expertise, Computer Assisted Testing, Student Evaluation, Knowledge Level
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the "GRE"® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T. – Literacy Research and Instruction, 2013
The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…
Descriptors: Test Validity, Writing Skills, Computer Assisted Testing, Prediction
Davey, Tim; Pommerich, Mary; Thompson, Tony D. – 1999
In computerized adaptive testing (CAT), new or experimental items are frequently administered alongside operational tests to gather the pretest data needed to replenish and replace item pools. The two basic strategies used to combine pretest and operational items are embedding and appending. Variable-length CATs are preferred because of the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Measurement Techniques
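The two strategies the report compares can be sketched directly: appending places pretest items after the operational test, while embedding scatters them through it. Item labels below are invented, and real embedding designs control position and exposure far more carefully:

import random

operational = [f"op{i}" for i in range(1, 11)]
pretest = ["pre1", "pre2", "pre3"]

def append_items(op, pre):
    """Appending: pretest items follow the operational test."""
    return op + pre

def embed_items(op, pre, rng=random):
    """Embedding: pretest items are inserted at random positions."""
    test = list(op)
    for item in pre:
        test.insert(rng.randrange(len(test) + 1), item)
    return test

print(append_items(operational, pretest))
print(embed_items(operational, pretest, random.Random(0)))

Embedding hides which items are unscored but perturbs the operational sequence; appending keeps the operational test intact but risks fatigue and low motivation on the final items, which is the trade-off the pretest data must untangle.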
Halkitis, Perry N.; And Others – 1996
The relationship between test item characteristics and testing time was studied for a computer-administered licensing examination. One objective of the study was to develop a model to predict testing time on the basis of known item characteristics. Response latencies (i.e., the amount of time taken by examinees to read, review, and answer items)…
Descriptors: Computer Assisted Testing, Difficulty Level, Estimation (Mathematics), Licensing Examinations (Professions)
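A latency-prediction model of the kind described can be sketched as ordinary least-squares regression of testing time on known item characteristics. The features and data below are invented, not the study's:

import numpy as np

# Hypothetical item features: word count, difficulty (p-value), exhibit flag.
X = np.array([
    [40, 0.8, 0],
    [95, 0.5, 1],
    [60, 0.6, 0],
    [120, 0.3, 1],
    [55, 0.7, 0],
], dtype=float)
# Observed mean response latencies in seconds (invented).
y = np.array([35.0, 80.0, 50.0, 105.0, 42.0])

# Fit time ~ b0 + b1*words + b2*difficulty + b3*exhibit by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_item = np.array([1.0, 70, 0.55, 1])  # intercept term plus features
print("predicted seconds:", float(new_item @ coef))

Summing such per-item predictions over a planned form gives an estimate of total testing time before any examinee sits the exam, which is the practical payoff of the model.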
Wainer, Howard; And Others – 1990
The initial development of a testlet-based algebra test was previously reported (Wainer and Lewis, 1990). This account provides the details of this excursion into the use of hierarchical testlets and validity-based scoring. A pretest of two 15-item hierarchical testlets was carried out in which examinees' performance on a 4-item subset of each…
Descriptors: Adaptive Testing, Algebra, Comparative Analysis, Computer Assisted Testing
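A hierarchical testlet routes each examinee down a small tree, with the branch chosen by the scored response to the current item. A toy sketch, with invented items and far shallower than a 15-item testlet:

# Hypothetical two-level testlet: each node holds an item and branches
# on the response; a path through the tree is the examinee's route.
testlet = {
    "item": "Q1",
    "right": {"item": "Q2-hard", "right": None, "left": None},
    "left": {"item": "Q2-easy", "right": None, "left": None},
}

def administer(node, answer_fn):
    """Walk the testlet, choosing each branch by the scored response."""
    route = []
    while node:
        answer = answer_fn(node["item"])
        route.append((node["item"], answer))
        node = node["right"] if answer else node["left"]
    return route

# Simulate an examinee who answers every item correctly.
print(administer(testlet, lambda item: True))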
Pine, Steven M.; Weiss, David J. – 1978
This report examines how selection fairness is influenced by the characteristics of a selection instrument in terms of its distribution of item difficulties, level of item discrimination, degree of item bias, and testing strategy. Computer simulation was used in the administration of either a conventional or Bayesian adaptive ability test to a…
Descriptors: Adaptive Testing, Bayesian Statistics, Comparative Testing, Computer Assisted Testing
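A bare-bones version of such a simulation, under a one-parameter logistic model in which one group's items are uniformly harder by a fixed bias; all values are invented and the design is far simpler than the report's:

import math
import random

rng = random.Random(0)

def p_correct(theta: float, b: float) -> float:
    """1PL response probability at ability theta, item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pass_rate(item_bias: float, n: int = 2000, items: int = 20,
              cutoff: float = 0.5) -> float:
    """Fraction of simulees passing when every item is harder by item_bias."""
    passed = 0
    for _ in range(n):
        theta = rng.gauss(0.0, 1.0)
        difficulties = [rng.gauss(0.0, 1.0) + item_bias for _ in range(items)]
        score = sum(rng.random() < p_correct(theta, b) for b in difficulties)
        passed += score / items >= cutoff
    return passed / n

# Group A sees unbiased items; group B's items are 0.5 logits harder.
print("group A pass rate:", pass_rate(0.0))
print("group B pass rate:", pass_rate(0.5))

Comparing pass rates across groups under different item pools and selection strategies is the basic mechanism by which such simulations quantify selection fairness.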