Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 2 |
Since 2006 (last 20 years) | 4 |
Descriptor
Adaptive Testing | 13 |
Difficulty Level | 13 |
Scores | 13 |
Computer Assisted Testing | 10 |
Test Items | 8 |
Test Construction | 5 |
Item Response Theory | 4 |
Scoring | 4 |
Higher Education | 3 |
College Students | 2 |
Comparative Testing | 2 |
Author
Wise, Steven L. | 2 |
Andrich, David | 1 |
Bridgeman, Brent | 1 |
Cline, Frederick | 1 |
Dirkx, K. J. H. | 1 |
Farrokhlagha Heidari | 1 |
Gerstenberg, Friederike X. R. | 1 |
Jarodzka, H. | 1 |
Kim, Sooyeon | 1 |
Lord, Frederic M. | 1 |
Maliheh Izadi | 1 |
Publication Type
Journal Articles | 9 |
Reports - Research | 9 |
Speeches/Meeting Papers | 3 |
Reports - Evaluative | 2 |
Guides - Non-Classroom | 1 |
Education Level
High Schools | 1 |
Higher Education | 1 |
Assessments and Surveys
Raven Progressive Matrices | 1 |
Mehri Izadi; Maliheh Izadi; Farrokhlagha Heidari – Education and Information Technologies, 2024
In today's environment of growing class sizes due to the prevalence of online and e-learning systems, providing one-to-one instruction and feedback has become a challenging task for teachers. However, the dialectical integration of instruction and assessment into a seamless and dynamic activity can provide a continuous flow of assessment…
Descriptors: Adaptive Testing, Computer Assisted Testing, English (Second Language), Second Language Learning
Designing Computer-Based Tests: Design Guidelines from Multimedia Learning Studied with Eye Tracking
Dirkx, K. J. H.; Skuballa, I.; Manastirean-Zijlstra, C. S.; Jarodzka, H. – Instructional Science: An International Journal of the Learning Sciences, 2021
The use of computer-based tests (CBTs), for both formative and summative purposes, has greatly increased in recent years. One major advantage of CBTs is the easy integration of multimedia. It is unclear, though, how to design such CBT environments with multimedia. The purpose of the current study was to examine whether guidelines for designing…
Descriptors: Test Construction, Computer Assisted Testing, Multimedia Instruction, Eye Movements
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
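
The report itself is not reproduced here, but the 2-stage design described in the abstract is straightforward to mock up. Below is a minimal Python sketch, assuming Rasch-model responses, invented module difficulties and routing cutoffs, and a crude grid-based maximum-likelihood estimator; it only illustrates how estimation bias and error of an IRT proficiency estimator can be examined under a one-adaptation multistage design, and does not reproduce the ETS study.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, b):
    """Rasch model: probability of a correct response given ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def administer(theta, difficulties):
    """Simulate dichotomous responses for one examinee on a set of items."""
    return (rng.random(len(difficulties)) < p_correct(theta, difficulties)).astype(int)

def ml_theta(responses, difficulties, grid=np.linspace(-4, 4, 801)):
    """Crude grid-search maximum-likelihood proficiency estimate."""
    p = p_correct(grid[:, None], difficulties[None, :])
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

# Hypothetical 2-stage MST: one routing module at Stage 1, three modules at Stage 2.
stage1 = np.linspace(-1.0, 1.0, 10)
stage2 = {"easy":   np.linspace(-2.5, -0.5, 10),
          "medium": np.linspace(-1.0,  1.0, 10),
          "hard":   np.linspace( 0.5,  2.5, 10)}

def run_mst(theta):
    """Administer the routing module, route on its number-correct score, then estimate theta."""
    r1 = administer(theta, stage1)
    nc = r1.sum()                      # routing cutoffs below are arbitrary for the sketch
    module = "easy" if nc <= 3 else ("hard" if nc >= 7 else "medium")
    r2 = administer(theta, stage2[module])
    return ml_theta(np.concatenate([r1, r2]), np.concatenate([stage1, stage2[module]]))

# Bias and RMSE of the estimator at a few true proficiency levels.
for true_theta in (-1.5, 0.0, 1.5):
    est = np.array([run_mst(true_theta) for _ in range(2000)])
    bias = est.mean() - true_theta
    rmse = np.sqrt(((est - true_theta) ** 2).mean())
    print(f"theta={true_theta:+.1f}  bias={bias:+.3f}  RMSE={rmse:.3f}")
```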
Ortner, Tuulia M.; Weisskopf, Eva; Gerstenberg, Friederike X. R. – European Journal of Psychology of Education, 2013
We investigated students' metacognitive experiences with regard to feelings of difficulty (FD), feelings of satisfaction (FS), and estimate of effort (EE), employing either computerized adaptive testing (CAT) or computerized fixed item testing (FIT). In an experimental approach, 174 students in grades 10 to 13 were tested either with a CAT or a…
Descriptors: Adaptive Testing, Feedback (Response), High School Students, Metacognition

Stocking, Martha L. – Journal of Educational and Behavioral Statistics, 1996
An alternative method for scoring adaptive tests, based on number-correct scores, is explored and compared with a method that relies more directly on item response theory. Using the number-correct score with necessary adjustment for intentional differences in adaptive test difficulty is a statistically viable scoring method. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Response Theory
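
One common way to adjust number-correct scores for intentional differences in adaptive test difficulty is to invert the test characteristic curve of the items each examinee actually received. The Python sketch below illustrates that idea under the Rasch model with invented item difficulties; it is a generic illustration and may differ in detail from the scoring method Stocking evaluates.

```python
import numpy as np

def p_correct(theta, b):
    """Rasch probability of success at ability theta on an item of difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def expected_number_correct(theta, difficulties):
    """Test characteristic curve: expected number-correct score at ability theta."""
    return p_correct(theta, np.asarray(difficulties)).sum()

def theta_from_number_correct(nc, difficulties, lo=-4.0, hi=4.0, tol=1e-4):
    """Adjusted number-correct scoring: the theta at which the expected number correct
    on the items this examinee actually received equals the observed raw score
    (bisection on the monotone test characteristic curve)."""
    n = len(difficulties)
    nc = min(max(nc, 0.5), n - 0.5)    # keep away from the zero / perfect-score boundaries
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if expected_number_correct(mid, difficulties) < nc:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Two examinees with the same raw score of 7, but adaptively assigned item sets of
# different difficulty (difficulty values are invented for illustration).
easier_set = [-1.5, -1.2, -1.0, -0.8, -0.5, -0.3, 0.0, 0.2, 0.4, 0.6]
harder_set = [-0.2,  0.0,  0.3,  0.5,  0.8,  1.0, 1.2, 1.5, 1.8, 2.0]

print(theta_from_number_correct(7, easier_set))  # lower adjusted score
print(theta_from_number_correct(7, harder_set))  # higher adjusted score for the same raw 7
```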

Mosenthal, Peter B. – American Educational Research Journal, 1998
The extent to which variables from a previous study (P. Mosenthal, 1996) on document processing influenced difficulty on 165 tasks from the prose scales of five national adult literacy scales was studied. Three process variables accounted for 78% of the variance when prose task difficulty was defined using level scores. Implications for computer…
Descriptors: Adaptive Testing, Adults, Computer Assisted Testing, Definitions

Wheeler, Patricia H. – 1995
When individuals are given tests that are too hard or too easy, the resulting scores are likely to be poor estimates of their performance. To get valid and accurate test scores that provide meaningful results, one should use functional-level testing (FLT). FLT is the practice of administering to an individual a version of a test with a difficulty…
Descriptors: Adaptive Testing, Difficulty Level, Educational Assessment, Performance
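
As a concrete illustration of functional-level testing, the sketch below picks, from a set of hypothetical test forms, the one whose target difficulty is closest to an examinee's estimated functional level; the form labels and difficulty values are invented, not drawn from Wheeler's guide.

```python
# Hypothetical test forms and their target difficulty levels (e.g., grade-equivalent units).
FORMS = {"Level A": 2.0, "Level B": 4.0, "Level C": 6.0, "Level D": 8.0}

def choose_form(estimated_functional_level, forms=FORMS):
    """Functional-level testing: administer the form whose target difficulty is closest
    to the examinee's estimated functional level, so the resulting score is not a
    floor or ceiling artifact of a test that is far too hard or too easy."""
    return min(forms, key=lambda name: abs(forms[name] - estimated_functional_level))

print(choose_form(5.2))  # -> "Level C" under these invented values
```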
Bridgeman, Brent; Cline, Frederick – Journal of Educational Measurement, 2004
Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that about half of the examinees must guess on the final six questions of the analytical section of the…
Descriptors: Guessing (Tests), Timed Tests, Adaptive Testing, Computer Assisted Testing
Wise, Steven L.; And Others – 1997
The degree to which item review on a computerized adaptive test (CAT) could be used by examinees to inflate their scores artificially was studied. G. G. Kingsbury (1996) described a strategy in which examinees could use the changes in item difficulty during a CAT to determine which items' answers are incorrect and should be changed during item…
Descriptors: Achievement Gains, Adaptive Testing, College Students, Computer Assisted Testing
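
The review strategy attributed to Kingsbury (1996) in this abstract can be stated in a few lines of code: because a typical CAT moves to a harder item after a correct answer and an easier item after an incorrect one, a drop in perceived difficulty flags the preceding answer as probably wrong. The sketch below is a simplified rendering of that idea (it assumes the examinee can judge relative item difficulty accurately), not the procedure Wise and colleagues actually tested.

```python
def flag_answers_to_review(perceived_difficulties):
    """Review strategy described in the abstract above (after Kingsbury, 1996):
    a typical CAT presents a harder item after a correct answer and an easier item
    after an incorrect one, so a drop in difficulty suggests the preceding answer
    was wrong and is a candidate to change during item review.

    `perceived_difficulties` lists item difficulties in administration order;
    returns the 0-based positions of answers the strategy would revisit."""
    return [i for i in range(len(perceived_difficulties) - 1)
            if perceived_difficulties[i + 1] < perceived_difficulties[i]]

# Difficulty rises after items 0-2, drops after items 2 and 4 (invented sequence).
print(flag_answers_to_review([0.0, 0.5, 1.0, 0.6, 1.1, 0.7, 1.2]))  # -> [2, 4]
```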

Styles, Irene; Andrich, David – Educational and Psychological Measurement, 1993
This paper describes the use of the Rasch model to help implement computerized administration of the standard and advanced forms of Raven's Progressive Matrices (RPM), to compare relative item difficulties, and to convert scores between the standard and advanced forms. The sample consisted of 95 girls and 95 boys in Australia. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
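
For readers unfamiliar with the model the abstract invokes, the dichotomous Rasch model places person abilities and item difficulties on a single scale:

```latex
% Dichotomous Rasch model: probability that person n, with ability \theta_n,
% answers item i, with difficulty \delta_i, correctly.
\[
  \Pr\{X_{ni} = 1 \mid \theta_n, \delta_i\}
    = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}
\]
```

Because the standard and advanced RPM items are calibrated onto this common scale, their relative difficulties can be compared directly, and a raw score on one form can be mapped to an expected raw score on the other, for example through each form's test characteristic curve.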
Lord, Frederic M. – 1971
Some stochastic approximation procedures are considered in relation to the problem of choosing a sequence of test questions to accurately estimate a given examinee's standing on a psychological dimension. Illustrations are given evaluating certain procedures in a specific context. (Author/CK)
Descriptors: Academic Ability, Adaptive Testing, Computer Programs, Difficulty Level
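
Lord's report is only abstracted here, but the general family of procedures it refers to (Robbins-Monro style stochastic approximation) is easy to sketch: raise the next item's difficulty after a correct response, lower it after an incorrect one, and shrink the step size as the test proceeds, so the administered difficulty converges toward the examinee's ability. The Python sketch below assumes Rasch-model responses and an invented 1/k step schedule; it illustrates the general technique, not Lord's specific procedures.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_correct(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def stochastic_approximation_cat(true_theta, n_items=40, start_difficulty=0.0, step0=2.0):
    """Robbins-Monro style item selection: raise the next item's difficulty after a
    correct response, lower it after an incorrect one, with a step that shrinks as 1/k.
    The difficulty level the sequence settles on serves as the ability estimate."""
    b = start_difficulty
    for k in range(1, n_items + 1):
        correct = rng.random() < p_correct(true_theta, b)
        b += (step0 / k) if correct else -(step0 / k)
    return b

for theta in (-1.0, 0.0, 1.5):
    estimates = [stochastic_approximation_cat(theta) for _ in range(500)]
    print(f"true theta {theta:+.1f}: mean estimate {np.mean(estimates):+.2f}")
```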
Vispoel, Walter P.; And Others – 1992
The effects of review options (the opportunity for examinees to review and change answers) on the magnitude, reliability, efficiency, and concurrent validity of scores obtained from three types of computerized vocabulary tests (fixed item, adaptive, and self-adapted) were studied. Subjects were 97 college students at a large midwestern university…
Descriptors: Adaptive Testing, College Students, Comparative Testing, Computer Assisted Testing

Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT)--students choose the difficulty level of their test items--was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level