Showing 31 to 45 of 115 results
Stone, Elizabeth; Davey, Tim – Educational Testing Service, 2011
There has been an increased interest in developing computer-adaptive testing (CAT) and multistage assessments for K-12 accountability assessments. The move to adaptive testing has been met with some resistance by those in the field of special education who express concern about routing of students with divergent profiles (e.g., some students with…
Descriptors: Disabilities, Adaptive Testing, Accountability, Computer Assisted Testing
Peer reviewed
Klinkenberg, S.; Straatemeier, M.; van der Maas, H. L. J. – Computers & Education, 2011
In this paper we present a model for computerized adaptive practice and monitoring. This model is used in the Maths Garden, a web-based monitoring system, which includes a challenging web environment for children to practice arithmetic. Using a new item response model based on the Elo (1978) rating system and an explicit scoring rule, estimates of…
Descriptors: Test Items, Reaction Time, Scoring, Probability
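A minimal sketch of the Elo-style update the abstract describes, assuming fixed K factors and a `score` already produced by an explicit scoring rule (the paper combines accuracy and response time; the function names and constants here are illustrative, not the Maths Garden implementation):

```python
import math

def expected_score(theta, beta):
    """Expected result under a logistic (Rasch/Elo-style) model."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

def elo_update(theta, beta, score, k_person=0.4, k_item=0.4):
    """One on-the-fly rating update after a scored response.

    `score` is the observed result in [0, 1]. Person and item ratings
    move in opposite directions, so item difficulties are re-estimated
    continuously while children practice.
    """
    e = expected_score(theta, beta)
    return theta + k_person * (score - e), beta - k_item * (score - e)

# Example: a fully correct answer (score 1.0) on a slightly harder item.
theta, beta = elo_update(theta=0.0, beta=0.5, score=1.0)
```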
Peer reviewed
Kingsbury, G. Gage; Wise, Steven L. – Journal of Applied Testing Technology, 2011
Development of adaptive tests used in K-12 settings requires the creation of stable measurement scales to measure the growth of individual students from one grade to the next, and to measure change in groups from one year to the next. Accountability systems like No Child Left Behind require stable measurement scales so that accountability has…
Descriptors: Elementary Secondary Education, Adaptive Testing, Academic Achievement, Measures (Individuals)
Peer reviewed
Wauters, K.; Desmet, P.; Van den Noortgate, W. – Journal of Computer Assisted Learning, 2010
The popularity of intelligent tutoring systems (ITSs) is increasing rapidly. In order to make learning environments more efficient, researchers have been exploring the possibility of an automatic adaptation of the learning environment to the learner or the context. One of the possible adaptation techniques is adaptive item sequencing by matching…
Descriptors: Knowledge Level, Adaptive Testing, Test Items, Item Response Theory
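One standard way to realize the adaptive item sequencing the abstract mentions is maximum-information selection; a sketch under the Rasch model, with a hypothetical item bank (this is the general IRT technique, not necessarily the authors' procedure):

```python
import math

def item_information(theta, beta):
    """Rasch item information P(1-P); it peaks where difficulty equals ability."""
    p = 1.0 / (1.0 + math.exp(-(theta - beta)))
    return p * (1.0 - p)

def next_item(theta, bank, administered):
    """Return the index of the unadministered item most informative at theta."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, bank[i]))

bank = [-1.5, -0.5, 0.0, 0.4, 1.2]          # hypothetical logit difficulties
print(next_item(theta=0.3, bank=bank, administered={2}))   # -> 3
```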
Peer reviewed
Chatzopoulou, D. I.; Economides, A. A. – Journal of Computer Assisted Learning, 2010
This paper presents Programming Adaptive Testing (PAT), a Web-based adaptive testing system for assessing students' programming knowledge. PAT was used in two high school programming classes by 73 students. The question bank of PAT is composed of 443 questions, each classified into one of three difficulty levels. In PAT, the levels of…
Descriptors: Student Evaluation, Prior Learning, Programming, High School Students
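The abstract does not spell out PAT's movement rule, but level-based adaptation with three difficulty tiers typically works as a staircase; a plausible sketch, with the up-on-correct/down-on-incorrect rule assumed rather than taken from the paper:

```python
def next_level(level, correct, n_levels=3):
    """Assumed staircase rule: step up a level after a correct answer,
    down after an incorrect one, clamped to the available levels 1..n."""
    return max(1, min(n_levels, level + (1 if correct else -1)))

level = 2                                  # start at medium difficulty
for correct in (True, True, False):
    level = next_level(level, correct)     # 2 -> 3 -> 3 (clamped) -> 2
```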
Peer reviewed
Vuk, Jasna; Morse, David T. – Research in the Schools, 2009
In this study we observed college students' behavior on two self-tailored, multiple-choice exams. Self-tailoring was defined as an option to omit up to five items from being scored on an exam. Participants, 80 undergraduate college students enrolled in two sections of an educational psychology course, statistically significantly improved their…
Descriptors: College Students, Educational Psychology, Academic Achievement, Correlation
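Scoring a self-tailored exam as defined in the abstract is mechanically simple; a sketch, with the data layout (item id to right/wrong) invented for illustration:

```python
def self_tailored_score(responses, omitted, max_omit=5):
    """Score an exam on which the student may omit up to `max_omit`
    self-chosen items; `responses` maps item id -> 1 (right) or 0 (wrong)."""
    if len(omitted) > max_omit:
        raise ValueError(f"at most {max_omit} items may be omitted")
    scored = [r for i, r in responses.items() if i not in omitted]
    return sum(scored), len(scored)

# 5-item exam, items 2 and 4 omitted from scoring: 3 right of 3 scored.
print(self_tailored_score({1: 1, 2: 0, 3: 1, 4: 0, 5: 1}, omitted={2, 4}))
```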
Peer reviewed
Cheng, Ying; Chang, Hua-Hua; Yi, Qing – Applied Psychological Measurement, 2007
Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…
Descriptors: Adaptive Testing, Item Analysis, Computer Assisted Testing, Item Response Theory
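Of the techniques named, the modified multinomial model is the easiest to sketch: the next item's content area is drawn with probabilities proportional to each area's unmet blueprint quota, so lagging areas are favored. A simplified sketch with hypothetical quotas, not the paper's full procedure:

```python
import random

def next_content_area(targets, given):
    """MMM-style draw of the next content area.

    `targets` maps area -> items required by the blueprint; `given` maps
    area -> items administered so far. Remaining quotas are the weights.
    """
    remaining = {a: targets[a] - given.get(a, 0) for a in targets}
    areas = [a for a, r in remaining.items() if r > 0]
    return random.choices(areas, weights=[remaining[a] for a in areas])[0]

blueprint = {"algebra": 10, "geometry": 6, "data": 4}   # hypothetical quotas
print(next_content_area(blueprint, given={"algebra": 3}))
```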
Peer reviewed
Stocking, Martha L.; Lewis, Charles – Journal of Educational and Behavioral Statistics, 1998
Ensuring item and pool security in a continuous testing environment is explored through a new method of controlling exposure rate of items conditional on ability level in computerized testing. Properties of this conditional control on exposure rate, when used in conjunction with a particular adaptive testing algorithm, are explored using simulated…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Difficulty Level
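A simplified Sympson-Hetter-style filter, made conditional on an ability stratum in the spirit of the abstract (this is a sketch of the general idea, not the exact Stocking-Lewis procedure; the parameters and two-bin layout are hypothetical):

```python
import random

def filtered_selection(ranked_items, k, theta_bin):
    """Walk the information-ranked candidates and administer item i with
    probability k[i][theta_bin], its exposure-control parameter for this
    ability stratum; otherwise move on to the next candidate.

    In practice the k parameters are tuned iteratively in simulation so
    that conditional exposure rates stay below a target such as 0.2.
    """
    for item in ranked_items:
        if random.random() <= k[item][theta_bin]:
            return item
    return ranked_items[-1]          # fallback if every candidate is skipped

k = {"A": [1.0, 0.2], "B": [0.8, 1.0]}   # hypothetical parameters, 2 bins
print(filtered_selection(["A", "B"], k, theta_bin=1))
```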
Hambleton, Ronald K.; Sireci, Stephen G.; Swaminathan, H.; Xing, Dehui; Rizavi, Saba – 2003
The purposes of this research study were to develop and field test anchor-based judgmental methods for enabling test specialists to estimate item difficulty statistics. The study consisted of three related field tests. In each, researchers worked with six Law School Admission Test (LSAT) test specialists and one or more of the LSAT subtests. The…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Difficulty Level
Zhu, Daming; Fan, Meichu – 1999
The convention for selecting starting points (that is, initial items) on a computerized adaptive test (CAT) is to choose as starting points items of medium difficulty for all examinees. Selecting a starting point based on prior information about an individual's ability was first suggested many years ago, but has been believed unimportant provided…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
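Both starting-point conventions the abstract contrasts fit in a few lines; a sketch with a hypothetical bank, where the informed start simply targets the prior ability estimate instead of medium difficulty:

```python
def starting_item(bank, prior_theta=None):
    """Pick a CAT entry point: the conventional medium-difficulty item,
    or, if prior information about the examinee is available, the item
    nearest that prior estimate (difficulties on a logit scale)."""
    target = 0.0 if prior_theta is None else prior_theta
    return min(range(len(bank)), key=lambda i: abs(bank[i] - target))

bank = [-2.0, -1.0, 0.0, 1.0, 2.0]           # hypothetical difficulties
print(starting_item(bank))                   # conventional start: index 2
print(starting_item(bank, prior_theta=1.3))  # informed start: index 3
```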
Peer reviewed
Stocking, Martha L. – Journal of Educational and Behavioral Statistics, 1996
An alternative method for scoring adaptive tests, based on number-correct scores, is explored and compared with a method that relies more directly on item response theory. Using the number-correct score with necessary adjustment for intentional differences in adaptive test difficulty is a statistically viable scoring method. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Response Theory
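One common way to adjust a number-correct score for intentional differences in adaptive test difficulty is to invert the test characteristic curve of the items actually administered; a Rasch sketch of that idea, not necessarily the paper's exact adjustment:

```python
import math

def expected_number_correct(theta, betas):
    """Test characteristic curve for the items this examinee saw (Rasch)."""
    return sum(1.0 / (1.0 + math.exp(-(theta - b))) for b in betas)

def theta_from_number_correct(n_correct, betas, lo=-6.0, hi=6.0):
    """Invert the TCC by bisection: find the ability at which the expected
    number correct equals the observed score, adjusting for how hard this
    particular adaptive test happened to be."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if expected_number_correct(mid, betas) < n_correct:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

betas = [-0.5, 0.0, 0.3, 0.8, 1.1]    # hypothetical administered items
print(theta_from_number_correct(3, betas))
```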
Peer reviewed
Mosenthal, Peter B. – American Educational Research Journal, 1998
The extent to which variables from a previous study (P. Mosenthal, 1996) on document processing influenced difficulty on 165 tasks from the prose scales of five national adult literacy surveys was studied. Three process variables accounted for 78% of the variance when prose task difficulty was defined using level scores. Implications for computer…
Descriptors: Adaptive Testing, Adults, Computer Assisted Testing, Definitions
Wheeler, Patricia H. – 1995
When individuals are given tests that are too hard or too easy, the resulting scores are likely to be poor estimates of their performance. To get valid and accurate test scores that provide meaningful results, one should use functional-level testing (FLT). FLT is the practice of administering to an individual a version of a test with a difficulty…
Descriptors: Adaptive Testing, Difficulty Level, Educational Assessment, Performance
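Functional-level testing as defined in the abstract amounts to matching a test form's difficulty range to the examinee's estimated level; a sketch with invented form names, ranges, and a grade-equivalent scale:

```python
def pick_form(estimated_level, forms):
    """Choose the test form whose target range covers the examinee's
    estimated functional level; fall back to the nearest form otherwise.

    `forms` maps a form name to an inclusive (low, high) range on
    whatever scale the estimate uses (grade equivalents here)."""
    for name, (low, high) in forms.items():
        if low <= estimated_level <= high:
            return name
    return min(forms, key=lambda n: min(abs(estimated_level - forms[n][0]),
                                        abs(estimated_level - forms[n][1])))

forms = {"Level A": (1.0, 3.9), "Level B": (4.0, 6.9), "Level C": (7.0, 9.9)}
print(pick_form(5.2, forms))    # -> "Level B"
```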
Linacre, John Michael – 1988
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Cutting Scores
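A self-adjusting test of the simple, teacher-implementable kind the abstract points to might look like the sketch below; the step size, test length, and the log-odds ability estimate are assumptions for illustration, not Linacre's published algorithm:

```python
import math, random

def classroom_cat(ask, n_items=20, step=0.5):
    """Self-adjusting test: `ask(d)` administers an item near logit
    difficulty d and returns True/False. Difficulty steps up after a
    right answer and down after a wrong one; the final estimate is the
    mean difficulty encountered plus the log-odds of right to wrong."""
    d, right, wrong, seen = 0.0, 0, 0, []
    for _ in range(n_items):
        seen.append(d)
        if ask(d):
            right, d = right + 1, d + step
        else:
            wrong, d = wrong + 1, d - step
    # Guard the log-odds when every answer went the same way.
    return sum(seen) / len(seen) + math.log(max(right, 0.5) / max(wrong, 0.5))

# Example with a simulated examinee of true ability 1.0 logits:
theta_hat = classroom_cat(lambda d: random.random() < 1 / (1 + math.exp(d - 1.0)))
```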
Reese, Lynda M.; Schnipke, Deborah L. – 1999
A two-stage design provides a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and based on their scores, they are routed to tests of different difficulty levels in the second stage. This design provides some of the benefits of standard computer adaptive testing (CAT), such as increased…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
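The routing step of a two-stage design reduces to cut scores on the stage-one test; a sketch with hypothetical cuts and form names:

```python
def route(stage1_score, cuts=(8, 14)):
    """Route to a second-stage form from the stage-one score. The cut
    scores are hypothetical: below 8 -> easy, 8-13 -> medium, 14+ -> hard.
    Every examinee takes the same stage-one test, so the adaptation is
    coarse but needs no item-by-item selection machinery."""
    low, high = cuts
    if stage1_score < low:
        return "easy form"
    if stage1_score < high:
        return "medium form"
    return "hard form"

print(route(10))   # -> "medium form"
```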