Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 4
Since 2016 (last 10 years): 9
Since 2006 (last 20 years): 23
Descriptor
Adaptive Testing: 42
Difficulty Level: 42
Item Response Theory: 42
Computer Assisted Testing: 38
Test Items: 33
Item Banks: 15
Test Construction: 10
Comparative Analysis: 8
Estimation (Mathematics): 8
Simulation: 7
Ability: 6
Author
De Ayala, R. J.: 2
He, Wei: 2
Kim, Sooyeon: 2
Moses, Tim: 2
Stocking, Martha L.: 2
Wise, Steven L.: 2
Al-A'ali, Mansoor: 1
Andrich, David: 1
Ariel, Adelaide: 1
Berger, Martijn P. F.: 1
Bergstrom, Betty A.: 1
Publication Type
Journal Articles: 27
Reports - Research: 23
Reports - Evaluative: 13
Speeches/Meeting Papers: 9
Reports - Descriptive: 3
Dissertations/Theses -…: 1
ERIC Digests in Full Text: 1
ERIC Publications: 1
Information Analyses: 1
Education Level
Elementary Secondary Education: 5
Elementary Education: 2
High Schools: 2
Higher Education: 2
Secondary Education: 2
Early Childhood Education: 1
Grade 1: 1
Grade 11: 1
Grade 12: 1
Grade 2: 1
Kindergarten: 1
Location
Australia: 1
California: 1
Florida: 1
Idaho: 1
Indonesia: 1
Nevada: 1
New York: 1
North Carolina: 1
Texas: 1
Turkey: 1
Utah: 1
Wyse, Adam E.; McBride, James R. – Journal of Educational Measurement, 2021
A key consideration when giving any computerized adaptive test (CAT) is how much adaptation is present when the test is used in practice. This study introduces a new framework to measure the amount of adaptation of Rasch-based CATs based on looking at the differences between the selected item locations (Rasch item difficulty parameters) of the…
Descriptors: Item Response Theory, Computer Assisted Testing, Adaptive Testing, Test Items
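The general idea in this abstract, comparing the Rasch difficulties of administered items against the examinee's interim ability estimates, can be sketched in a few lines. This is a minimal illustration under assumptions, not the authors' actual framework: the function `adaptation_summary` and the specific statistics it reports (mean absolute difficulty-to-theta gap, spread of selected difficulties) are hypothetical stand-ins for the measures in the paper.

```python
import statistics

def adaptation_summary(selected_b, theta_trace):
    """Hypothetical adaptation summary for a Rasch-based CAT.

    selected_b  -- Rasch difficulty of each administered item, in order
    theta_trace -- interim ability estimate at the time each item was picked
    """
    # Gap between each item's difficulty and the interim theta when chosen;
    # a well-adapting CAT keeps these gaps small.
    gaps = [abs(b - t) for b, t in zip(selected_b, theta_trace)]
    return {
        "mean_abs_gap": statistics.mean(gaps),
        # Spread of the difficulties actually used; a fixed (non-adaptive)
        # form would show little relation between this and the examinee.
        "difficulty_sd": statistics.stdev(selected_b),
    }

summary = adaptation_summary([0.0, 0.5, 0.8], [0.1, 0.6, 0.7])
```

A low `mean_abs_gap` here would indicate that item selection tracked the interim ability estimates closely.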
The Effect of Item Pools of Different Strengths on the Test Results of Computerized-Adaptive Testing
Kezer, Fatih – International Journal of Assessment Tools in Education, 2021
Item response theory provides various important advantages for exams administered, or to be administered, digitally. For computerized adaptive tests to make valid and reliable predictions supported by IRT, good-quality item pools should be used. This study examines how adaptive test applications vary in item pools which consist of items…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Yu, Albert; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2023
We propose a new item response theory growth model with item-specific learning parameters, or ISLP, and two variations of this model. In the ISLP model, either items or blocks of items have their own learning parameters. This model may be used to improve the efficiency of learning in a formative assessment. We show ways that the ISLP model's…
Descriptors: Item Response Theory, Learning, Markov Processes, Monte Carlo Methods
Sahin, Melek Gulsah – International Journal of Assessment Tools in Education, 2020
Computer Adaptive Multistage Testing (ca-MST), which takes advantage of computer technology and the adaptive test format, is widely used and is now a popular topic in assessment and evaluation. This study aims at analyzing the effect of different panel designs, module lengths, and different sequences of the a parameter value across stages and change in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Response Theory
He, Wei – NWEA, 2022
To ensure that student academic growth in a subject area is accurately captured, it is imperative that the underlying scale remains stable over time. As item parameter stability constitutes one of the factors that affects scale stability, NWEA® periodically conducts studies to check for the stability of the item parameter estimates for MAP®…
Descriptors: Achievement Tests, Test Items, Test Reliability, Academic Achievement
Istiyono, Edi; Dwandaru, Wipsar Sunu Brams; Lede, Yulita Adelfin; Rahayu, Farida; Nadapdap, Amipa – International Journal of Instruction, 2019
The objective of this study was to develop a Physics critical thinking skill test using computerized adaptive testing (CAT) based on item response theory (IRT). This was a development study using the 4-D model (define, design, develop, and disseminate). The content validity of the items was established using Aiken's V. The test trial involved 252 students…
Descriptors: Critical Thinking, Thinking Skills, Cognitive Tests, Physics
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This inquiry is an investigation of item response theory (IRT) proficiency estimators' accuracy under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy
Shamir, Haya – Journal of Educational Multimedia and Hypermedia, 2018
Assessing students' emerging literacy skills is crucial for identifying areas where a child may be falling behind and can lead directly to an increased chance of reading success. The Waterford Assessment of Core Skills (WACS), a computerized adaptive test of early literacy for students in prekindergarten through 2nd grade, addresses this need.…
Descriptors: Computer Assisted Testing, Adaptive Testing, Reading Tests, Preschool Children
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
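The monitoring idea described here, sequentially testing whether an item's statistical behavior has changed during CAT administration, can be sketched as a running check on response residuals. This is a hedged illustration, not the procedure from the article: the function `drift_flag`, the standardized-residual statistic, and the fixed flagging threshold are all assumptions standing in for the paper's hypothesis tests.

```python
import math

def rasch_p(theta, b):
    """Rasch model probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def drift_flag(responses, thetas, b, threshold=3.0):
    """Hypothetical sequential check for item parameter drift.

    Accumulates standardized residuals between observed 0/1 responses and
    the Rasch-predicted probabilities; flags the item if the running sum
    drifts past the threshold in either direction.
    """
    s = 0.0
    for x, theta in zip(responses, thetas):
        p = rasch_p(theta, b)
        s += (x - p) / math.sqrt(p * (1.0 - p))
        if abs(s) > threshold:
            return True  # behavior no longer consistent with calibrated b
    return False
```

For example, an item calibrated at b = 0 that is suddenly answered correctly by every examinee at theta = 0 (where the model predicts 50%) accumulates positive residuals and is flagged, while responses that alternate as predicted are not.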
Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng – Autism: The International Journal of Research and Practice, 2016
The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…
Descriptors: Autism, Pervasive Developmental Disorders, Computer Assisted Testing, Adaptive Testing
Özyurt, Hacer; Özyurt, Özcan – Eurasian Journal of Educational Research, 2015
Problem Statement: Learning-teaching activities bring along the need to determine whether they achieve their goals. Thus, multiple choice tests addressing the same set of questions to all are frequently used. However, this traditional assessment and evaluation form contrasts with modern education, where individual learning characteristics are…
Descriptors: Probability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
He, Wei; Reckase, Mark D. – Educational and Psychological Measurement, 2014
For computerized adaptive tests (CATs) to work well, they must have an item pool with sufficient numbers of good quality items. Many researchers have pointed out that, in developing item pools for CATs, not only is the item pool size important but also the distribution of item parameters and practical considerations such as content distribution…
Descriptors: Item Banks, Test Length, Computer Assisted Testing, Adaptive Testing
Weng, Ting-Sheng – Journal of Educational Technology Systems, 2012
This research applies multimedia technology to design a dynamic item generation method that can adaptively adjust the difficulty level of items according to the level of the test taker. The method is based on interactive testing software developed in Flash ActionScript, and provides a testing solution for users by automatically distributing items of…
Descriptors: Feedback (Response), Difficulty Level, Educational Technology, Educational Games
Song, Tian – ProQuest LLC, 2010
This study investigates the effect of fitting a unidimensional IRT model to multidimensional data in content-balanced computerized adaptive testing (CAT). Unconstrained CAT with the maximum information item selection method is chosen as the baseline, and the performances of three content balancing procedures, the constrained CAT (CCAT), the…
Descriptors: Adaptive Testing, Difficulty Level, Item Analysis, Item Response Theory