Publication Date
  In 2025: 0
  Since 2024: 1
  Since 2021 (last 5 years): 3
  Since 2016 (last 10 years): 8
  Since 2006 (last 20 years): 69
Author
  Alonzo, Julie: 12
  Tindal, Gerald: 12
  Lai, Cheng Fei: 7
  Jiao, Hong: 5
  Irvin, P. Shawn: 4
  Lai, Cheng-Fei: 4
  Park, Bitnara Jasmine: 4
  Wang, Shudong: 4
  He, Wei: 3
  Thum, Yeow Meng: 2
  Abdel-Aal, Radwan E.: 1
Education Level
  Elementary Secondary Education: 69
  Elementary Education: 20
  Grade 6: 7
  Secondary Education: 6
  Grade 3: 5
  Grade 4: 5
  Grade 5: 5
  High Schools: 5
  Higher Education: 5
  Grade 8: 4
  Postsecondary Education: 4
Location
  Oregon: 7
  Taiwan: 3
  Australia: 2
  China: 1
  Cyprus: 1
  Florida: 1
  Hong Kong: 1
  Ireland: 1
  North Carolina: 1
  Singapore: 1
  United Kingdom: 1
Laws, Policies, & Programs
  Individuals with Disabilities…: 7
Wyse, Adam E.; McBride, James R. – Measurement: Interdisciplinary Research and Perspectives, 2022
A common practical challenge is how to assign ability estimates to all-incorrect and all-correct response patterns when using item response theory (IRT) models and maximum likelihood estimation (MLE), since the ability estimates for these response patterns equal -∞ or +∞. This article uses a simulation study and data from an operational K-12…
Descriptors: Scores, Adaptive Testing, Computer Assisted Testing, Test Length
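A minimal sketch of the estimation problem described above, in Python with made-up 2PL item parameters (nothing here comes from the article's operational data): for an all-correct response pattern the log-likelihood keeps increasing in theta, so the maximum likelihood estimate runs off toward +∞, and an all-incorrect pattern behaves symmetrically toward -∞, which is why a finite score has to be assigned by some other rule.

import numpy as np

# Hypothetical 2PL item parameters (a = discrimination, b = difficulty).
a = np.array([1.0, 1.2, 0.8, 1.5])
b = np.array([-0.5, 0.0, 0.5, 1.0])

def loglik(theta, responses):
    # Log-likelihood of a response pattern under the 2PL model.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

all_correct = np.ones_like(a)
for theta in [0.0, 2.0, 4.0, 8.0]:
    # The values keep rising toward 0 as theta grows, so no finite maximum exists.
    print(theta, loglik(theta, all_correct))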
He, Wei – NWEA, 2022
To ensure that student academic growth in a subject area is accurately captured, it is imperative that the underlying scale remains stable over time. As item parameter stability constitutes one of the factors that affects scale stability, NWEA® periodically conducts studies to check for the stability of the item parameter estimates for MAP®…
Descriptors: Achievement Tests, Test Items, Test Reliability, Academic Achievement
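A minimal sketch of the kind of item-parameter stability check described above, in Python with made-up Rasch difficulty estimates and an arbitrary 0.3-logit drift flag; the actual MAP item pool and NWEA procedures are not represented here.

import numpy as np

# Hypothetical difficulty estimates for the same five items from two calibrations.
b_old = np.array([-1.2, -0.4, 0.10, 0.8, 1.5])
b_new = np.array([-1.1, -0.5, 0.45, 0.9, 1.4])

displacement = b_new - b_old
print("correlation between calibrations:", np.corrcoef(b_old, b_new)[0, 1])
print("mean shift (logits):", displacement.mean())
print("items drifting more than 0.3 logits:", np.where(np.abs(displacement) > 0.3)[0])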
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
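For context, a minimal sketch in Python of the classical fixed-coefficient setup that the chance-constrained approach generalizes, using a made-up item bank: with only a test-length constraint, maximizing summed Fisher information at a target ability reduces to picking the most informative items, and the estimated a and b values are treated as exact, which is the assumption the paper relaxes.

import numpy as np

# Hypothetical 2PL item bank (a = discrimination, b = difficulty).
rng = np.random.default_rng(0)
a = rng.uniform(0.6, 1.8, size=50)
b = rng.normal(0.0, 1.0, size=50)

def information(theta):
    # Fisher information of each item at ability theta under the 2PL model.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

# Classical ATA with a single length constraint: take the ten items with the
# largest information at the target ability, treating a and b as known.
test_length = 10
selected = np.argsort(information(0.0))[::-1][:test_length]
print(sorted(selected.tolist()))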
Russell, Michael; Moncaleano, Sebastian – Educational Assessment, 2019
Over the past decade, large-scale testing programs have employed technology-enhanced items (TEIs) to improve the fidelity with which an item measures a targeted construct. This paper presents findings from a review of released TEIs employed by large-scale testing programs worldwide. Analyses examine the prevalence with which different types of TEIs…
Descriptors: Computer Assisted Testing, Fidelity, Elementary Secondary Education, Test Items
Arneson, Amy – ProQuest LLC, 2019
This three-paper dissertation explores item cluster-based assessments, first in general as they relate to modeling, and then through specific issues surrounding the design of a particular item cluster-based assessment. There should be a reasonable analogy between the structure of a psychometric model and the cognitive theory that the assessment is based upon.…
Descriptors: Item Response Theory, Test Items, Critical Thinking, Cognitive Tests
Bukhari, Nurliyana – ProQuest LLC, 2017
In general, newer educational assessments pose more demanding challenges than students are currently prepared to face. Two types of factors may contribute to the test scores: (1) factors or dimensions that are of primary interest to the construct or test domain; and (2) factors or dimensions that are irrelevant to the construct, causing…
Descriptors: Item Response Theory, Models, Psychometrics, Computer Simulation
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris – Florida Center for Reading Research, 2015
The FAIR-FS consists of computer-adaptive reading comprehension and oral language screening tasks that provide measures to track growth over time, as well as a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 40th percentile) on the reading comprehension subtest of the Stanford Achievement Test (SAT-10) in the…
Descriptors: Reading Instruction, Screening Tests, Reading Comprehension, Oral Language
Wise, Steven L. – Measurement: Interdisciplinary Research and Perspectives, 2015
The growing presence of computer-based testing has brought with it the capability to routinely capture the time that test takers spend on individual test items. This, in turn, has led to an increased interest in potential applications of response time in measuring intellectual ability and achievement. Goldhammer (this issue) provides a very useful…
Descriptors: Reaction Time, Measurement, Computer Assisted Testing, Achievement Tests
Northwest Evaluation Association, 2013
While many educators expect the Common Core State Standards (CCSS) to be more rigorous than previous state standards, some wonder if the transition to CCSS and to a Common Core aligned MAP test will have an impact on their students' RIT scores or the NWEA norms. MAP assessments use a proprietary scale known as the RIT (Rasch unit) scale to measure…
Descriptors: Achievement Tests, Computer Assisted Testing, Adaptive Testing, Item Response Theory
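For reference, the Rasch model that underlies the RIT scale gives the probability that student i answers item j correctly in terms of the gap between ability \theta_i and item difficulty b_j, and RIT scores are reported on a linear rescaling of this logit (\theta) metric:

P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}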
Lee, Jaemu; Park, Sanghoon; Kim, Kwangho – Turkish Online Journal of Distance Education, 2012
Computer Adaptive Testing (CAT) has been highlighted as a promising assessment method for fulfilling two testing purposes: estimating student academic ability and classifying student academic level. In this paper, we introduce the Web-based Adaptive Testing System (WATS), developed to support a cost-effective assessment for classifying…
Descriptors: Academic Ability, Academic Support Services, Adaptive Testing, Computer Assisted Testing
Wang, Shudong; Jiao, Hong; He, Wei – Online Submission, 2011
The ability estimation procedure is one of the most important components of a computerized adaptive testing (CAT) system. Currently, all CATs that provide K-12 student scores are based on item response theory (IRT) models, yet such application directly violates the IRT assumption of an independent sample of persons because ability…
Descriptors: Accuracy, Computation, Computer Assisted Testing, Adaptive Testing
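A minimal sketch of the ability-estimation loop at the heart of such a CAT system, in Python with a made-up Rasch item bank rather than any operational K-12 pool: after each simulated response the provisional ability is updated by expected a posteriori (EAP) estimation and the next item is the most informative one at that estimate, so every selection depends on the responses that came before it.

import numpy as np

rng = np.random.default_rng(1)
b = rng.normal(0.0, 1.0, size=100)   # hypothetical Rasch item difficulties
true_theta = 0.7                     # simulated examinee ability
quad = np.linspace(-4, 4, 81)        # quadrature grid for EAP estimation
prior = np.exp(-0.5 * quad**2)       # standard normal prior (unnormalized)

administered, responses = [], []
theta_hat = 0.0
for _ in range(15):
    # Choose the unused item whose difficulty is closest to the current
    # estimate, i.e., the most informative item at theta_hat under the Rasch model.
    remaining = [j for j in range(len(b)) if j not in administered]
    item = min(remaining, key=lambda j: abs(b[j] - theta_hat))
    p_true = 1.0 / (1.0 + np.exp(-(true_theta - b[item])))
    x = int(rng.random() < p_true)   # simulated response
    administered.append(item)
    responses.append(x)
    # EAP update: posterior mean of theta over the quadrature grid.
    like = np.ones_like(quad)
    for j, r in zip(administered, responses):
        p = 1.0 / (1.0 + np.exp(-(quad - b[j])))
        like *= p**r * (1 - p)**(1 - r)
    posterior = prior * like
    theta_hat = float(np.sum(quad * posterior) / np.sum(posterior))

print("EAP estimate after 15 items:", round(theta_hat, 2), "| true ability:", true_theta)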
Li, Ying; Jiao, Hong; Lissitz, Robert W. – Journal of Applied Testing Technology, 2012
This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…
Descriptors: Achievement Tests, Science Tests, Item Response Theory, Measures (Individuals)
Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen – Grantee Submission, 2016
Despite the growing popularity of diagnostic classification models (e.g., Rupp, Templin, & Henson, 2010) in educational and psychological measurement, methods for testing their absolute goodness-of-fit to real data remain relatively underdeveloped. For tests of reasonable length and for realistic sample size, full-information test statistics…
Descriptors: Goodness of Fit, Item Response Theory, Classification, Maximum Likelihood Statistics
Weng, Ting-Sheng – Journal of Educational Technology Systems, 2012
This research applies multimedia technology to design a dynamic item generation method that can adaptively adjust the difficulty level of items according to the level of the testee. The method is based on interactive testing software developed in Flash ActionScript, and provides a testing solution for users by automatically distributing items of…
Descriptors: Feedback (Response), Difficulty Level, Educational Technology, Educational Games
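A minimal sketch of the kind of difficulty-adjustment rule the abstract describes, written in Python rather than the paper's Flash ActionScript and using an arbitrary five-level difficulty scale: step up after a correct answer, step down after an incorrect one.

def next_difficulty(level, correct, n_levels=5):
    # Simple staircase rule: harder item after a correct response,
    # easier item after an incorrect one, clamped to the available levels.
    step = 1 if correct else -1
    return min(max(level + step, 1), n_levels)

level = 3                                 # start at a middle difficulty level
for answer in [True, True, False, True]:  # hypothetical response sequence
    level = next_difficulty(level, answer)
print("next item difficulty level:", level)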
Jacobsen, Jared; Ackermann, Richard; Eguez, Jane; Ganguli, Debalina; Rickard, Patricia; Taylor, Linda – Journal of Applied Testing Technology, 2011
A computer adaptive test (CAT) is a delivery methodology that serves the larger goals of the assessment system in which it is embedded. A thorough analysis of the assessment system for which a CAT is being designed is critical to ensure that the delivery platform is appropriate and addresses all relevant complexities. As such, a CAT engine must be…
Descriptors: Delivery Systems, Testing Programs, Computer Assisted Testing, Foreign Countries