Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 10
Since 2016 (last 10 years): 25
Since 2006 (last 20 years): 43
Descriptor
Computer Assisted Testing: 55
Item Response Theory: 55
Test Reliability: 55
Test Validity: 24
Test Items: 23
Adaptive Testing: 22
Foreign Countries: 14
Test Construction: 14
Scores: 11
Comparative Analysis: 10
Achievement Tests: 9
Author
Petscher, Yaacov: 4
Biancarosa, Gina: 2
Carlson, Sarah E.: 2
Davison, Mark L.: 2
Kuo, Bor-Chen: 2
Liu, Bowen: 2
Lunz, Mary E.: 2
Segall, Daniel O.: 2
Seipel, Ben: 2
Tock, Jamie: 2
Yao, Lihua: 2
Audience
Researchers: 1
Location
Indonesia: 3
California: 2
Florida: 2
Germany: 2
United Kingdom: 2
France: 1
Hong Kong: 1
Idaho: 1
Malaysia: 1
Maryland: 1
Nevada: 1
Che Lah, Noor Hidayah; Tasir, Zaidatun; Jumaat, Nurul Farhana – Educational Studies, 2023
The aim of the study was to evaluate an extended version of the Problem-Solving Inventory (PSI) for online learning settings, known as the Online Problem-Solving Inventory (OPSI), through the lens of Rasch model analysis. To date, there is no extended version of the PSI for online settings even though many researchers have used it; thus, this…
Descriptors: Problem Solving, Measures (Individuals), Electronic Learning, Item Response Theory
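For orientation, the dichotomous Rasch model applied in studies like the one above expresses the probability of a correct (or endorsed) response as a function of the gap between a person's ability and an item's difficulty. The equation below is the standard textbook form, given only as background rather than as the authors' exact specification:

P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}

where \theta_p is the ability of person p and b_i is the difficulty of item i.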
Benton, Tom – Research Matters, 2021
Computer adaptive testing is intended to make assessment more reliable by tailoring the difficulty of the questions a student has to answer to their level of ability. Most commonly, this benefit is used to justify shortening tests whilst retaining the reliability of a longer, non-adaptive test. Improvements due to adaptive…
Descriptors: Risk, Item Response Theory, Computer Assisted Testing, Difficulty Level
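The tailoring described in this entry is typically implemented by re-estimating the examinee's ability after each response and then administering whichever unanswered item is most informative at that estimate. Below is a minimal Python sketch of that selection step under an assumed two-parameter logistic (2PL) model; the item bank, parameter values, and function names are illustrative and not taken from the paper.

```python
import math

# Hypothetical item bank under an assumed 2PL model: "a" is discrimination,
# "b" is difficulty. All identifiers and values here are illustrative.
ITEM_BANK = [
    {"id": "item_01", "a": 1.2, "b": -0.5},
    {"id": "item_02", "a": 0.8, "b": 0.0},
    {"id": "item_03", "a": 1.5, "b": 0.7},
    {"id": "item_04", "a": 1.0, "b": 1.4},
]

def prob_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Item information for a 2PL item: a^2 * P * (1 - P)."""
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, administered):
    """Choose the unadministered item with maximum information at the
    current ability estimate -- the selection rule that lets adaptive
    tests stay short without sacrificing reliability."""
    candidates = [item for item in ITEM_BANK if item["id"] not in administered]
    return max(candidates, key=lambda item: fisher_information(theta, item["a"], item["b"]))

# Example: with a provisional ability estimate of 0.3 and item_01 already
# administered, pick the most informative remaining item.
print(select_next_item(0.3, {"item_01"})["id"])
```

In a full adaptive testing loop, the ability estimate would be updated (for example, by maximum likelihood) after each administered item, and the test would stop once a target measurement precision or item count is reached.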
Lai, Rina P. Y. – ACM Transactions on Computing Education, 2022
Computational Thinking (CT), entailing both domain-general and domain-specific skills, is a competency fundamental to computing education and beyond. However, because CT is a cross-domain competency, appropriate assessment designs and methods remain equivocal. Indeed, the majority of the existing assessments have a predominant focus on measuring programming…
Descriptors: Computer Assisted Testing, Computation, Thinking Skills, Computer Science Education
Geoffrey Converse – ProQuest LLC, 2021
In educational measurement, Item Response Theory (IRT) provides a means of quantifying student knowledge. Specifically, IRT models the probability of a student answering a particular item correctly as a function of the student's continuous-valued latent abilities θ (e.g., add, subtract, multiply, divide) and parameters associated with the…
Descriptors: Item Response Theory, Test Validity, Student Evaluation, Computer Assisted Testing
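In its common multidimensional two-parameter logistic form, the relationship sketched in this abstract models the probability of a correct response to item j as a logistic function of a weighted combination of the latent abilities. The statement below is the generic model, included only as background; it is not necessarily the parameterization used in the dissertation:

P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{1}{1 + \exp\!\left(-(\mathbf{a}_j^{\top} \boldsymbol{\theta}_i + d_j)\right)}

where \boldsymbol{\theta}_i is student i's vector of latent abilities, \mathbf{a}_j is the item's discrimination (loading) vector, and d_j is its intercept.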
Koch, Marco; Spinath, Frank M.; Greiff, Samuel; Becker, Nicolas – Journal of Intelligence, 2022
Figural matrices tasks are one of the most prominent item formats used in intelligence tests, and their relevance for the assessment of cognitive abilities is unquestionable. However, despite the open science movement's endeavors to make scientific research accessible at all levels, there is a lack of royalty-free figural matrices tests. The Open…
Descriptors: Intelligence, Intelligence Tests, Computer Assisted Testing, Test Items
He, Wei – NWEA, 2022
To ensure that student academic growth in a subject area is accurately captured, it is imperative that the underlying scale remains stable over time. As item parameter stability constitutes one of the factors that affects scale stability, NWEA® periodically conducts studies to check for the stability of the item parameter estimates for MAP®…
Descriptors: Achievement Tests, Test Items, Test Reliability, Academic Achievement
Albano, Anthony D.; McConnell, Scott R.; Lease, Erin M.; Cai, Liuhan – Grantee Submission, 2020
Research has shown that the context of practice tasks can have a significant impact on learning, with long-term retention and transfer improving when tasks of different types are mixed by interleaving (abcabcabc) compared with grouping together in blocks (aaabbbccc). This study examines the influence of context via interleaving from a psychometric…
Descriptors: Context Effect, Test Items, Preschool Children, Computer Assisted Testing
Choi, Youn-Jeng; Asilkalkan, Abdullah – Measurement: Interdisciplinary Research and Perspectives, 2019
About 45 R packages for analyzing data with item response theory (IRT) have been developed over the last decade. This article introduces these 45 R packages with their descriptions and features. It also describes advanced IRT models that can be fitted with R packages, as well as dichotomous and polytomous IRT models, and R packages that contain applications…
Descriptors: Item Response Theory, Data Analysis, Computer Software, Test Bias
Chai, Jun Ho; Lo, Chang Huan; Mayor, Julien – Journal of Speech, Language, and Hearing Research, 2020
Purpose: This study introduces a framework to produce very short versions of the MacArthur-Bates Communicative Development Inventories (CDIs) by combining the Bayesian-inspired approach introduced by Mayor and Mani (2019) with item response theory-based computerized adaptive testing that adapts to the ability of each child, in line with…
Descriptors: Bayesian Statistics, Item Response Theory, Measures (Individuals), Language Skills
Kaufman, Alan S. – Journal of Intelligence, 2021
U.S. Supreme Court justices and other federal judges are, effectively, appointed for life, with no built-in check on their cognitive functioning as they approach old age. There is about a century of research on aging and intelligence that shows the vulnerability of processing speed, fluid reasoning, visual-spatial processing, and working memory to…
Descriptors: Judges, Federal Government, Aging (Individuals), Decision Making
Goodwin, Amanda; Petscher, Yaacov; Tock, Jamie – Journal of Research in Reading, 2021
Background: Middle school students use the information conveyed by morphemes (i.e., units of meaning such as prefixes, root words and suffixes) in different ways to support their literacy endeavours, suggesting the likelihood that morphological knowledge is multidimensional. This has important implications for assessment. Methods: The current…
Descriptors: Middle School Students, Morphology (Languages), Metalinguistics, Student Evaluation
Goodwin, Amanda P.; Petscher, Yaacov; Tock, Jamie – Grantee Submission, 2021
Background: Middle school students use the information conveyed by morphemes (i.e., units of meaning such as prefixes, root words and suffixes) in different ways to support their literacy endeavours, suggesting the likelihood that morphological knowledge is multidimensional. This has important implications for assessment. Methods: The current…
Descriptors: Morphology (Languages), Morphemes, Middle School Students, Knowledge Level
Yoshioka, Sérgio R. I.; Ishitani, Lucila – Informatics in Education, 2018
Computerized Adaptive Testing (CAT) is now widely used. However, inserting new items into the question bank of a CAT requires great effort, which makes the wide application of CAT in classroom teaching impractical. One solution would be to use the tacit knowledge of teachers or experts for a pre-classification and to calibrate during the…
Descriptors: Student Motivation, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Kosan, Aysen Melek Aytug; Koç, Nizamettin; Elhan, Atilla Halil; Öztuna, Derya – International Journal of Assessment Tools in Education, 2019
The Progress Test (PT) is a form of assessment that simultaneously measures the ability levels of all students in a given educational program, and their progress over time, by giving them the same questions and repeating the process at regular intervals with parallel tests. Our objective was to generate an item bank for the PT and to examine the…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Medical Education
Conejo, Ricardo; Barros, Beatriz; Bertoa, Manuel F. – IEEE Transactions on Learning Technologies, 2019
This paper presents an innovative method for the automatic evaluation of programming assignments, based on well-founded assessment theories (Classical Test Theory (CTT) and Item Response Theory (IRT)) rather than the heuristic assessment used in other systems. CTT and/or IRT are used to grade the results of different items of…
Descriptors: Computer Assisted Testing, Grading, Programming, Item Response Theory