Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 6
Since 2016 (last 10 years) | 12
Since 2006 (last 20 years) | 23
Descriptor
Computer Assisted Testing | 53
Elementary Secondary Education | 53
Test Items | 53
Test Construction | 27
Adaptive Testing | 16
Item Banks | 13
Achievement Tests | 12
Computer Software | 12
Mathematics Tests | 11
Microcomputers | 11
Test Format | 11
Author
Brown, Frank N. | 2
Cawthon, Stephanie | 2
He, Wei | 2
Leppo, Rachel | 2
Al-A'ali, Mansoor | 1
Alhadi, Moosa A. A. | 1
Allen, Nancy L. | 1
Arter, Judith A. | 1
Cabello, Beverly | 1
Calder, Peter W. | 1
Chien, Yuehmei | 1
Audience
Practitioners | 6
Researchers | 3
Teachers | 3
Administrators | 1
Location
Wisconsin | 2
Canada | 1
Czech Republic | 1
Florida | 1
West Virginia | 1
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
Kosh, Audra E. – Journal of Applied Testing Technology, 2021
In recent years, Automatic Item Generation (AIG) has increasingly shifted from theoretical research to operational implementation, a shift raising some unforeseen practical challenges. Specifically, generating high-quality answer choices presents several challenges such as ensuring that answer choices blend in nicely together for all possible item…
Descriptors: Test Items, Multiple Choice Tests, Decision Making, Test Construction
He, Wei – NWEA, 2022
To ensure that student academic growth in a subject area is accurately captured, it is imperative that the underlying scale remains stable over time. As item parameter stability constitutes one of the factors that affects scale stability, NWEA® periodically conducts studies to check for the stability of the item parameter estimates for MAP®…
Descriptors: Achievement Tests, Test Items, Test Reliability, Academic Achievement
Russell, Michael; Moncaleano, Sebastian – Educational Assessment, 2019
Over the past decade, large-scale testing programs have employed technology-enhanced items (TEI) to improve the fidelity with which an item measures a targeted construct. This paper presents findings from a review of released TEIs employed by large-scale testing programs worldwide. Analyses examine the prevalence with which different types of TEIs…
Descriptors: Computer Assisted Testing, Fidelity, Elementary Secondary Education, Test Items
Alhadi, Moosa A. A.; Zhang, Dake; Wang, Ting; Maher, Carolyn A. – North American Chapter of the International Group for the Psychology of Mathematics Education, 2022
This research synthesizes studies that used a Digitalized Interactive Component (DIC) to assess K-12 student mathematics performance during Computer-based-Assessments (CBAs) in mathematics. A systematic search identified ten studies that categorized existing DICs according to the tools that provided language assistance to students and tools that…
Descriptors: Computer Assisted Testing, Mathematics Tests, English Language Learners, Geometry
National Academies Press, 2022
The National Assessment of Educational Progress (NAEP) -- often called "The Nation's Report Card" -- is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and can do in various subjects and has provided policy makers and the public with invaluable…
Descriptors: Costs, Futures (of Society), National Competency Tests, Educational Trends
Fishbein, Bethany; Martin, Michael O.; Mullis, Ina V. S.; Foy, Pierre – Large-scale Assessments in Education, 2018
Background: TIMSS 2019 is the first assessment in the TIMSS transition to a computer-based assessment system, called eTIMSS. The TIMSS 2019 Item Equivalence Study was conducted in advance of the field test in 2017 to examine the potential for mode effects on the psychometric behavior of the TIMSS mathematics and science trend items induced by the…
Descriptors: Mathematics Achievement, Science Achievement, Mathematics Tests, Elementary Secondary Education
Sun, Bo; Zhu, Yunzong; Xiao, Yongkang; Xiao, Rong; Wei, Yungang – IEEE Transactions on Learning Technologies, 2019
In recent years, computerized adaptive testing (CAT) has gained popularity as an important means to evaluate students' ability. Assigning tags to test questions is crucial in CAT. Manual tagging is widely used for constructing question banks; however, this approach is time-consuming and might lead to consistency issues. Automatic question tagging,…
Descriptors: Computer Assisted Testing, Student Evaluation, Test Items, Multiple Choice Tests
Mullis, Ina V. S., Ed.; Martin, Michael O., Ed.; von Davier, Matthias, Ed. – International Association for the Evaluation of Educational Achievement, 2021
TIMSS (Trends in International Mathematics and Science Study) is a long-standing international assessment of mathematics and science at the fourth and eighth grades that has been collecting trend data every four years since 1995. About 70 countries use TIMSS trend data for monitoring the effectiveness of their education systems in a global…
Descriptors: Achievement Tests, International Assessment, Science Achievement, Mathematics Achievement
Smarter Balanced Assessment Consortium, 2016
The goal of this study was to gather comprehensive evidence about the alignment of the Smarter Balanced summative assessments to the Common Core State Standards (CCSS). Alignment of the Smarter Balanced summative assessments to the CCSS is a critical piece of evidence regarding the validity of inferences students, teachers and policy makers can…
Descriptors: Alignment (Education), Summative Evaluation, Common Core State Standards, Test Content
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although International Association for the Evaluation of Educational Achievement-pioneered international large-scale assessment (ILSA) of education is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
Pelánek, Radek; Rihák, Jiří – International Educational Data Mining Society, 2016
In online educational systems we can easily collect and analyze extensive data about student learning. Current practice, however, focuses only on some aspects of these data, particularly on correctness of students answers. When a student answers incorrectly, the submitted wrong answer can give us valuable information. We provide an overview of…
Descriptors: Foreign Countries, Online Systems, Geography, Anatomy
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris – Florida Center for Reading Research, 2015
The FAIR-FS consists of computer-adaptive reading comprehension and oral language screening tasks that provide measures to track growth over time, as well as a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 40th percentile) on the reading comprehension subtest of the Stanford Achievement Test (SAT-10) in the…
Descriptors: Reading Instruction, Screening Tests, Reading Comprehension, Oral Language
Doorey, Nancy – Smarter Balanced Assessment Consortium, 2014
Between March and June of 2014, the Smarter Balanced Assessment Consortium conducted a field test of its new online assessment system. Thirteen participating states provided the results of surveys given to students and adults involved in the Field Test. Overall, more than 70% of test coordinators in each of seven states indicated that the Field…
Descriptors: Field Tests, Computer Assisted Testing, Student Surveys, Surveys