Maddox, Bryan – OECD Publishing, 2023
The digital transition in educational testing has introduced many new opportunities for technology to enhance large-scale assessments. These include the potential to collect and use log data on test-taker response processes routinely, and on a large scale. Process data has long been recognised as a valuable source of validation evidence in…
Descriptors: Measurement, Inferences, Test Reliability, Computer Assisted Testing
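
The entry above centres on log data about test-taker response processes. As a rough illustration of what such process data can yield (not the OECD report's own method), the sketch below derives time-on-task and answer-change counts from a hypothetical event log; the event schema and field names are assumptions invented for this example.

    # Illustrative sketch only: derive simple process-data features (time on task,
    # number of answer changes) from hypothetical test-event logs.
    # The event format below is an assumption, not a real testing-platform schema.
    from collections import defaultdict

    events = [
        # (test_taker_id, item_id, timestamp_seconds, event_type)
        ("p1", "item1", 0.0,  "enter_item"),
        ("p1", "item1", 12.5, "change_answer"),
        ("p1", "item1", 30.2, "leave_item"),
        ("p1", "item2", 30.5, "enter_item"),
        ("p1", "item2", 55.0, "leave_item"),
    ]

    time_on_task = defaultdict(float)      # (person, item) -> seconds
    answer_changes = defaultdict(int)      # (person, item) -> count
    entered_at = {}

    for person, item, t, kind in events:
        key = (person, item)
        if kind == "enter_item":
            entered_at[key] = t
        elif kind == "leave_item" and key in entered_at:
            time_on_task[key] += t - entered_at.pop(key)
        elif kind == "change_answer":
            answer_changes[key] += 1

    for key in sorted(time_on_task):
        print(key, round(time_on_task[key], 1), "s,", answer_changes[key], "changes")
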
Ponce, Héctor R.; Mayer, Richard E.; Loyola, María Soledad – Journal of Educational Computing Research, 2021
One of the most common technology-enhanced items used in large-scale K-12 testing programs is the drag-and-drop response interaction. The main research questions in this study are: (a) Does adding a drag-and-drop interface to an online test affect the accuracy of student performance? (b) Does adding a drag-and-drop interface to an online test…
Descriptors: Computer Assisted Testing, Test Construction, Standardized Tests, Elementary School Students
Bryant, William – Practical Assessment, Research & Evaluation, 2017
As large-scale standardized tests move from paper-based to computer-based delivery, opportunities arise for test developers to make use of items beyond traditional selected and constructed response types. Technology-enhanced items (TEIs) have the potential to provide advantages over conventional items, including broadening construct measurement,…
Descriptors: Standardized Tests, Test Items, Computer Assisted Testing, Test Format
Gareis, Christopher R.; McMillan, James H.; Smucker, Amelie; Huang, Ke – Online Submission, 2021
The purpose of this study was to gauge the degree to which selected NWEA MAP Growth assessments are aligned to the Virginia Standards of Learning (SOL) and the extent to which MAP Growth reports can be used by school divisions to gauge student achievement relative to grade level and to identify learning gaps. The study was delimited to four MAP…
Descriptors: Achievement Tests, Academic Standards, State Standards, Alignment (Education)
Susanti, Yuni; Tokunaga, Takenobu; Nishikawa, Hitoshi; Obari, Hiroyuki – Research and Practice in Technology Enhanced Learning, 2017
The present study investigates the best factor for controlling the item difficulty of multiple-choice English vocabulary questions generated by an automatic question generation system. Three factors are considered for controlling item difficulty: (1) reading passage difficulty, (2) semantic similarity between the correct answer and distractors,…
Descriptors: Test Items, Difficulty Level, Computer Assisted Testing, Vocabulary Development
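
Of the factors listed above, the second (semantic similarity between the correct answer and the distractors) lends itself to a short illustration. The sketch below ranks candidate distractors by cosine similarity to the answer using toy word vectors; the vectors and word choices are invented, and this is not the automatic question generation system studied in the paper.

    # Illustrative sketch: rank candidate distractors for a vocabulary item by
    # cosine similarity to the correct answer. The toy vectors below stand in for
    # real word embeddings; this is not the system described in the paper.
    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    # Hypothetical embedding vectors for the answer and candidate distractors.
    vectors = {
        "rapid":   [0.90, 0.10, 0.30],
        "swift":   [0.85, 0.15, 0.35],   # close in meaning -> harder distractor
        "gradual": [0.20, 0.80, 0.10],
        "purple":  [0.05, 0.10, 0.90],
    }
    answer = "rapid"
    candidates = ["swift", "gradual", "purple"]

    ranked = sorted(candidates, key=lambda w: cosine(vectors[answer], vectors[w]), reverse=True)
    for w in ranked:
        print(w, round(cosine(vectors[answer], vectors[w]), 3))
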
Carlson, Sarah E.; Seipel, Ben; Biancarosa, Gina; Davison, Mark L.; Clinton, Virginia – Grantee Submission, 2019
This demonstration introduces and presents an innovative online cognitive diagnostic assessment, developed to identify the types of cognitive processes that readers use during comprehension; specifically, processes that distinguish between subtypes of struggling comprehenders. Cognitive diagnostic assessments are designed to provide valuable…
Descriptors: Reading Comprehension, Standardized Tests, Diagnostic Tests, Computer Assisted Testing
Shamir, Haya – Journal of Educational Multimedia and Hypermedia, 2018
Assessing students' emerging literacy skills is crucial for identifying areas where a child may be falling behind and can lead directly to an increased chance of reading success. The Waterford Assessment of Core Skills (WACS), a computerized adaptive test of early literacy for students in prekindergarten through 2nd grade, addresses this need.…
Descriptors: Computer Assisted Testing, Adaptive Testing, Reading Tests, Preschool Children
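
WACS is described above as a computerized adaptive test. The sketch below shows the generic adaptive-testing loop such instruments rely on, under a two-parameter logistic (2PL) item response model: pick the most informative remaining item, observe the response, and re-estimate ability. The item parameters, test length, and grid-search estimator are illustrative assumptions, not WACS internals.

    # Illustrative sketch of a generic adaptive-testing loop under a 2PL model.
    # Item parameters and the grid-search ability update are toy assumptions.
    import math, random

    items = [  # (discrimination a, difficulty b)
        (1.2, -1.5), (0.9, -0.5), (1.5, 0.0), (1.1, 0.7), (1.3, 1.4),
    ]

    def p_correct(theta, a, b):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def information(theta, a, b):
        p = p_correct(theta, a, b)
        return a * a * p * (1.0 - p)

    def estimate_theta(responses):
        # crude maximum-likelihood estimate over a grid of ability values
        grid = [g / 10.0 for g in range(-40, 41)]
        def loglik(t):
            total = 0.0
            for (a, b), correct in responses:
                p = p_correct(t, a, b)
                total += math.log(p) if correct else math.log(1.0 - p)
            return total
        return max(grid, key=loglik)

    true_theta, theta_hat = 0.8, 0.0
    remaining, responses = list(items), []
    random.seed(1)
    for _ in range(4):
        a, b = max(remaining, key=lambda it: information(theta_hat, it[0], it[1]))
        remaining.remove((a, b))
        correct = random.random() < p_correct(true_theta, a, b)
        responses.append(((a, b), correct))
        theta_hat = estimate_theta(responses)
        print(f"administered item b={b:+.1f}, correct={correct}, theta_hat={theta_hat:+.1f}")
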
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris – Florida Center for Reading Research, 2015
The FAIR-FS consists of computer-adaptive reading comprehension and oral language screening tasks that provide measures to track growth over time, as well as a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 40th percentile) on the reading comprehension subtest of the Stanford Achievement Test (SAT-10) in the…
Descriptors: Reading Instruction, Screening Tests, Reading Comprehension, Oral Language
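
The Probability of Literacy Success described above maps screening scores onto a probability of reaching a grade-level benchmark. A minimal sketch of that general idea, assuming a logistic model with invented weights (this is not the FAIR-FS calibration):

    # Illustrative sketch: convert screening-task scores into a probability of
    # reaching a grade-level benchmark via a logistic model. The weights below
    # are invented for illustration only.
    import math

    def probability_of_success(scores, weights, intercept):
        # logistic model: P(success) = 1 / (1 + exp(-(intercept + sum(w * x))))
        z = intercept + sum(weights[k] * scores[k] for k in weights)
        return 1.0 / (1.0 + math.exp(-z))

    # Hypothetical standardized task scores for one student.
    scores  = {"vocabulary": 0.4, "comprehension": -0.2, "word_reading": 0.1}
    weights = {"vocabulary": 0.8, "comprehension": 1.1, "word_reading": 0.6}

    pls = probability_of_success(scores, weights, intercept=0.3)
    print(f"Probability of literacy success: {pls:.2f}")
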
Cawthon, Stephanie; Leppo, Rachel – American Annals of the Deaf, 2013
The authors conducted a qualitative meta-analysis of the research on assessment accommodations for students who are deaf or hard of hearing. There were 16 identified studies that analyzed the impact of factors related to student performance on academic assessments across different educational settings, content areas, and types of assessment…
Descriptors: Testing Accommodations, Academic Achievement, Deafness, Hearing Impairments
Colwell, Nicole Makas – Journal of Education and Training Studies, 2013
This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…
Descriptors: Test Anxiety, Computer Assisted Testing, Evaluation Methods, Standardized Tests
Wandall, Jakob – Journal of Applied Testing Technology, 2011
Testing and test results can be used in different ways. They can be used for regulation and control, but they can also be a pedagogic tool for assessment of student proficiency in order to target teaching, improve learning and facilitate local pedagogical leadership. To serve these purposes the test has to be used for low stakes purposes, and to…
Descriptors: Test Results, Standardized Tests, Information Technology, Foreign Countries

Wainer, Howard – Journal of Educational and Behavioral Statistics, 2000
Suggests that because of the nonlinear relationship between item usage and item security, the problems of test security posed by continuous administration of standardized tests cannot be resolved merely by increasing the size of the item pool. Offers alternative strategies to overcome these problems, distributing test items so as to avoid the…
Descriptors: Computer Assisted Testing, Standardized Tests, Test Items, Testing Problems
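
The abstract above argues that item exposure, not pool size alone, drives test security under continuous administration. The toy simulation below illustrates the underlying intuition: when items are selected to match examinee ability, a small subset of the pool absorbs a disproportionate share of administrations. All numbers are invented and the selection rule is a simple stand-in, not Wainer's analysis.

    # Illustrative sketch: under ability-matched item selection, exposure
    # concentrates on a minority of items, so the effective pool is much
    # smaller than its nominal size. Toy simulation only.
    import random
    from collections import Counter

    random.seed(0)
    pool = [random.uniform(-3, 3) for _ in range(60)]   # item difficulties
    exposure = Counter()

    for _ in range(1000):                               # simulated examinees
        theta = random.gauss(0, 1)                      # ability
        # administer the 10 items whose difficulty best matches ability
        chosen = sorted(range(len(pool)), key=lambda i: abs(pool[i] - theta))[:10]
        exposure.update(chosen)

    rates = sorted((n / 1000 for n in exposure.values()), reverse=True)
    top10_share = sum(rates[:10]) / sum(rates)
    print(f"max exposure rate: {rates[0]:.2f}")
    print(f"share of administrations taken by the 10 most-used items: {top10_share:.0%}")
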
Making Use of Response Times in Standardized Tests: Are Accuracy and Speed Measuring the Same Thing?
Scrams, David J.; Schnipke, Deborah L. – 1997
Response accuracy and response speed provide separate measures of performance. Psychometricians have tended to focus on accuracy with the goal of characterizing examinees on the basis of their ability to respond correctly to items from a given content domain. With the advent of computerized testing, response times can now be recorded unobtrusively…
Descriptors: Computer Assisted Testing, Difficulty Level, Item Response Theory, Psychometrics
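
The question posed in the title above is whether accuracy and speed measure the same thing. One simple descriptive check is to correlate each examinee's proportion correct with their mean response time; the sketch below does this on invented response records and is not the authors' psychometric modelling.

    # Illustrative sketch: check how strongly examinee speed and accuracy are
    # related by correlating proportion-correct with mean response time.
    # The response records below are invented sample data.
    import math

    # (examinee, item, correct, response_time_seconds)
    records = [
        ("e1", "i1", 1, 14.0), ("e1", "i2", 1, 20.5), ("e1", "i3", 0, 35.0),
        ("e2", "i1", 0, 44.0), ("e2", "i2", 1, 39.5), ("e2", "i3", 0, 52.0),
        ("e3", "i1", 1, 10.0), ("e3", "i2", 1, 12.5), ("e3", "i3", 1, 16.0),
    ]

    by_person = {}
    for person, _item, correct, rt in records:
        by_person.setdefault(person, []).append((correct, rt))

    accuracy = [sum(c for c, _ in rows) / len(rows) for rows in by_person.values()]
    mean_rt  = [sum(t for _, t in rows) / len(rows) for rows in by_person.values()]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    print(f"speed/accuracy correlation: {pearson(accuracy, mean_rt):.2f}")
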
Burstein, Jill C.; Kaplan, Randy M. – 1995
There is considerable interest at Educational Testing Service (ETS) in including performance-based, natural language constructed-response items on standardized tests. Such items can be developed, but the projected time and costs required to have these items scored by human graders would be prohibitive. In order for ETS to include these types of…
Descriptors: Computer Assisted Testing, Constructed Response, Cost Effectiveness, Hypothesis Testing
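
The entry above concerns automated scoring of natural-language constructed responses. As a deliberately simple stand-in for that idea (not the ETS scoring system the authors describe), the sketch below credits a response for each required concept it mentions, using a hypothetical scoring key.

    # Illustrative sketch: score a short constructed response by matching
    # required concepts from a scoring key. A simple stand-in only.
    import re

    def score_response(response, key_concepts, points_per_concept=1):
        tokens = set(re.findall(r"[a-z']+", response.lower()))
        matched = [concept for concept, synonyms in key_concepts.items()
                   if tokens & synonyms]
        return len(matched) * points_per_concept, matched

    # Hypothetical scoring key: each required concept with acceptable wordings.
    key = {
        "evaporation":   {"evaporates", "evaporation", "vaporizes"},
        "condensation":  {"condenses", "condensation"},
        "precipitation": {"rain", "rains", "precipitation", "snow"},
    }

    response = "Water evaporates from the ocean, then condenses into clouds and falls as rain."
    points, concepts = score_response(response, key)
    print(f"score: {points}/3, concepts credited: {concepts}")
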