Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 3
Since 2016 (last 10 years) | 13
Since 2006 (last 20 years) | 17

Descriptor
Computer Assisted Testing | 19
Reading Tests | 19
Scoring | 19
Reading Comprehension | 8
Scores | 7
Reading Fluency | 6
Test Items | 6
Elementary School Students | 5
Achievement Tests | 4
Grade 4 | 4
Language Tests | 4

Author
Alonzo, Julie | 3
Kamata, Akihito | 3
Nese, Joseph F. T. | 3
Kahn, Josh | 2
Bailey, Kathleen M., Ed. | 1
Ben Seipel | 1
Bennett, Randy Elliot | 1
Biancarosa, Gina | 1
Bradley J. Ungurait | 1
Carlson, Sarah E. | 1
Carr, Nathan T. | 1

Audience
Policymakers | 1

Laws, Policies, & Programs
Elementary and Secondary… | 1
No Child Left Behind Act 2001 | 1
Bradley J. Ungurait – ProQuest LLC, 2021
Advancements in technology and computer-based testing have allowed for greater flexibility in assessing examinee knowledge on large-scale, high-stakes assessments. Through computer-based delivery, cognitive abilities and skills can be assessed effectively and cost-efficiently, and tests can measure domains that are difficult or even impossible to measure with…
Descriptors: Computer Assisted Testing, Evaluation Methods, Scoring, Student Evaluation
Magliano, Joseph P.; Lampi, Jodi P.; Ray, Melissa; Chan, Greta – Grantee Submission, 2020
Coherent mental models for successful comprehension require inferences that establish semantic "bridges" between discourse constituents and "elaborations" that incorporate relevant background knowledge. While it is established that individual differences in the extent to which postsecondary students engage in these processes…
Descriptors: Reading Comprehension, Reading Strategies, Inferences, Reading Tests
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Toroujeni, Seyyed Morteza Hashemi – Education and Information Technologies, 2022
Score interchangeability of Computerized Fixed-Length Linear Testing (henceforth CFLT) and Paper-and-Pencil-Based Testing (henceforth PPBT) has become a controversial issue over the last decade, as technology has meaningfully restructured methods of educational assessment. Given this controversy, various testing guidelines published on…
Descriptors: Computer Assisted Testing, Reading Tests, Reading Comprehension, Scoring
Herget, Debbie; Dalton, Ben; Kinney, Saki; Smith, W. Zachary; Wilson, David; Rogers, Jim – National Center for Education Statistics, 2019
The Progress in International Reading Literacy Study (PIRLS) is an international comparative study of student performance in reading literacy at the fourth grade. PIRLS 2016 marks the fourth iteration of the study, which has been conducted every 5 years since 2001. New to the PIRLS assessment in 2016, ePIRLS provides a computer-based extension to…
Descriptors: Achievement Tests, Grade 4, Reading Achievement, Foreign Countries
Lottridge, Susan; Wood, Scott; Shaw, Dan – Applied Measurement in Education, 2018
This study sought to provide a framework for evaluating machine score-ability of items using a new score-ability rating scale, and to determine the extent to which ratings were predictive of observed automated scoring performance. The study listed and described a set of factors that are thought to influence machine score-ability; these factors…
Descriptors: Program Effectiveness, Computer Assisted Testing, Test Scoring Machines, Scoring
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Mattias – ETS Research Report Series, 2017
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Nese, Joseph F. T.; Alonzo, Julie; Kamata, Akihito – Grantee Submission, 2016
The purpose of this study was to compare traditional oral reading fluency (ORF) measures to a computerized oral reading evaluation (CORE) system that uses speech recognition software. We applied a mixed model approach with two within-subject variables to test the mean words-correct-per-minute (WCPM) score differences and the error rates between: passage length (25, 50, 85,…
Descriptors: Text Structure, Oral Reading, Reading Fluency, Reading Tests
Carlson, Sarah E.; Seipel, Ben; Biancarosa, Gina; Davison, Mark L.; Clinton, Virginia – Grantee Submission, 2019
This demonstration introduces and presents an innovative online cognitive diagnostic assessment, developed to identify the types of cognitive processes that readers use during comprehension; specifically, processes that distinguish between subtypes of struggling comprehenders. Cognitive diagnostic assessments are designed to provide valuable…
Descriptors: Reading Comprehension, Standardized Tests, Diagnostic Tests, Computer Assisted Testing
Nese, Joseph F. T.; Kamata, Akihito; Alonzo, Julie – Grantee Submission, 2015
Assessing reading fluency is critical because it functions as an indicator of comprehension and overall reading achievement. Although theory and research demonstrate the importance of ORF proficiency, traditional ORF assessment practices are lacking as sensitive measures of progress for educators to make instructional decisions. The purpose of…
Descriptors: Oral Reading, Reading Fluency, Accuracy, Reading Rate
Nese, Joseph F. T.; Kahn, Josh; Kamata, Akihito – Grantee Submission, 2017
Despite prevalent use and practical application, the current and standard assessment of oral reading fluency (ORF) presents considerable limitations that reduce its validity in estimating growth and monitoring student progress, including: (a) high cost of implementation; (b) tenuous passage equivalence; and (c) bias, large standard error, and…
Descriptors: Automation, Speech, Recognition (Psychology), Scores
Kahn, Josh; Nese, Joseph F. T.; Alonzo, Julie – Behavioral Research and Teaching, 2016
There is strong theoretical support for oral reading fluency (ORF) as an essential building block of reading proficiency. The current and standard ORF assessment procedure requires that students read aloud a grade-level passage (approximately 250 words) in a one-to-one administration, with the number of words read correctly in 60 seconds constituting their…
Descriptors: Teacher Surveys, Oral Reading, Reading Tests, Computer Assisted Testing
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although international large-scale assessment (ILSA) of education, pioneered by the International Association for the Evaluation of Educational Achievement, is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
Greathouse, Dan; Shaughnessy, Michael F. – Journal of Psychoeducational Assessment, 2016
Whenever a major intelligence or achievement test is revised, there is always renewed interest in the underlying structure of the test as well as a renewed interest in the scoring, administration, and interpretation changes. In this interview, Amy Gabel discusses the most recent revision of the "Wechsler Intelligence Scale for Children-Fifth…
Descriptors: Children, Intelligence Tests, Test Use, Test Validity
Carr, Nathan T.; Xi, Xiaoming – Language Assessment Quarterly, 2010
This article examines how the use of automated scoring procedures for short-answer reading tasks can affect the constructs being assessed. In particular, it highlights ways in which the development of scoring algorithms intended to apply the criteria used by human raters can lead test developers to reexamine and even refine the constructs they…
Descriptors: Scoring, Automation, Reading Tests, Test Format