Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 6 |
Since 2006 (last 20 years) | 278 |
Descriptor
Reading Tests | 293 |
Statistical Analysis | 293 |
Comparative Analysis | 271 |
Reading | 265 |
Scores | 264 |
Reading Achievement | 260 |
Public Schools | 259 |
Racial Differences | 258 |
Ethnic Groups | 257 |
Gender Differences | 257 |
Elementary School Students | 253 |
Author
Alonzo, Julie | 13 |
Tindal, Gerald | 11 |
Bianchini, John C. | 9 |
Loret, Peter G. | 9 |
Lai, Cheng-Fei | 8 |
Park, Bitnara Jasmine | 6 |
Irvin, P. Shawn | 5 |
Anderson, Daniel | 4 |
Nese, Joseph F. T. | 3 |
Foorman, Barbara R. | 2 |
Jamgochian, Elisa | 2 |
Publication Type
Numerical/Quantitative Data | 293 |
Reports - Evaluative | 271 |
Reports - Research | 9 |
Guides - Non-Classroom | 2 |
Guides - General | 1 |
Reports - Descriptive | 1 |
Speeches/Meeting Papers | 1 |
Audience
Administrators | 1 |
Practitioners | 1 |
Researchers | 1 |
Location
California | 11 |
Florida | 11 |
Texas | 9 |
Illinois | 8 |
Massachusetts | 8 |
North Carolina | 8 |
Ohio | 8 |
District of Columbia | 7 |
Georgia | 7 |
Kentucky | 7 |
Maryland | 7 |
Laws, Policies, & Programs
No Child Left Behind Act 2001 | 1 |
Alonzo, Julie; Anderson, Daniel – Behavioral Research and Teaching, 2018
In response to a request for additional analyses, in particular reporting confidence intervals around the results, we re-analyzed the data from prior studies. This supplementary report presents the results of the additional analyses addressing classification accuracy, reliability, and criterion-related validity evidence. For ease of reference, we…
Descriptors: Curriculum Based Assessment, Computation, Statistical Analysis, Accuracy
Alonzo, Julie; Anderson, Daniel – Behavioral Research and Teaching, 2018
In response to a request for additional analyses, in particular reporting confidence intervals around the results, we re-analyzed the data from prior studies. This supplementary report presents the results of the additional analyses addressing classification accuracy, reliability, and criterion-related validity evidence. For ease of reference, we…
Descriptors: Curriculum Based Assessment, Computation, Statistical Analysis, Classification
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Warner-Griffin, Catharine; Liu, Huili; Tadler, Chrystine; Herget, Debbie; Dalton, Ben – National Center for Education Statistics, 2017
The Progress in International Reading Literacy Study (PIRLS) is an international assessment of student performance in reading literacy at the fourth grade. PIRLS measures students in the fourth year of formal schooling because this is typically when students' learning transitions from a focus on "learning to read" to a focus on…
Descriptors: Foreign Countries, Achievement Tests, Grade 4, International Assessment
Powers, Sonya; Li, Dongmei; Suh, Hongwook; Harris, Deborah J. – ACT, Inc., 2016
ACT reporting categories and ACT Readiness Ranges are new features added to the ACT score reports starting in fall 2016. For each reporting category, the number correct score, the maximum points possible, the percent correct, and the ACT Readiness Range, along with an indicator of whether the reporting category score falls within the Readiness…
Descriptors: Scores, Classification, College Entrance Examinations, Error of Measurement
Reardon, Sean F.; Kalogrides, Demetra; Shores, Ken – Stanford Center for Education Policy Analysis, 2017
We estimate racial/ethnic achievement gaps in several hundred metropolitan areas and several thousand school districts in the United States using the results of roughly 200 million standardized math and reading tests administered to public school students from 2009-2013. We show that achievement gaps vary substantially, ranging from nearly 0 in…
Descriptors: Racial Differences, Achievement Gap, Scores, Standardized Tests
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris – Florida Center for Reading Research, 2015
The grades K-2 Florida Center for Reading Research (FCRR) Reading Assessment (FRA) consists of computer-adaptive alphabetic and oral language screening tasks that provide a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 40th percentile) on the word reading (in kindergarten) or reading comprehension (in grades…
Descriptors: Reading Instruction, Reading Tests, Kindergarten, Grade 1
Somers, Marie-Andrée; Zhu, Pei; Jacob, Robin; Bloom, Howard – MDRC, 2013
In this paper, we examine the validity and precision of two nonexperimental study designs (NXDs) that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-difference (DD) design. In a CITS design, program impacts are evaluated by looking at whether the treatment group deviates from its…
Descriptors: Research Design, Educational Assessment, Time, Intervals
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris – Florida Center for Reading Research, 2015
The Florida Center for Reading Research (FCRR) Reading Assessment (FRA) consists of computer-adaptive reading comprehension and oral language screening tasks that provide measures to track growth over time, as well as a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 50th percentile) on the reading comprehension…
Descriptors: Elementary School Students, Middle School Students, High School Students, Written Language
Sinharay, Sandip; Haberman, Shelby J.; Jia, Helena – Educational Testing Service, 2011
Standard 3.9 of the "Standards for Educational and Psychological Testing" (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999) demands evidence of model fit when an item response theory (IRT) model is used to make inferences from a data set. We applied two recently…
Descriptors: Item Response Theory, Goodness of Fit, Statistical Analysis, Language Tests
Loveless, Tom – Brookings Institution, 2015
This 2015 Brown Center Report (BCR) represents the fourteenth edition of the series since the first issue was published in 2000. It includes three studies. Like all previous BCRs, the studies explore independent topics but share two characteristics: they are empirical and based on the best evidence available. The studies in this edition are on the…
Descriptors: Common Core State Standards, Academic Achievement, Gender Differences, Reading Achievement
Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Descriptors: Grade 3, Curriculum Based Assessment, Educational Testing, Testing Programs
Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Descriptors: Grade 5, Curriculum Based Assessment, Educational Testing, Testing Programs
Park, Bitnara Jasmine; Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we present the results of a reliability study of the fourth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Descriptors: Grade 4, Curriculum Based Assessment, Educational Testing, Testing Programs
Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Lai, Cheng-Fei; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we present the results of a reliability study of the sixth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Descriptors: Grade 6, Grade 3, Curriculum Based Assessment, Educational Testing