Showing all 15 results
Peer reviewed
PDF on ERIC (download full text)
Li, Feifei – ETS Research Report Series, 2017
An information-correction method for testlet-based tests is introduced. This method takes advantage of both generalizability theory (GT) and item response theory (IRT). The measurement error for the examinee proficiency parameter is often underestimated when a unidimensional conditional-independence IRT model is specified for a testlet dataset. By…
Descriptors: Item Response Theory, Generalizability Theory, Tests, Error of Measurement
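The underestimation problem this report targets can be seen with a small simulation: when items within a testlet share variance beyond the common proficiency, reliability computed as if the items were independent is inflated, which is the flip side of understating measurement error. The sketch below is not the report's GT/IRT information correction; it is a simpler coefficient-alpha comparison on simulated data, offered only to illustrate the local-dependence effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_testlets, items_per_testlet = 2000, 6, 5

theta = rng.normal(size=(n_persons, 1))                                # person proficiency
testlet_effect = rng.normal(scale=0.8, size=(n_persons, n_testlets))   # shared within-testlet nuisance

# Dichotomous responses whose probability depends on theta plus the testlet effect.
probs = 1 / (1 + np.exp(-(theta + np.repeat(testlet_effect, items_per_testlet, axis=1))))
X = (rng.uniform(size=probs.shape) < probs).astype(float)              # persons x (testlets*items)

def cronbach_alpha(parts):
    """Coefficient alpha over the columns of `parts` (persons x parts)."""
    k = parts.shape[1]
    item_var = parts.var(axis=0, ddof=1).sum()
    total_var = parts.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

alpha_items = cronbach_alpha(X)                                        # treats all items as independent
testlet_sums = X.reshape(n_persons, n_testlets, items_per_testlet).sum(axis=2)
alpha_testlets = cronbach_alpha(testlet_sums)                          # respects the testlet structure

print(f"alpha ignoring testlets: {alpha_items:.3f}")
print(f"alpha on testlet scores: {alpha_testlets:.3f}  (lower => more acknowledged error)")
```

With these settings, the item-level alpha should come out noticeably higher than the testlet-level value, mirroring the underestimated error the report addresses.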
Peer reviewed
Direct link
Arnesen, Anne; Braeken, Johan; Baker, Scott; Meek-Hansen, Wilhelm; Ogden, Terje; Melby-Lervåg, Monica – Reading Research Quarterly, 2017
This study investigated an adaptation of the Oral Reading Fluency (ORF) measure of the Dynamic Indicators of Basic Early Literacy Skills into a European context for the Norwegian language, which has a more transparent orthography than English. Second-order latent growth curve modeling was used to examine the longitudinal measurement invariance of…
Descriptors: Oral Reading, Reading Fluency, Elementary School Students, Longitudinal Studies
Phillips, Gary W. – American Institutes for Research, 2014
This paper describes a statistical linking between the 2011 National Assessment of Educational Progress (NAEP) in Grade 4 reading and the 2011 Progress in International Reading Literacy Study (PIRLS) in Grade 4 reading. The primary purpose of the linking study is to obtain a statistical comparison between NAEP (a national assessment) and PIRLS (an…
Descriptors: National Competency Tests, Reading Achievement, Comparative Analysis, Measures (Individuals)
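The NAEP-PIRLS linking involved its own design and error analysis, which the snippet below does not reproduce. As a generic illustration of what a statistical linking does, a mean-sigma (linear) transformation places scores from one scale onto another by matching means and standard deviations; every number here is invented, not an actual NAEP or PIRLS value.

```python
import numpy as np

def mean_sigma_link(x, mu_x, sd_x, mu_y, sd_y):
    """Place a score x from scale X onto scale Y by matching means and SDs."""
    return mu_y + sd_y * (x - mu_x) / sd_x

# Hypothetical summary statistics for the two scales.
naep_mean, naep_sd = 220.0, 35.0
pirls_mean, pirls_sd = 540.0, 70.0

naep_scores = np.array([185.0, 220.0, 255.0])
linked = mean_sigma_link(naep_scores, naep_mean, naep_sd, pirls_mean, pirls_sd)
print(linked)   # the NAEP scores expressed on the (hypothetical) PIRLS scale
```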
Powers, Sonya; Li, Dongmei; Suh, Hongwook; Harris, Deborah J. – ACT, Inc., 2016
ACT reporting categories and ACT Readiness Ranges are new features added to the ACT score reports starting in fall 2016. For each reporting category, the number correct score, the maximum points possible, the percent correct, and the ACT Readiness Range, along with an indicator of whether the reporting category score falls within the Readiness…
Descriptors: Scores, Classification, College Entrance Examinations, Error of Measurement
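The reporting-category features described above boil down to a handful of per-category quantities. A minimal sketch of how such a summary might be assembled; the category name, cut points, and scores are hypothetical, not ACT values:

```python
from dataclasses import dataclass

@dataclass
class ReportingCategory:
    name: str
    number_correct: int
    max_points: int
    readiness_low: int       # lower bound of the (hypothetical) Readiness Range
    readiness_high: int      # upper bound

    @property
    def percent_correct(self) -> float:
        return 100.0 * self.number_correct / self.max_points

    @property
    def in_readiness_range(self) -> bool:
        return self.readiness_low <= self.number_correct <= self.readiness_high

cat = ReportingCategory("Geometry", number_correct=9, max_points=12,
                        readiness_low=8, readiness_high=12)
print(f"{cat.name}: {cat.number_correct}/{cat.max_points} "
      f"({cat.percent_correct:.0f}%), in Readiness Range: {cat.in_readiness_range}")
```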
Peer reviewed
Direct link
Taylor, Melinda Ann; Pastor, Dena A. – Applied Measurement in Education, 2013
Although federal regulations require testing students with severe cognitive disabilities, there is little guidance regarding how technical quality should be established. Documenting the reliability of scores for alternate assessments is a known challenge. Typical measures of reliability do little to model multiple sources…
Descriptors: Generalizability Theory, Alternative Assessment, Test Reliability, Scores
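For readers unfamiliar with the generalizability-theory machinery referenced here, a single-facet persons-by-items G study can be run directly from two-way ANOVA mean squares. The sketch below uses simulated scores, not the alternate-assessment data in the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n_p, n_i = 200, 10                                   # persons, items (fully crossed p x i design)

# Simulated scores: person effect + item effect + residual.
scores = (rng.normal(scale=1.0, size=(n_p, 1)) +
          rng.normal(scale=0.5, size=(1, n_i)) +
          rng.normal(scale=0.8, size=(n_p, n_i)))

grand = scores.mean()
person_means = scores.mean(axis=1, keepdims=True)
item_means = scores.mean(axis=0, keepdims=True)

ms_p = n_i * ((person_means - grand) ** 2).sum() / (n_p - 1)
ms_i = n_p * ((item_means - grand) ** 2).sum() / (n_i - 1)
ms_res = ((scores - person_means - item_means + grand) ** 2).sum() / ((n_p - 1) * (n_i - 1))

var_p = (ms_p - ms_res) / n_i                         # person (universe-score) variance
var_i = (ms_i - ms_res) / n_p                         # item variance
var_res = ms_res                                      # residual (p x i interaction plus error)

g_coef = var_p / (var_p + var_res / n_i)              # relative G coefficient for n_i items
print(f"var_p={var_p:.3f}, var_i={var_i:.3f}, var_res={var_res:.3f}, G={g_coef:.3f}")
```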
Peer reviewed
Direct link
Sideridis, Georgios D.; Tsaousis, Ioannis; Al-harbi, Khaleel A. – Journal of Psychoeducational Assessment, 2015
The purpose of the present study was to extend the model of measurement invariance by simultaneously estimating invariance across multiple populations in the dichotomous instrument case using multi-group confirmatory factor analytic and multiple indicator multiple causes (MIMIC) methodologies. Using the Arabic version of the General Aptitude Test…
Descriptors: Semitic Languages, Aptitude Tests, Error of Measurement, Factor Analysis
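The multi-group CFA and MIMIC analyses in the article require an SEM package and are not reproduced here. As a lighter-weight, conceptually related screen, logistic-regression DIF regresses each item response on the total score plus a group indicator; a sizable group coefficient flags an item whose functioning may not be invariant across populations. The data below are simulated, and the technique is a stand-in, not the authors' methodology.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, n_items = 1000, 8
group = rng.integers(0, 2, size=n)                     # 0/1 population indicator
theta = rng.normal(size=n)

# Simulated responses; item 0 is deliberately harder for group 1 (uniform DIF).
difficulty = np.linspace(-1, 1, n_items)
dif_shift = np.zeros(n_items)
dif_shift[0] = 0.8
logits = theta[:, None] - difficulty[None, :] - dif_shift[None, :] * group[:, None]
X = (rng.uniform(size=logits.shape) < 1 / (1 + np.exp(-logits))).astype(int)

total = X.sum(axis=1)
for j in range(n_items):
    design = sm.add_constant(np.column_stack([total, group]))   # intercept, total score, group
    fit = sm.Logit(X[:, j], design).fit(disp=0)
    print(f"item {j}: group coefficient = {fit.params[2]:+.2f}, p = {fit.pvalues[2]:.3f}")
```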
Peer reviewed
Direct link
Oh, Hyeonjoo; Moses, Tim – Journal of Educational Measurement, 2012
This study investigated differences between two approaches to chained equipercentile (CE) equating (one- and bi-direction CE equating) in nearly equal groups and relatively unequal groups. In one-direction CE equating, the new form is linked to the anchor in one sample of examinees and the anchor is linked to the reference form in the other…
Descriptors: Equated Scores, Statistical Analysis, Comparative Analysis, Differences
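The mechanics of chained equipercentile equating can be sketched in a few lines: link the new form to the anchor in one sample, link the anchor to the reference form in the other sample, and compose the two conversions. The version below is unsmoothed and uses made-up score distributions, so it illustrates the chaining only, not the study's estimation or comparison details.

```python
import numpy as np

def equipercentile(from_scores, to_scores):
    """Return a function mapping a score on `from_scores`' scale to the score on
    `to_scores`' scale with the same percentile rank (simple, unsmoothed)."""
    from_sorted = np.sort(from_scores)
    to_sorted = np.sort(to_scores)

    def convert(x):
        pr = np.searchsorted(from_sorted, x, side="right") / len(from_sorted)
        return np.quantile(to_sorted, np.clip(pr, 0.0, 1.0))
    return convert

rng = np.random.default_rng(3)
# Sample 1 takes new form X and anchor V; sample 2 takes anchor V and reference form Y.
x1, v1 = rng.normal(50, 10, 2000), rng.normal(20, 4, 2000)
v2, y2 = rng.normal(21, 4, 2000), rng.normal(52, 9, 2000)

x_to_v = equipercentile(x1, v1)        # link X to the anchor in sample 1
v_to_y = equipercentile(v2, y2)        # link the anchor to Y in sample 2

x_score = 60.0
print(f"X={x_score} -> anchor {x_to_v(x_score):.1f} -> Y {v_to_y(x_to_v(x_score)):.1f}")
```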
Stephens, Christopher Neil – ProQuest LLC, 2012
Augmentation procedures are designed to provide better estimates for a given test or subtest through the use of collateral information. The main purpose of this dissertation was to use Haberman's and Wainer's augmentation procedures on a large-scale, standardized achievement test to understand the relationship between reliability and…
Descriptors: Psychometrics, Error of Measurement, Scores, Reliability
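Haberman's and Wainer's augmentation procedures are not reproduced here. As a much simpler illustration of the underlying idea, borrowing strength to stabilize an unreliable subscore, Kelley's classical regressed estimate shrinks an observed subscore toward the group mean in proportion to its unreliability; treat it as a stand-in, not the dissertation's method.

```python
def kelley_regressed_subscore(observed, subscore_mean, reliability):
    """Kelley's regressed estimate of a true subscore: shrink the observed
    subscore toward the group mean in proportion to its unreliability."""
    return reliability * observed + (1 - reliability) * subscore_mean

# Hypothetical numbers: a short subscale with modest reliability.
print(kelley_regressed_subscore(observed=14, subscore_mean=10, reliability=0.6))  # 12.4
```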
Peer reviewed
Direct link
Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James – Journal of Educational and Behavioral Statistics, 2013
Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…
Descriptors: Accountability, Educational Research, Educational Testing, Error of Measurement
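Two textbook quantities make the measurement-error concern concrete: the standard error of measurement implied by a reliability coefficient, and the attenuation of an observed correlation between two error-prone measures. The numbers below are purely illustrative.

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical SEM: the typical size of the error component in an observed score."""
    return sd * math.sqrt(1 - reliability)

def disattenuated_correlation(r_xy, rel_x, rel_y):
    """Correct an observed correlation for measurement error in both variables."""
    return r_xy / math.sqrt(rel_x * rel_y)

print(standard_error_of_measurement(sd=15.0, reliability=0.90))      # ~4.74 score points
print(disattenuated_correlation(r_xy=0.45, rel_x=0.85, rel_y=0.80))  # ~0.55
```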
Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we present the results of a reliability study, conducted in the spring of 2011, of the seventh-grade multiple-choice reading comprehension measures available on the easyCBM learning system. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Descriptors: Reading Comprehension, Testing Programs, Statistical Analysis, Grade 7
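The split-half calculation mentioned in the report is straightforward to reproduce on any persons-by-items matrix of scored responses; the sketch below runs an odd-even split with the Spearman-Brown correction on simulated dichotomous data rather than the easyCBM data.

```python
import numpy as np

def split_half_reliability(responses):
    """Odd-even split-half reliability with the Spearman-Brown correction.
    `responses` is a persons x items array of scored (0/1) item responses."""
    odd = responses[:, 0::2].sum(axis=1)
    even = responses[:, 1::2].sum(axis=1)
    r_halves = np.corrcoef(odd, even)[0, 1]
    return 2 * r_halves / (1 + r_halves)     # step up to full test length

rng = np.random.default_rng(4)
theta = rng.normal(size=(500, 1))
probs = 1 / (1 + np.exp(-(theta - np.linspace(-1.5, 1.5, 20))))
X = (rng.uniform(size=probs.shape) < probs).astype(int)
print(f"split-half reliability: {split_half_reliability(X):.3f}")
```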
Peer reviewed
PDF on ERIC (download full text)
Ghafournia, Narjes; Afghari, Akbar – English Language Teaching, 2013
The study examined the possible interaction among the use of cognitive test-taking strategies, reading proficiency, and reading comprehension test performance for Iranian postgraduate students studying English as a foreign language. The study also probed the extent to which the participants' test performance was related to the use of certain…
Descriptors: Foreign Countries, Reading Comprehension, Reading Tests, English (Second Language)
Alonzo, Julie; Liu, Kimy; Tindal, Gerald – Behavioral Research and Teaching, 2008
This technical report describes the development of reading comprehension assessments designed for use as progress monitoring measures appropriate for 2nd Grade students. The creation, piloting, and technical adequacy of the measures are presented. The following are appended: (1) Item Specifications for MC [Multiple Choice] Comprehension - Passage…
Descriptors: Reading Comprehension, Reading Tests, Grade 2, Elementary School Students
Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2008
This technical report describes the development and piloting of reading comprehension measures designed for use by fifth-grade students as part of an online progress monitoring assessment system, http://easycbm.com. Each comprehension measure comprises an original work of narrative fiction approximately 1,500 words in length followed by 20…
Descriptors: Reading Comprehension, Reading Tests, Grade 5, Multiple Choice Tests
Rentz, R. Robert; Bashaw, W. L. – 1975
To determine whether Rasch Model procedures have any utility for equating pre-existing tests, this study reanalyzed the data from the equating phase of the Anchor Test Study, which used a variety of equipercentile and linear model methods. The tests involved included seven reading test batteries, each having from one to three levels and two…
Descriptors: Comparative Analysis, Elementary Education, Equated Scores, Error of Measurement
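The study fitted the Rasch model to operational Anchor Test Study data; the sketch below only illustrates the common-item linking step, using a crude PROX-style difficulty estimate (the ability-spread correction is omitted) and a mean-shift linking constant computed on the anchor items. Item difficulties and sample sizes are invented.

```python
import numpy as np

def prox_difficulties(responses):
    """Rough PROX-style Rasch item difficulties: centered log-odds of an
    incorrect response, so each form is expressed on its own scale."""
    p = responses.mean(axis=0).clip(0.01, 0.99)        # proportion correct per item
    b = np.log((1 - p) / p)
    return b - b.mean()

def simulate_form(theta, difficulties, rng):
    logits = theta[:, None] - difficulties[None, :]
    return (rng.uniform(size=logits.shape) < 1 / (1 + np.exp(-logits))).astype(int)

rng = np.random.default_rng(5)
anchor_b = np.array([-0.5, -0.25, 0.0, 0.25, 0.5])      # common (anchor) items on both forms
unique_a = np.array([-1.6, -1.2, -0.8, -0.4, 0.0, 0.4, 0.8])
unique_b = unique_a + 0.8                                # form B's unique items are harder

resp_a = simulate_form(rng.normal(size=2000), np.concatenate([unique_a, anchor_b]), rng)
resp_b = simulate_form(rng.normal(size=2000), np.concatenate([unique_b, anchor_b]), rng)

b_a, b_b = prox_difficulties(resp_a), prox_difficulties(resp_b)
anchor = slice(len(unique_a), None)                      # anchor items sit at the end of each form
link = (b_a[anchor] - b_b[anchor]).mean()                # shift that places form B on form A's scale
print(f"linking constant: {link:.3f}")                   # roughly +0.47 for this setup
b_b_on_a_scale = b_b + link
```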
Rentz, R. Robert; Bashaw, W. L. – 1975
This volume contains tables of item analysis results obtained by following procedures associated with the Rasch Model for those reading tests used in the Anchor Test Study. Appendix I gives the test names and their corresponding analysis code numbers. Section I (Basic Item Analyses) presents data for the item analysis of each test in a two part…
Descriptors: Comparative Analysis, Elementary Education, Equated Scores, Error of Measurement