Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 6
Descriptor
Evaluation Methods: 8
Accountability: 4
Student Evaluation: 4
Item Response Theory: 3
Models: 3
Scores: 3
Adaptive Testing: 2
Computer Assisted Testing: 2
Correlation: 2
Critical Thinking: 2
Educational Testing: 2
Source
Educational Testing Service: 8
Author
Davey, Tim: 2
Stone, Elizabeth: 2
Alexiou, Jon J.: 1
Barton, Paul E.: 1
Cook, Linda: 1
Deane, Paul: 1
Dwyer, Carol A.: 1
Herbert, Erin: 1
Millett, Catherine M.: 1
O'Reilly, Tenaha: 1
Payne, David G.: 1
Publication Type
Reports - Research: 4
Reports - Evaluative: 3
Information Analyses: 1
Education Level
Elementary Secondary Education: 4
Higher Education: 2
Grade 8: 1
High Schools: 1
Audience
Practitioners: 1
Stone, Elizabeth; Cook, Linda – Educational Testing Service, 2009
Research studies have shown that a smaller percentage of students with learning disabilities participate in state assessments than do their peers without learning disabilities. Furthermore, there is almost always a performance gap between these groups of students on these assessments. It is important to evaluate whether a performance gap on a…
Descriptors: Learning Disabilities, State Standards, Educational Testing, Science Tests
Stone, Elizabeth; Davey, Tim – Educational Testing Service, 2011
There has been an increased interest in developing computer-adaptive testing (CAT) and multistage assessments for K-12 accountability assessments. The move to adaptive testing has been met with some resistance by those in the field of special education who express concern about routing of students with divergent profiles (e.g., some students with…
Descriptors: Disabilities, Adaptive Testing, Accountability, Computer Assisted Testing
Rijmen, Frank – Educational Testing Service, 2009
Three multidimensional item response theory (IRT) models for testlet-based tests are described. In the bifactor model (Gibbons & Hedeker, 1992), each item measures a general dimension in addition to a testlet-specific dimension. The testlet model (Bradlow, Wainer, & Wang, 1999) is a bifactor model in which the loadings on the specific dimensions…
Descriptors: Item Response Theory, Models, Graphs, Comparative Analysis
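The bifactor structure described in this abstract is commonly written as a two-parameter logistic model in which each item loads on a general dimension and on the dimension specific to its testlet. A minimal sketch of that item response function follows; the notation is assumed for illustration and is not taken from the cited papers:

P(X_i = 1 \mid \theta_g, \theta_{d(i)}) = \frac{1}{1 + \exp\!\left[-\left(a_{ig}\,\theta_g + a_{i,d(i)}\,\theta_{d(i)} - b_i\right)\right]}

Here \theta_g is the general ability, \theta_{d(i)} is the ability specific to the testlet d(i) containing item i, and a_{ig}, a_{i,d(i)}, and b_i are the item's discrimination and difficulty parameters.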
Deane, Paul – Educational Testing Service, 2011
This paper presents a socio-cognitive framework for connecting writing pedagogy and writing assessment with modern social and cognitive theories of writing. It focuses on providing a general framework that highlights the connections between writing competency and other literacy skills; identifies key connections between literacy instruction,…
Descriptors: Writing (Composition), Writing Evaluation, Writing Tests, Cognitive Ability
A Culture of Evidence: An Evidence-Centered Approach to Accountability for Student Learning Outcomes
Millett, Catherine M.; Payne, David G.; Dwyer, Carol A.; Stickler, Leslie M.; Alexiou, Jon J. – Educational Testing Service, 2008
This paper presents a framework that institutions of higher education can use to improve, revise and introduce comprehensive systems for the collection and dissemination of information on student learning outcomes. For faculty and institutional leaders grappling with the many issues and nuances inherent in assessing student learning, the framework…
Descriptors: Higher Education, Educational Testing, Accountability, Outcomes of Education
O'Reilly, Tenaha; Sheehan, Kathleen M. – Educational Testing Service, 2009
This paper presents the rationale and research base for a reading competency model designed to guide the development of cognitively based assessment of reading comprehension. The model was developed from a detailed review of the cognitive research on reading and learning and a review of state standards for language arts. A survey of the literature…
Descriptors: Reading Skills, Reading Comprehension, Speech, State Standards
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – Educational Testing Service, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Adaptive Testing, Test Items, Computation, Context Effect
Barton, Paul E. – Educational Testing Service, 2004
The purpose of this report is to help in the evolution of these systems by examining the measures used, including, but not limited to, tests. The author asks: Are these the best measures? Are they used right? Are there other measures that should be employed? It is the model of reform itself that is examined, and the report does not address…
Descriptors: Academic Achievement, Educational Change, Academic Standards, Student Evaluation