Showing 151 to 165 of 2,462 results
Peer reviewed
Direct link
Montgomery, Alyssa; Dumont, Ron; Willis, John O. – Journal of Psychoeducational Assessment, 2017
The articles presented in this Special Issue provide evidence for many statistically significant relationships among error scores obtained from the Kaufman Test of Educational Achievement, Third Edition (KTEA-3), across various groups of students with and without disabilities. The data reinforce the importance of examiners looking beyond the…
Descriptors: Evidence, Validity, Predictive Validity, Error Patterns
Peer reviewed
Direct link
Elliott, Julian G.; Resing, Wilma C. M.; Beckmann, Jens F. – Educational Review, 2018
This paper updates a review of dynamic assessment in education by the first author, published in this journal in 2003. It notes that the original review failed to examine the important conceptual distinction between dynamic testing (DT) and dynamic assessment (DA). While both approaches seek to link assessment and intervention, the former is of…
Descriptors: Alternative Assessment, Educational Assessment, Testing, Intervention
Peer reviewed
PDF on ERIC Download full text
Anderson, Robin D.; Curtis, Nicolas A. – Research & Practice in Assessment, 2017
Ten years ago, "Research & Practice in Assessment" (RPA) was born, providing an outlet for assessment-related research. Since that first winter issue, assessment research and practice have evolved. As with many evolutions, the assessment practice evolution is best described as a change of emphasis as opposed to a radical revolution.…
Descriptors: Theory Practice Relationship, Educational Practices, Evaluation Research, Educational Development
Peer reviewed
Direct link
Liu, Chen-Wei; Wang, Wen-Chung – Journal of Educational Measurement, 2017
The examinee-selected-item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Data Analysis
Phelps, Richard P. – Pioneer Institute for Public Policy Research, 2016
The Thomas B. Fordham Institute has released a report, "Evaluating the Content and Quality of Next Generation Assessments," ostensibly an evaluative comparison of four testing programs: the Common Core-derived SBAC and PARCC, ACT's Aspire, and the Commonwealth of Massachusetts' MCAS. Of course, anyone familiar with Fordham's past work…
Descriptors: Evaluation Methods, Tests, Evaluation Research, Standardized Tests
Peer reviewed
Direct link
Lawrenz, Frances P.; Nelson, Amy Grack; Causey, Lauren; Kollmann, Liz Kunz; King, Jean A.; Cohn, Sarah – AERA Online Paper Repository, 2016
This presentation reports the results from a three-year research project examining evaluation capacity building in a complex adaptive system (CAS) and using CAS as a lens for the analysis. The NSF-funded Complex Adaptive Systems as a Model for Network Evaluations (CASNET) project provides new insights on (1) the implications of complexity theory…
Descriptors: Capacity Building, Teamwork, Systems Approach, Inquiry
Hosp, John L.; Ford, Jeremy W.; Huddle, Sally M.; Hensley, Kiersten K. – Assessment for Effective Intervention, 2018
Replication is a foundation of the development of a knowledge base in an evidence-based field such as education. This study includes two direct replications of Hosp, Hensley, Huddle, and Ford, which found evidence of criterion-related validity of curriculum-based measurement (CBM) for reading and mathematics with postsecondary students with…
Descriptors: Replication (Evaluation), Evaluation Research, Curriculum Based Assessment, Developmental Disabilities
Peer reviewed
Direct link
Rootman-le Grange, Ilse; Blackie, Margaret A. L. – Chemistry Education Research and Practice, 2018
The challenge of supporting the development of meaningful learning is prevalent in chemistry education research. One of the core activities used in the learning process is assessment. The aim of this paper is to illustrate how the semantics dimension of Legitimation Code Theory can be a helpful tool to critique the quality of assessments and…
Descriptors: Chemistry, Educational Research, Educational Quality, Educational Assessment
Peer reviewed
PDF on ERIC Download full text
Nejad, Ali Mansouri; Pakdel, Farhad; Khansir, Ali Akbar – Educational Process: International Journal, 2019
The existing gap between research and practice in language testing has posed a huge challenge to language teachers. In particular, this study intended to examine language testing research and classroom testing activities for their degree of interaction from Iranian EFL teachers' points of view. The analysis drew on the questionnaire developed by…
Descriptors: English (Second Language), Second Language Instruction, Language Teachers, Teacher Attitudes
Peer reviewed
Direct link
Stringer, Phil – Educational Review, 2018
This paper reports on what has happened since Elliott ("Dynamic Assessment in Educational Settings: Realising Potential," 2003) in those applications of dynamic assessment that he considered. There continue to be two broad applications, one largely researcher led and the other largely practitioner led, although there are examples of…
Descriptors: Alternative Assessment, Educational Assessment, Research and Development, Theory Practice Relationship
Peer reviewed
PDF on ERIC Download full text
Dogan, Enis – Practical Assessment, Research & Evaluation, 2018
Several large scale assessments include student, teacher, and school background questionnaires. Results from such questionnaires can be reported for each item separately, or as indices based on aggregation of multiple items into a scale. Interpreting scale scores is not always an easy task though. In disseminating results of achievement tests, one…
Descriptors: Rating Scales, Benchmarking, Questionnaires, Achievement Tests
Peer reviewed
Direct link
Roberts, Darby – New Directions for Institutional Research, 2015
This chapter explores the opportunities and challenges of using direct methods to measure co-curricular learning.
Descriptors: Outcome Measures, Educational Opportunities, Performance Factors, Evaluation Methods
Peer reviewed
Direct link
Tranquist, Joakim – American Journal of Evaluation, 2015
In the vast evaluation literature, there are numerous accounts describing the emergence of the field of evaluation. However, texts on evaluation history often describe how structural conditions for conducting evaluation have changed over time, often from an American perspective. Inspired by the Oral History Team, the purpose of this article is to…
Descriptors: Foreign Countries, Oral History, Evaluation Research, Researchers
Peer reviewed
Direct link
Liu, Xiaofeng Steven – Measurement and Evaluation in Counseling and Development, 2015
Researchers who need to explain treatment effects to laypeople can translate Cohen's effect size (standardized mean difference) to a common language effect size--a probability of a random observation from one population being larger than a random observation from the other population. This common language effect size can be extended to represent…
Descriptors: Effect Size, Outcomes of Treatment, Language Usage, Probability
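The common language effect size summarized in the abstract above has a simple closed form under the usual assumptions of normality and equal variances: it equals Φ(d/√2), where d is Cohen's standardized mean difference and Φ is the standard normal CDF (McGraw & Wong, 1992). A minimal sketch of that translation (the function name is illustrative, not taken from the article):

```python
from statistics import NormalDist

def common_language_effect_size(d: float) -> float:
    """Probability that a random observation from population 1 exceeds
    a random observation from population 2, given Cohen's d.

    Assumes both populations are normal with equal variances, so the
    difference of two independent draws has standard deviation sqrt(2)
    and the probability is Phi(d / sqrt(2)).
    """
    return NormalDist().cdf(d / 2 ** 0.5)

# A "medium" effect of d = 0.5 translates to a probability of roughly
# 0.64 that a randomly chosen treated observation exceeds a randomly
# chosen control observation; d = 0 gives exactly 0.5 (no effect).
```

This is the kind of translation the article argues helps when explaining treatment effects to laypeople: a probability statement is easier to grasp than a standardized mean difference.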
Peer reviewed
Direct link
Rodham, K.; Fox, F.; Doran, N. – International Journal of Social Research Methodology, 2015
Typically, authors explain how they conduct interpretative phenomenological analysis (IPA) but fail to explain how they ensured that their analytical process was trustworthy. For example, a minority mention that they 'reached consensus' after having engaged in a shared analysis of the data, but do not explain "how" they did so. In this…
Descriptors: Credibility, Group Unity, Phenomenology, Evaluation Research