Showing 1 to 15 of 31 results
Peer reviewed
Kylie Anglin – Society for Research on Educational Effectiveness, 2022
Background: For decades, education researchers have relied on the work of Campbell, Cook, and Shadish to help guide their thinking about valid impact estimates in the social sciences (Campbell & Stanley, 1963; Shadish et al., 2002). The foundation of this work is the "validity typology" and its associated "threats to…
Descriptors: Artificial Intelligence, Educational Technology, Technology Uses in Education, Validity
Peer reviewed
Rutten, Roel – Sociological Methods & Research, 2022
Applying qualitative comparative analysis (QCA) to large Ns relaxes researchers' case-based knowledge. This is problematic because causality in QCA is inferred from a dialogue between empirical, theoretical, and case-based knowledge. The lack of case-based knowledge may be remedied by various robustness tests. However, being a case-based method,…
Descriptors: Comparative Analysis, Correlation, Case Studies, Attribution Theory
Peer reviewed
Braun, Henry – British Journal of Educational Psychology, 2019
Background: There is unrealized potential in higher education for greater use of performance assessment, particularly in support of teaching and learning: Well-designed performance tasks can elicit evidence regarding what students know and can do with respect to complex learning objectives. At the same time, there is some pressure, at least in the…
Descriptors: Performance Based Assessment, Higher Education, Test Format, Standardized Tests
Peer reviewed
Ropovik, Ivan; Greger, David – Psychology in the Schools, 2023
Motivation and self-concept count among the educationally most relevant factors and the evaluation of many educational interventions requires their valid measurement. The present study examined the psychometric properties of a shortened version of the Students' Approaches to Learning questionnaire measuring 10 distinct motivational and…
Descriptors: Measurement, Learning Motivation, Intervention, Self Concept
Peer reviewed
PDF on ERIC
Gencer, Ayse Savran; Dogan, Hilmi – International Journal of Assessment Tools in Education, 2020
Critical thinking has been one of the 21st-century skills consistently associated with students' future career advancement as a positive student outcome of STEM education. The aim of the study is to develop and validate science critical thinking skill instruments to assess the improvement in the subject of living organisms and force and friction…
Descriptors: Critical Thinking, Thinking Skills, STEM Education, Career Development
Peer reviewed
Ercikan, Kadriye; Oliveri, María Elena – Applied Measurement in Education, 2016
Assessing complex constructs such as those discussed under the umbrella of 21st century constructs highlights the need for a principled assessment design and validation approach. In our discussion, we made a case for three considerations: (a) taking construct complexity into account across various stages of assessment development such as the…
Descriptors: Evaluation Methods, Test Construction, Design, Scaling
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 2013
To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based…
Descriptors: Test Interpretation, Validity, Scores, Test Use
Peer reviewed
Harrison, Judith R.; State, Talida M.; Evans, Steven W.; Schamberg, Terah – Journal of Positive Behavior Interventions, 2016
The purpose of this study was to evaluate the construct and predictive validity of scores on a measure of social acceptability of class-wide and individual student intervention, the School Intervention Rating Form (SIRF), with high school teachers. Utilizing scores from 158 teachers, exploratory factor analysis revealed a three-factor (i.e.,…
Descriptors: Construct Validity, Predictive Validity, Likert Scales, Intervention
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Assessment for Effective Intervention, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Grantee Submission, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Peer reviewed
Kane, Michael – Language Testing, 2012
The argument-based approach to validation involves two steps; specification of the proposed interpretations and uses of the test scores as an interpretive argument, and the evaluation of the plausibility of the proposed interpretive argument. More ambitious interpretations and uses tend to involve an extended network of inferences and assumptions…
Descriptors: Testing, Language Tests, Inferences, Test Validity
Peer reviewed
Tieso, Carol L.; Hutcheson, Virginia H. – Planning and Changing, 2014
The authors of this article review the development and discuss potential uses for a new instrument that evolved from follow-up research conducted after completion of a five-year study of innovative curricular and instructional practices. The instrument is A Stakeholder's Perceptions of Innovative Reform Efforts (ASPIRE). The primary purpose of…
Descriptors: Probability, Educational Change, Success, Teacher Surveys
Peer reviewed
Newton, Paul E. – Measurement: Interdisciplinary Research and Perspectives, 2012
The 1999 "Standards for Educational and Psychological Testing" defines validity as the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests. Although quite explicit, there are ways in which this definition lacks precision, consistency, and clarity. The history of validity has taught us…
Descriptors: Evidence, Validity, Educational Testing, Risk
Peer reviewed
Kettler, Ryan J. – Review of Research in Education, 2015
This chapter introduces theory that undergirds the role of testing adaptations in assessment, provides examples of item modifications and testing accommodations, reviews research relevant to each, and introduces a new paradigm that incorporates opportunity to learn (OTL), academic enablers, testing adaptations, and inferences that can be made from…
Descriptors: Meta Analysis, Literature Reviews, Testing, Testing Accommodations
Peer reviewed
Crisp, Victoria; Shaw, Stuart – Educational Studies, 2012
Validity is a central principle of assessment relating to the appropriateness of the uses and interpretations of test results. Usually, one of the inferences that we wish to make is that the score reflects the extent of a student's learning in a given domain. Thus, it is important to establish that the assessment tasks elicit performances that…
Descriptors: Test Results, Evaluation Methods, Construct Validity, Validity