Showing 1 to 15 of 17 results
Peer reviewed
Kylie Anglin – Society for Research on Educational Effectiveness, 2022
Background: For decades, education researchers have relied on the work of Campbell, Cook, and Shadish to help guide their thinking about valid impact estimates in the social sciences (Campbell & Stanley, 1963; Shadish et al., 2002). The foundation of this work is the "validity typology" and its associated "threats to…
Descriptors: Artificial Intelligence, Educational Technology, Technology Uses in Education, Validity
Peer reviewed
Burton, Laura J.; Mazerolle, Stephanie M. – Athletic Training Education Journal, 2011
Context: Instrument validation is an important facet of survey research methods and athletic trainers must be aware of the important underlying principles. Objective: To discuss the process of survey development and validation, specifically the process of construct validation. Background: Athletic training researchers frequently employ the use of…
Descriptors: Athletics, Research Methodology, Construct Validity, Validity
Peer reviewed
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying – Research Quarterly for Exercise and Sport, 2011
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
Descriptors: Sample Size, Monte Carlo Methods, Construct Validity, Validity
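To make the technique this abstract describes concrete, here is a minimal sketch of Monte Carlo power estimation for a two-group comparison. The sample sizes, effect size, and z-approximation below are illustrative assumptions, not details taken from the study itself:

```python
import random
import statistics

def monte_carlo_power(n_per_group, effect_size, n_sims=2000, seed=42):
    """Estimate statistical power by repeated simulation.

    Draws two normal samples whose means differ by `effect_size`
    standard deviations, applies a z-approximation of the two-sample
    t-test at two-sided alpha = 0.05, and returns the fraction of
    simulations in which the null hypothesis is rejected.
    """
    rng = random.Random(seed)
    z_crit = 1.96  # critical value for two-sided alpha = 0.05
    rejections = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        se = ((statistics.variance(a) + statistics.variance(b)) / n_per_group) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_sims
```

Running this for a medium effect (d = 0.5) shows how simulated power rises with sample size, which is exactly the kind of design decision the abstract says Monte Carlo methods support.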
Peer reviewed
Leech, Nancy L.; Dellinger, Amy B.; Brannagan, Kim B.; Tanaka, Hideyuki – Journal of Mixed Methods Research, 2010
The purpose of this article is to demonstrate application of a new framework, the validation framework (VF), to assist researchers in evaluating mixed research studies. Based on an earlier work by Dellinger and Leech, a description of the VF is delineated. Using the VF, three studies from education, health care, and counseling fields are…
Descriptors: Educational Research, Researchers, Research Methodology, Research Reports
Peer reviewed
Bowler, Mark C.; Woehr, David J. – Journal of Vocational Behavior, 2009
Recent Monte Carlo research has illustrated that the traditional method for assessing the construct-related validity of assessment center (AC) post-exercise dimension ratings (PEDRs), an application of confirmatory factor analysis (CFA) to a multitrait-multimethod matrix, produces inconsistent results [Lance, C. E., Woehr, D. J., & Meade, A. W.…
Descriptors: Monte Carlo Methods, Multitrait Multimethod Techniques, Construct Validity, Validity
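The study above evaluates a CFA-based analysis of a multitrait-multimethod (MTMM) matrix. The underlying MTMM logic can be illustrated with a much simpler correlation-based sketch on synthetic data: convergent correlations (same trait, different methods) should exceed same-method, different-trait correlations. All traits, methods, and variance values here are hypothetical:

```python
import random

def pearson(x, y):
    """Pearson correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
n = 1000
# Latent trait scores and method factors for each simulated respondent.
t1 = [rng.gauss(0, 1) for _ in range(n)]
t2 = [rng.gauss(0, 1) for _ in range(n)]
m1 = [rng.gauss(0, 0.4) for _ in range(n)]
m2 = [rng.gauss(0, 0.4) for _ in range(n)]
# Observed scores: trait signal + shared method variance + noise.
obs = {
    ("T1", "M1"): [t + m + rng.gauss(0, 0.5) for t, m in zip(t1, m1)],
    ("T2", "M1"): [t + m + rng.gauss(0, 0.5) for t, m in zip(t2, m1)],
    ("T1", "M2"): [t + m + rng.gauss(0, 0.5) for t, m in zip(t1, m2)],
    ("T2", "M2"): [t + m + rng.gauss(0, 0.5) for t, m in zip(t2, m2)],
}
# Convergent validity: same trait measured by different methods.
convergent = pearson(obs[("T1", "M1")], obs[("T1", "M2")])
# Discriminant check: different traits sharing one method.
same_method = pearson(obs[("T1", "M1")], obs[("T2", "M1")])
print(convergent, same_method)
```

This is the classic Campbell-Fiske comparison, not the CFA approach the paper examines, but it shows what the dimension ratings are expected to satisfy.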
Peer reviewed
Harris, Lois R.; Brown, Gavin T. L. – Practical Assessment, Research & Evaluation, 2010
Structured questionnaires and semi-structured interviews are often used in mixed method studies to generate confirmatory results despite differences in methods of data collection, analysis, and interpretation. A review of 19 questionnaire-interview comparison studies found that consensus and consistency statistics were generally weak between…
Descriptors: Research Methodology, Questionnaires, Interviews, Data Collection
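As a minimal illustration of the consensus and consistency statistics such questionnaire-interview comparison studies report, the sketch below computes percent agreement and Cohen's kappa on hypothetical paired codes (the ratings and scale are invented for illustration):

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of paired ratings that match exactly."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two sets of categorical ratings."""
    n = len(a)
    po = percent_agreement(a, b)          # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical codes: ten respondents rated agree/neutral/disagree,
# once from a questionnaire (q) and once from an interview (i).
q = ["A", "A", "N", "D", "A", "N", "D", "A", "N", "D"]
i = ["A", "N", "N", "D", "A", "N", "A", "A", "D", "D"]
print(percent_agreement(q, i), round(cohens_kappa(q, i), 3))
```

Kappa sits well below raw agreement here, which mirrors the abstract's point that consensus statistics between the two methods are often weaker than they first appear.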
Peer reviewed
Courvoisier, Delphine S.; Nussbeck, Fridtjof W.; Eid, Michael; Geiser, Christian; Cole, David A. – Psychological Assessment, 2008
The analysis of convergent and discriminant validity is an integral part of the construct validation process. Models for analyzing the convergent and discriminant validity have typically been developed for cross-sectional data. There exist, however, only a few approaches for longitudinal data that can be applied for analyzing the construct…
Descriptors: Multitrait Multimethod Techniques, Construct Validity, Validity, Anxiety
Peer reviewed
Whetton, Chris; Twist, Liz; Sainsbury, Marian – British Educational Research Journal, 2007
Hilton (2006) criticises the PIRLS (Progress in International Reading Literacy Study) tests and the survey conduct, raising questions about the validity of international surveys of reading. Her criticisms fall into four broad areas: cultural validity, methodological issues, construct validity and the survey in England. However, her criticisms are…
Descriptors: Research Methodology, International Studies, Construct Validity, Validity
Peer reviewed
Dellinger, Amy B.; Leech, Nancy L. – Journal of Mixed Methods Research, 2007
The primary purpose of this article is to further discussions of validity in mixed methods research by introducing a validation framework to guide thinking about validity in this area. To justify the use of this framework, the authors discuss traditional terminology and validity criteria for quantitative and qualitative research, as well as…
Descriptors: Qualitative Research, Methods Research, Validity, Vocabulary
Peer reviewed
Kortsch, Gabrielle; Kurtines, William M.; Montgomery, Marilyn J. – Journal of Adolescent Research, 2008
The study reported in this paper, a Multistage Longitudinal Comparative (MLC) Design Stage II evaluation conducted as a planned preliminary efficacy evaluation (psychometric evaluation of measures, short-term controlled outcome studies, etc.) of the Changing Lives Program (CLP), provided evidence for the reliability and validity of qualitative…
Descriptors: Quasiexperimental Design, Intervention, Research Methodology, Validity
Peer reviewed
Chen, Huey-Tsyh; Ross, Peter H. – Evaluation and Program Planning, 1987
A theory-driven approach to validity is proposed. The central argument is that a model or theory should be formulated in a program evaluation and the modeling process should include the identification of potential threats to validity in research. (Author/LMO)
Descriptors: Construct Validity, Program Evaluation, Research Methodology, Sample Size
Peer reviewed
Thomas, Andrew – International Journal of Leadership in Education, 2007
This paper examines issues arising from the use of self-report questionnaires in cross-cultural contexts. The research draws from the extensive literature on cross-cultural leadership in business organizational culture as well as from educational cross-cultural contexts. It examines claims, drawn from business and educational contexts, that many…
Descriptors: Research Papers (Students), Organizational Culture, Research Methodology, Construct Validity
Lather, Patti – 1986
This paper focuses on issues of data trustworthiness in praxis-oriented empirical work, research openly committed to the building of a world in which we can all flourish. The central argument is that those exploring the possibilities for a change-enhancing advocacy paradigm for doing empirical research in the human sciences must begin to be more…
Descriptors: Construct Validity, Credibility, Data, Educational Research
Peer reviewed
Gehlert, Sarah; Chang, Chih-Hung; Hartlage, Shirley – Journal of Outcome Measurement, 1997
The Rasch method, so often used in educational research, was used to analyze the construct validity of the criteria for premenstrual dysphoric disorder from "The Diagnostic and Statistical Manual of Mental Disorders." The method was used in contrast to using an external validation to establish the diagnostic utility of tests for…
Descriptors: Clinical Diagnosis, Construct Validity, Criteria, Diagnostic Tests
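For readers unfamiliar with the Rasch method named in this abstract, its core is a logistic function of person ability and item difficulty. A minimal sketch (the symbols and values are illustrative, not from the study):

```python
import math

def rasch_probability(theta, b):
    """Rasch model: probability that a person with ability `theta`
    endorses (or answers correctly) an item of difficulty `b`,
    both expressed in logits."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When ability equals item difficulty the endorsement probability is exactly 0.5, and it rises monotonically as ability exceeds difficulty; fitting these difficulties to response data is what lets Rasch analysis probe whether diagnostic criteria behave as a coherent construct.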
Peer reviewed
Yeaton, William H.; Sechrest, Lee – Evaluation Review, 1986
The central thesis of this article is that the process of eliminating validity threats depends fundamentally on no-difference findings, a fact that has not been made explicit by researchers. The implications of this neglect are explored using examples from a number of different substantive areas such as psychology, health, and medicine.…
Descriptors: Attrition (Research Studies), Construct Validity, Generalizability Theory, Hypothesis Testing