Showing 1 to 15 of 24 results
Peer reviewed
Agley, Jon; Tidd, David; Jun, Mikyoung; Eldridge, Lori; Xiao, Yunyu; Sussman, Steve; Jayawardene, Wasantha; Agley, Daniel; Gassman, Ruth; Dickinson, Stephanie L. – Educational and Psychological Measurement, 2021
Prospective longitudinal data collection is an important way for researchers and evaluators to assess change. In school-based settings, for low-risk and/or likely-beneficial interventions or surveys, data quality and ethical standards are both arguably stronger when using a waiver of parental consent--but doing so often requires the use of…
Descriptors: Data Analysis, Longitudinal Studies, Data Collection, Intervention
Peer reviewed
Scholes, Vanessa – Educational Technology Research and Development, 2016
There are good reasons for higher education institutions to use learning analytics to risk-screen students. Institutions can use learning analytics to better predict which students are at greater risk of dropping out or failing, and use the statistics to treat "risky" students differently. This paper analyses this practice using…
Descriptors: Data Collection, Data Analysis, Educational Research, At Risk Students
Peer reviewed
Lynch, Collin F. – Theory and Research in Education, 2017
Big Data can radically transform education by enabling personalized learning, deep student modeling, and true longitudinal studies that compare changes across classrooms, regions, and years. With these promises, however, come risks to individual privacy and educational validity, along with deep policy and ethical issues. Education is largely a…
Descriptors: Data Analysis, Data Collection, Privacy, Evaluation Methods
ACPA College Student Educators International, 2011
The Assessment Skills and Knowledge (ASK) standards seek to articulate the areas of content knowledge, skill and dispositions that student affairs professionals need in order to perform as practitioner-scholars to assess the degree to which students are mastering the learning and development outcomes the professionals intend. Consistent with…
Descriptors: Student Personnel Workers, Standards, Evaluation Methods, Data Collection
Peer reviewed
Noland, Carey M. – Journal of Research Practice, 2012
When conducting research on sensitive topics, it is challenging to use new methods of data collection given the apprehensions of Institutional Review Boards (IRBs). This is especially worrying because sensitive topics of research often require novel approaches. In this article a brief personal history of navigating the IRB process for conducting…
Descriptors: Communication Research, Sexuality, Social Science Research, Evaluation Methods
Peer reviewed
Owen, John M. – Canadian Journal of Program Evaluation, 2010
This article describes an evaluation that key program stakeholders judged to be unsuccessful because it did not support program advocates who had much to gain from positive evaluation findings. We argue that, although the knowledge needs of stakeholders must be taken into account,…
Descriptors: Evaluators, Program Evaluation, Integrity, Program Effectiveness
Peer reviewed
Lindahl, Mary W.; Unger, Michael L. – College Teaching, 2010
Student teaching evaluations (STEs) are increasingly used in the process of determining promotion and tenure. While most research has focused on career consequences, there has been little inquiry into the remarks students write at the end of the evaluation form. The structure of the collection process, involving emotional arousal and anonymity in…
Descriptors: Student Teachers, Student Evaluation of Teacher Performance, Teacher Evaluation, Student Teacher Attitudes
Peer reviewed
diSessa, Andrea – Cognition and Instruction, 2007
Clinical interviewing is viewed here as a social interactional pattern in order to examine the nature and limits of the technique as a means of scientific data acquisition. I defend the technique against criticisms that it is ecologically suspect and prone to systematic biases, mainly due to influence of the interviewer on the interviewee or to…
Descriptors: Interaction, Ethics, Interviews, Data Collection
Peer reviewed
Cooksy, Leslie J. – American Journal of Evaluation, 2007
All evaluators face the challenge of striving to adhere to the highest possible standards of ethical conduct. Translating the AEA's Guiding Principles and the Joint Committee's Program Evaluation Standards into everyday practice, however, can be a complex, uncertain, and frustrating endeavor. Moreover, acting in an ethical fashion can require…
Descriptors: Program Evaluation, Evaluators, Ethics, Evaluation Methods
Peer reviewed
Boruch, Robert – New Directions for Evaluation, 2007
Thomas Jefferson recognized the value of reason and scientific experimentation in the eighteenth century. This chapter extends the idea in contemporary ways to standards that may be used to judge the ethical propriety of randomized trials and the dependability of evidence on effects of social interventions.
Descriptors: Ethics, Standards, Evaluation Methods, Research Methodology
Peer reviewed
Mark, Melvin M.; Eyssell, Kristen M.; Campbell, Bernadette – New Directions for Evaluation, 1999
Focuses on ethical issues that can arise in the collection and analysis of data in evaluations. Addresses four issues: (1) the application of cost-benefit thinking to judgments about research ethics, (2) the quality of research design, (3) the need to explore the data, and (4) the censoring of data. (Author/SLD)
Descriptors: Cost Effectiveness, Data Analysis, Data Collection, Ethics
Peer reviewed
Sieloff, Debra A. – Performance Improvement, 1999
Explains the Bridge Evaluation Model that illustrates the process and factors that determine the outcome of an evaluation. Highlights include relationships between external factors including goals, interpersonal relationships, ethics, and politics; data collection and collation; and a sidebar that offers a case study involving the evaluation of…
Descriptors: Case Studies, Data Collection, Ethics, Evaluation Criteria
Peer reviewed
LaFollette, Marcel C. – Society, 1994
How social science theory and insight could be applied to understanding and resolving the issues surrounding misconduct in scientific research is discussed. Understanding why scientists break the norms of acceptable conduct may come when their survey responses are interpreted in the contexts of sociology and psychology. (SLD)
Descriptors: Data Collection, Ethics, Evaluation Methods, Fraud
Peer reviewed
Grover, Paul L.; Uguroglu, Margaret E. – Evaluation and the Health Professions, 1984
Ethical issues relating to naturalistic evaluation are addressed, focusing on the role of the evaluator, problems of privacy and data gathering techniques, and issues relating to the use/abuse of findings. Benefits and costs of the naturalistic approach to program evaluation are also identified. (EGS)
Descriptors: Classroom Observation Techniques, Data Collection, Ethics, Evaluation Methods
Rothstein, Richard; Jacobsen, Rebecca; Wilder, Tamara – Campaign for Educational Equity, Teachers College, Columbia University, 2008
This presentation summarizes two products of the ongoing work the authors are doing for the Campaign for Educational Equity. Part 1 of this presentation is a summary of "A Report Card on Comprehensive Equity: Racial Gaps in the Nation's Youth Outcomes." This report estimates the black-white achievement gaps in each of these aspects of…
Descriptors: Achievement Gap, Evaluation Methods, Youth Programs, Equal Education