Showing all 14 results
Peer reviewed
Benjawan Plengkham; Sonthaya Rattanasak; Patsawut Sukserm – Journal of Education and Learning, 2025
This academic article outlines the essential steps for designing an effective English questionnaire in social science research, with a focus on ensuring clarity, cultural sensitivity, and ethical integrity. Drawing on key insights from related studies, it describes sound practice in questionnaire design, item development, and the importance…
Descriptors: Guidelines, Test Construction, Questionnaires, Surveys
Peer reviewed
Swank, Jacqueline M.; Mullen, Patrick R. – Measurement and Evaluation in Counseling and Development, 2017
The article serves as a guide for researchers developing evidence of validity, specifically construct validity, using bivariate correlations. The authors outline the steps for calculating and interpreting bivariate correlations, provide an illustrative example, and discuss the implications.
Descriptors: Correlation, Construct Validity, Guidelines, Data Interpretation
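As a brief illustration of the kind of calculation Swank and Mullen describe (a sketch, not their code; the data and variable names are hypothetical), a bivariate Pearson correlation between a new scale and an established criterion measure can be computed as follows:

```python
# Sketch: Pearson bivariate correlation as construct-validity evidence.
# Scores and variable names are hypothetical, not from Swank & Mullen (2017).
from scipy.stats import pearsonr

# Hypothetical scores for 10 participants on a new counseling self-efficacy
# scale and on an established measure of the same construct.
new_scale = [12, 15, 14, 10, 18, 16, 11, 17, 13, 15]
criterion = [30, 36, 33, 25, 41, 38, 27, 40, 31, 35]

r, p = pearsonr(new_scale, criterion)  # r: strength/direction; p: significance
print(f"r = {r:.2f}, p = {p:.3f}")     # a strong positive r supports construct validity
```

Interpreting the magnitude of r against conventional benchmarks is the step the article treats in detail.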
Peer reviewed
Tannenbaum, Richard J.; Cho, Yeonsuk – Language Assessment Quarterly, 2014
In this article, we consolidate and present in one place what is known about quality indicators for setting standards, so that stakeholders can recognize the signs of standard-setting quality. We use the context of setting standards to associate English language test scores with language proficiency descriptions such as those presented…
Descriptors: Standard Setting, Language Tests, Scores, English (Second Language)
Coalition for Evidence-Based Policy, 2014
This guide is addressed to policy officials, program providers, and researchers who are seeking to: (1) identify and implement social programs backed by valid evidence of effectiveness; or (2) sponsor or conduct an evaluation to determine whether a program is effective. The guide provides a brief overview of which studies can produce valid…
Descriptors: Program Effectiveness, Program Design, Evidence, Social Work
Peer reviewed
Beglar, David – Language Testing, 2010
The primary purpose of this study was to provide preliminary validity evidence for a 140-item form of the Vocabulary Size Test, which is designed to measure written receptive knowledge of the first 14,000 words of English. Nineteen native speakers of English and 178 native speakers of Japanese participated in the study. Analyses based on the Rasch…
Descriptors: Test Items, Native Speakers, Test Validity, Vocabulary
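For context, the Rasch analyses referenced here rest on the dichotomous Rasch model, in which the probability of answering an item correctly depends only on the difference between person ability and item difficulty. A minimal sketch follows; the logit values are illustrative, not the study's data:

```python
# Sketch of the dichotomous Rasch model; logit values are made up
# for illustration and are not taken from Beglar (2010).
import math

def rasch_p(ability: float, difficulty: float) -> float:
    """Probability of a correct response, with ability and difficulty in logits."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

print(round(rasch_p(1.0, 0.0), 2))   # able person, average item   -> 0.73
print(round(rasch_p(-0.5, 1.0), 2))  # less able person, hard item -> 0.18
```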
Peer reviewed
Beers, Pieter J.; Boshuizen, Henny P. A.; Kirschner, Paul A.; Gijselaers, Wim H. – Learning and Instruction, 2007
CSCL research has given rise to a plethora of analysis methods, each with its own analysis goals, units of analysis, and target data types (chat, threaded discussions, etc.). This article describes some challenges of CSCL analysis. The development of an analysis method for negotiation processes in multidisciplinary teams serves as an…
Descriptors: Content Analysis, Computer Mediated Communication, Research Methodology, Data Analysis
Peer reviewed
Mondi, Makingu; Woods, Peter; Rafi, Ahmad – Educational Technology & Society, 2008
This study investigates "how and why" students' "Uses and Gratification Expectancy" (UGE) for e-learning resources influences their "Perceived e-Learning Experience." A "Uses and Gratification Expectancy Model" (UGEM) framework is proposed to predict students' "Perceived e-Learning Experience," and…
Descriptors: Research Design, Learning Strategies, Learning Experience, Educational Resources
Peer reviewed
McNamara, James F. – International Journal of Educational Reform, 2004
This article is the third contribution to a research methods series dedicated to getting good results from survey research. In this series, "good results" is a shorthand term for surveys that yield accurate and meaningful information that decision makers can use with confidence when conducting program evaluation and policy assessment…
Descriptors: Surveys, Questionnaires, Guidelines, Material Development
Peer reviewed
Mostert, Mark P. – Learning Disabilities Research and Practice, 1996
This article proposes a set of criteria for reporting meta-analyses of topics in learning disabilities. Application of the criteria to examples of published meta-analyses reveals wide variation in the amount of reported data, which could influence the summative results of meta-analyses and subsequent judgments of face validity. (Author/DB)
Descriptors: Guidelines, Learning Disabilities, Literature Reviews, Meta Analysis
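To make concrete what a "summative result" of a meta-analysis is, here is a sketch of a fixed-effect, inverse-variance pooled effect size; the effect sizes and variances are hypothetical, not drawn from any reviewed study:

```python
# Sketch: fixed-effect meta-analytic summary via inverse-variance weighting.
# All numbers are hypothetical illustrations.
effects = [0.45, 0.30, 0.62, 0.10]    # per-study standardized mean differences
variances = [0.02, 0.05, 0.03, 0.04]  # per-study sampling variances

weights = [1.0 / v for v in variances]                        # inverse-variance weights
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se = (1.0 / sum(weights)) ** 0.5                              # SE of pooled effect
print(f"pooled d = {pooled:.2f} (SE = {se:.2f})")             # -> pooled d = 0.40 (SE = 0.09)
```

Under-reporting of per-study data (especially the variances) makes such a summary impossible to verify, which is the concern the proposed criteria address.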
Owston, Ronald D.; Dudley-Marling, Curt – 1986
The overall poor quality of educational software on the market suggests that educators must continue efforts to evaluate available packages and to disseminate their findings. In this paper, weaknesses in published evaluation procedures are identified, and an alternative model, the York Educational Software Evaluation Scale (YESES), is described.…
Descriptors: Computer Software, Correlation, Elementary Secondary Education, Evaluation Criteria
Peer reviewed
Mayer, Hanna – Canadian Journal of Educational Communication, 1986
Describes three basic approaches to needs assessment (problem identification, problem analysis, and problem verification) and assesses them in terms of their potential utility to users in business and educational settings, as measured by their estimated time requirement, accuracy, and cost. Needs assessment data collection resources and techniques…
Descriptors: Comparative Analysis, Costs, Data Collection, Decision Making
Peer reviewed
Ross, Steven M.; Morrison, Gary R.; Lowther, Deborah L. – Journal of Computing in Higher Education, 2005
Experimental methods have been used extensively for many years to conduct research in education and psychology. However, applications of experiments to investigate technology and other instructional innovations in higher education settings have been relatively limited. The present paper examines ways in which experiments can be used productively…
Descriptors: Higher Education, Experiments, Validity, Research Design
Peer reviewed
Baggaley, Jon – Canadian Journal of Educational Communication, 1987
Discusses reliability and validity of continual response measurement (CRM), a computer-based measurement technique, and its use in social science research. Highlights include the importance of criterion-referencing the data, guidelines for designing studies using CRM, examples typifying their deductive and inductive functions, and a discussion of…
Descriptors: Computer Oriented Programs, Deduction, Formative Evaluation, Graphs
Peer reviewed
McNamara, James F. – International Journal of Educational Reform, 2003
This article is the first of a research methods series dedicated to getting good results from survey research. In this series, "good results" is a shorthand term for surveys that yield accurate and meaningful information that decision makers can use with confidence to identify current practices that merit continuation and to create or…
Descriptors: Research Design, Research Methodology, Guidelines, Educational Research