Showing 1 to 15 of 92 results
Peer reviewed
Daria Gerasimova – Journal of Educational Measurement, 2024
I propose two practical advances to the argument-based approach to validity: developing a living document and incorporating preregistration. First, I present a potential structure for the living document that includes an up-to-date summary of the validity argument. As the validation process may span across multiple studies, the living document…
Descriptors: Validity, Documentation, Methods, Research Reports
Peer reviewed
Gregory Chernov – Evaluation Review, 2025
Most existing solutions to the current replication crisis in science address only the factors stemming from specific poor research practices. We introduce a novel mechanism that leverages the experts' predictive abilities to analyze the root causes of replication failures. It is backed by the principle that the most accurate predictor is the most…
Descriptors: Replication (Evaluation), Prediction, Scientific Research, Failure
Peer reviewed
Benjawan Plengkham; Sonthaya Rattanasak; Patsawut Sukserm – Journal of Education and Learning, 2025
This academic article provides the essential steps for designing an effective English questionnaire in social science research, with a focus on ensuring clarity, cultural sensitivity and ethical integrity. Developed from key insights from related studies, it outlines potential practice in questionnaire design, item development and the importance…
Descriptors: Guidelines, Test Construction, Questionnaires, Surveys
Peer reviewed
Leighton, Jacqueline P. – Applied Measurement in Education, 2021
The objective of this paper is to comment on the think-aloud methods presented in the three papers included in this special issue. The commentary offered stems from the author's own psychological investigations of unobservable information processes and the conditions under which the most defensible claims can be advanced. The structure of this…
Descriptors: Protocol Analysis, Data Collection, Test Construction, Test Validity
Peer reviewed
Beth Chance; Andrew Kerr; Jett Palmer – Journal of Statistics and Data Science Education, 2024
While many instructors are aware of the "Literary Digest" 1936 poll as an example of biased sampling methods, this article details potential further explorations for the "Digest's" 1924-1936 quadrennial U.S. presidential election polls. Potential activities range from lessons in data acquisition, cleaning, and validation, to…
Descriptors: Publications, Public Opinion, Surveys, Bias
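The Chance, Kerr, and Palmer entry above builds activities around the "Literary Digest" polls as a case of biased sampling. As a classroom-style illustration of the core point (a biased sampling frame is not rescued by a large sample), here is a minimal Python sketch; the population sizes and support rates are invented, not taken from the article:

```python
import random

random.seed(0)

# Invented numbers: 60% of the full population supports candidate A,
# but the sampling frame (think 1936 telephone/automobile owners)
# over-represents the group favoring candidate B.
population = [1] * 60_000 + [0] * 40_000   # broader public: 60% support A
frame = [1] * 20_000 + [0] * 30_000        # listed households: 40% support A

# A huge sample drawn from the biased frame stays biased...
biased_sample = random.sample(frame, 10_000)
biased_estimate = sum(biased_sample) / len(biased_sample)

# ...while a far smaller random sample from the full population does not.
random_sample = random.sample(population, 1_000)
unbiased_estimate = sum(random_sample) / len(random_sample)
```

Tenfold more data from the wrong frame still lands near 40% support, while the modest representative sample lands near the true 60% — the "Digest" lesson in miniature.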
Peer reviewed
Bostic, Jonathan David – Applied Measurement in Education, 2021
Think alouds are valuable tools for academicians, test developers, and practitioners as they provide a unique window into a respondent's thinking during an assessment. The purpose of this special issue is to highlight novel ways to use think alouds as a means to gather evidence about respondents' thinking. An intended outcome from this special…
Descriptors: Protocol Analysis, Cognitive Processes, Data Collection, STEM Education
Peer reviewed
Ting Zhang; Paul Bailey; Yuqi Liao; Emmanuel Sikali – Large-scale Assessments in Education, 2024
The EdSurvey package helps users download, explore variables in, extract data from, and run analyses on large-scale assessment data. The analysis functions in EdSurvey account for the use of plausible values for test scores, survey sampling weights, and their associated variance estimator. We describe the capabilities of the package in the context…
Descriptors: National Competency Tests, Information Retrieval, Data Collection, Test Validity
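EdSurvey itself is an R package, so the following is only a generic Python sketch of the combining step the abstract alludes to: when test scores come as plausible values, per-value estimates are pooled with Rubin's rules (point estimate = average across plausible values; total variance = average sampling variance plus an inflation of the between-value variance). The numeric values below are invented:

```python
from statistics import mean

def combine_plausible_values(estimates, sampling_variances):
    """Combine M per-plausible-value estimates using Rubin's rules.

    estimates: the statistic computed once per plausible value (e.g., a mean score)
    sampling_variances: the sampling variance of that statistic for each value
    Returns (combined_estimate, total_variance).
    """
    m = len(estimates)
    point = mean(estimates)
    within = mean(sampling_variances)  # average sampling variance
    between = sum((e - point) ** 2 for e in estimates) / (m - 1)  # imputation variance
    total = within + (1 + 1 / m) * between
    return point, total

# Toy inputs: five plausible-value means and their sampling variances.
est, var = combine_plausible_values(
    [502.1, 498.7, 500.4, 501.0, 499.8],
    [4.0, 4.2, 3.9, 4.1, 4.0],
)
```

The total variance exceeds the average sampling variance alone, which is the point: treating one plausible value as a true score understates uncertainty.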
Peer reviewed
Howard, Jeffrey N. – Practical Assessment, Research & Evaluation, 2022
The Student Evaluation of Teaching (SET) instrument provides insight for instructors and administrators alike, often touting high response-rates to endorse their validity and reliability. However, response-rate alone omits consideration for "adequate quantity of 'observational sampling opportunity' (OSO) data points" (e.g., high student…
Descriptors: Student Evaluation of Teacher Performance, Validity, Reliability, Longitudinal Studies
Pentimonti, J.; Petscher, Y.; Stanley, C. – National Center on Improving Literacy, 2019
Sample representativeness is an important piece to consider when evaluating the quality of a screening assessment. If you are trying to determine whether or not the screening tool accurately measures children's skills, you want to ensure that the sample that is used to validate the tool is representative of your population of interest.
Descriptors: Sampling, Screening Tests, Measurement, Test Validity
Peer reviewed
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
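The Lottridge, Burkhardt, and Boyer module describes automated scoring as computer algorithms that mimic human scoring of open-ended items. A deliberately crude Python sketch of that idea — a least-squares model fit on invented responses and two surface features; real engines use far richer features, and nothing here reflects the module's actual methods:

```python
import numpy as np

# Invented training data: human scores (0-3) for short science responses.
responses = [
    ("photosynthesis uses light to make sugar", 2),
    ("plants make food", 1),
    ("light energy converts water and carbon dioxide into glucose and oxygen", 3),
    ("i dont know", 0),
    ("plants use sunlight water and carbon dioxide to produce glucose", 3),
    ("sunlight helps plants", 1),
]
keywords = {"light", "sunlight", "glucose", "water", "carbon", "dioxide", "oxygen"}

def features(text):
    # Intercept, response length, and topic-keyword count.
    words = text.split()
    return [1.0, len(words), sum(w in keywords for w in words)]

X = np.array([features(text) for text, _ in responses])
y = np.array([score for _, score in responses], dtype=float)

# Fit by least squares: the "algorithm" trained to mimic the human raters.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def auto_score(text):
    return float(np.array(features(text)) @ coef)
```

Even this toy model ranks a content-rich response above an empty one, which is the mimicry the abstract describes — and also hints at why validation against human scores matters.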
Flanagan, Agnes; Cormier, Damien C. – Communique, 2019
One of the areas subsumed under the data-based decision making and accountability practice identified in the National Association of School Psychologists' (NASP) "Model for Integrated School Psychological Services" is to collect information on psychological and educational variables to make decisions at a number of levels of service…
Descriptors: Test Bias, School Psychologists, Measurement, Data Collection
Peer reviewed
Ross, Karen; Call-Cummings, Meagan – International Journal of Social Research Methodology, 2019
In this article, we interrogate the concept of methodological 'failures' as they arise during fieldwork, in the process of collecting empirical data. We highlight how the techniques of validity horizon matrices and power analysis can be used as methodological tools to illustrate moments in the fieldwork process where these 'failures' occur and to…
Descriptors: Failure, Data Collection, Research Methodology, Validity
Peer reviewed
Marc T. Braverman – Journal of Human Sciences & Extension, 2019
This article examines the concept of credible evidence in Extension evaluations with specific attention to the measures and measurement strategies used to collect and create data. Credibility depends on multiple factors, including data quality and methodological rigor, characteristics of the stakeholder audience, stakeholder beliefs about the…
Descriptors: Extension Education, Program Evaluation, Evaluation Methods, Planning
Peer reviewed
Knekta, Eva; Runyon, Christopher; Eddy, Sarah – CBE - Life Sciences Education, 2019
Across all sciences, the quality of measurements is important. Survey measurements are only appropriate for use when researchers have validity evidence within their particular context. Yet, this step is frequently skipped or is not reported in educational research. This article briefly reviews the aspects of validity that researchers should…
Descriptors: Factor Analysis, Surveys, Data Collection, Research Methodology
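Knekta, Runyon, and Eddy argue that survey measurements need validity evidence within their context of use. One routine companion statistic reported alongside such evidence is internal-consistency reliability; here is a minimal Python sketch of Cronbach's alpha on invented Likert-scale data (this illustrates a standard formula, not the article's own analyses):

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(item_scores[0])          # number of items
    columns = list(zip(*item_scores))  # transpose to per-item columns
    item_vars = sum(statistics.variance(col) for col in columns)
    totals = [sum(row) for row in item_scores]
    return k / (k - 1) * (1 - item_vars / statistics.variance(totals))

# Toy responses: four respondents, three 5-point Likert items.
data = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
]
alpha = cronbach_alpha(data)
```

High alpha alone is not validity evidence — which is exactly the article's caution about skipping the validation step — but it is a cheap first check on whether items hang together.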
Peer reviewed
Schalock, Robert L.; Gomez, Laura E.; Verdugo, Miguel A.; Claes, Claudia – Intellectual and Developmental Disabilities, 2017
The purpose of this article is to move the field of intellectual and closely related developmental disabilities (IDD) towards a better understanding of evidence and evidence-based practices. To that end, we discuss (a) different perspectives on and levels of evidence, (b) commonly used evidence-gathering strategies, (c) standards to evaluate…
Descriptors: Evidence Based Practice, Intellectual Disability, Developmental Disabilities, Data Collection