Showing all 6 results
Peer reviewed
Download full text (PDF on ERIC)
Teck Kiang Tan – Practical Assessment, Research & Evaluation, 2024
The procedures for testing factorial invariance to validate a construct are well developed, ensuring that a construct can be used reliably across groups for comparison and analysis, yet they remain largely restricted to the frequentist approach. This motivates an update to incorporate the growing Bayesian approach for carrying out the Bayesian…
Descriptors: Bayesian Statistics, Factor Analysis, Programming Languages, Reliability
Peer reviewed
Direct link
Demarest, Leila; Langer, Arnim – Sociological Methods & Research, 2022
While conflict event data sets are increasingly used in contemporary conflict research, important concerns persist regarding the quality of the collected data. Such concerns are not necessarily new. Yet, because the methodological debate and evidence on potential errors remain scattered across different subdisciplines of the social sciences, there is…
Descriptors: Guidelines, Research Methodology, Conflict, Social Science Research
Peer reviewed
Direct link
Willse, John T. – Measurement and Evaluation in Counseling and Development, 2017
This article provides a brief introduction to the Rasch model. Motivation for using Rasch analyses is provided. Important Rasch model concepts and key aspects of result interpretation are introduced, with major points reinforced using a simulation demonstration. Concrete guidelines are provided regarding sample size and the evaluation of items.
Descriptors: Item Response Theory, Test Results, Test Interpretation, Simulation
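For readers unfamiliar with the model the Willse (2017) abstract introduces, the dichotomous Rasch model gives the probability of a correct response as P(X=1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)), where theta is person ability and b is item difficulty. A minimal sketch, including the kind of simulation the article uses for demonstration (function names are illustrative, not from the article):

```python
import math
import random

def rasch_probability(ability, difficulty):
    """Probability of a correct (X = 1) response under the dichotomous
    Rasch model: exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def simulate_responses(abilities, difficulties, seed=0):
    """Simulate a persons-by-items 0/1 response matrix by comparing a
    uniform draw against each model probability."""
    rng = random.Random(seed)
    return [
        [1 if rng.random() < rasch_probability(theta, b) else 0
         for b in difficulties]
        for theta in abilities
    ]
```

When ability equals difficulty the response probability is exactly 0.5, which is why item difficulties and person abilities can be placed on a single logit scale for interpretation.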
Peer reviewed
Direct link
Longford, Nicholas Tibor – Journal of Educational and Behavioral Statistics, 2016
We address the problem of selecting the best of a set of units based on a criterion variable, when its value is recorded for every unit subject to estimation, measurement, or another source of error. The solution is constructed in a decision-theoretical framework, incorporating the consequences (ramifications) of the various kinds of error that…
Descriptors: Decision Making, Classification, Guidelines, Undergraduate Students
Peer reviewed
Direct link
Yates, Brian T. – New Directions for Evaluation, 2012
The value of a program can be understood as referring not only to outcomes, but also to how those outcomes compare to the types and amounts of resources expended to produce the outcomes. Major potential mistakes and biases in assessing the worth of resources consumed, as well as the value of outcomes produced, are explored. Most of these occur…
Descriptors: Program Evaluation, Cost Effectiveness, Evaluation Criteria, Evaluation Problems
Alonzo, Julie; Liu, Kimy; Tindal, Gerald – Behavioral Research and Teaching, 2007
In this technical report, the authors describe the development and piloting of reading comprehension measures as part of a comprehensive progress monitoring literacy assessment system developed in 2006 for use with students in Kindergarten through fifth grade. They begin with a brief overview of the two conceptual frameworks underlying the…
Descriptors: Reading Comprehension, Emergent Literacy, Test Construction, Literacy Education