Showing all 13 results
Peer reviewed
Teck Kiang Tan – Practical Assessment, Research & Evaluation, 2024
The procedures for carrying out factorial invariance to validate a construct are well developed, ensuring that a construct can be used reliably across groups for comparison and analysis, yet they remain largely restricted to the frequentist approach. This motivates an update to incorporate the growing Bayesian approach for carrying out the Bayesian…
Descriptors: Bayesian Statistics, Factor Analysis, Programming Languages, Reliability
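The factorial invariance that the Tan (2024) abstract refers to is usually tested as a nested sequence of multi-group factor models. The notation below is a generic sketch of that hierarchy, not material taken from the article itself, and the Bayesian remark in the comments is a common treatment rather than a summary of the paper.

```latex
% Multi-group confirmatory factor model for group g
% x_{ig}: observed item vector for person i, \xi_{ig}: latent factor(s)
\[
x_{ig} = \tau_g + \Lambda_g \xi_{ig} + \varepsilon_{ig}
\]
% Configural invariance: same pattern of loadings across groups.
% Metric (weak) invariance:  \Lambda_1 = \Lambda_2 = \dots = \Lambda_G
% Scalar (strong) invariance: additionally \tau_1 = \tau_2 = \dots = \tau_G
% A Bayesian treatment typically places priors on \Lambda_g and \tau_g and can
% relax exact equality to "approximate invariance" via small-variance priors.
```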
Peer reviewed
Rutten, Roel – Sociological Methods & Research, 2022
Applying qualitative comparative analysis (QCA) to large Ns relaxes the requirement for researchers' case-based knowledge. This is problematic because causality in QCA is inferred from a dialogue between empirical, theoretical, and case-based knowledge. The lack of case-based knowledge may be remedied by various robustness tests. However, being a case-based method,…
Descriptors: Comparative Analysis, Correlation, Case Studies, Attribution Theory
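For readers unfamiliar with QCA's set-theoretic machinery, the measures that most robustness tests revolve around are consistency and coverage. The snippet below is a minimal illustration with invented fuzzy-set membership scores; it is not code from the Rutten (2022) article.

```python
# Standard set-theoretic measures for "condition X is sufficient for outcome Y"
# (consistency and coverage as used in fuzzy-set QCA). Values are made up.
import numpy as np

x = np.array([0.9, 0.7, 0.6, 0.2, 0.8])  # fuzzy membership in condition X
y = np.array([0.8, 0.9, 0.4, 0.3, 0.9])  # fuzzy membership in outcome Y

consistency = np.minimum(x, y).sum() / x.sum()  # degree to which X is a subset of Y
coverage = np.minimum(x, y).sum() / y.sum()     # share of Y accounted for by X

print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```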
Peer reviewed
Behizadeh, Nadia – Educational Researcher, 2014
The dangers of a single story in current U.S. large-scale writing assessment are that assessment practice does not align with theory and that this practice has negative effects on instruction and students. In this article, I analyze the connections, or lack thereof, among writing theory, writing assessment, and writing instruction, critique the…
Descriptors: Writing Instruction, Writing Evaluation, Kindergarten, Elementary Secondary Education
Peer reviewed
Coe, Robert – Research Papers in Education, 2010
Much of the argument about comparability of examination standards is at cross-purposes; contradictory positions are in fact often both defensible, but they are using the same words to mean different things. To clarify this, two broad conceptualisations of standards can be identified. One sees the standard in the observed phenomena of performance…
Descriptors: Foreign Countries, Tests, Evaluation Methods, Standards
Peer reviewed
Newton, Paul E. – Research Papers in Education, 2010
Robert Coe has claimed that three broad conceptions of comparability can be identified from the literature: performance, statistical and conventional. Each of these he rejected, in favour of a single, integrated conception which relies upon the notion of a "linking construct" and which he termed "construct comparability"…
Descriptors: Psychometrics, Measurement Techniques, Foreign Countries, Tests
Peer reviewed
Sherman, Jeffrey W.; Gawronski, Bertram; Gonsalkorale, Karen; Hugenberg, Kurt; Allen, Thomas J.; Groom, Carla J. – Psychological Review, 2008
The distinction between automatic processes and controlled processes is a central organizational theme across areas of psychology. However, this dichotomy conceals important differences among qualitatively different processes that independently contribute to ongoing behavior. The Quadruple process model is a multinomial model that provides…
Descriptors: Response Style (Tests), Psychology, Responses, Models
Schulz, Wolfram – 2003
One of the most salient requirements for international educational research is the use of comparable measures. For the comparison of student performance across countries the use of item response theory (IRT) scaling techniques facilitates the collection of cross-nationally comparable measures. But there is also a need for valid and comparable…
Descriptors: Comparative Analysis, Construct Validity, Cross Cultural Studies, Educational Research
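As a concrete reference point for the IRT scaling mentioned in the Schulz (2003) abstract, the one-parameter (Rasch) model places persons and items on a common logit scale, which is what makes cross-national score comparisons possible. The sketch below uses invented values and is not drawn from the report.

```python
# Minimal sketch of the one-parameter (Rasch) IRT model: the probability of a
# correct response depends only on the difference between person ability theta
# and item difficulty b, so responses from different countries can be mapped
# onto one common scale. Item difficulties here are illustrative.
import numpy as np

def rasch_probability(theta: float, b: np.ndarray) -> np.ndarray:
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

item_difficulties = np.array([-1.0, 0.0, 1.5])   # easy, medium, hard items
print(rasch_probability(theta=0.5, b=item_difficulties))
```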
Peer reviewed
Sandals, Lauran H. – Canadian Journal of Educational Communication, 1992
Presents an overview of the applications of microcomputer-based assessment and diagnosis for both educational and psychological placement and interventions. Advantages of computer-based assessment (CBA) over paper-based testing practices are described, the history of computer testing is reviewed, and the construct validity of computer-based tests…
Descriptors: Comparative Analysis, Computer Assisted Testing, Construct Validity, Educational Testing
Peer reviewed
Kim, Young Hwan – International Journal of Educational Technology, 1999
Describes formative research methodology and how it can be used for procedural instructional design theory. Compares formative research with formative evaluation for instructional products and discusses objectivity, trustworthiness, construct validity, and the need for greater use of formative research methodology. (Contains 32 references.)…
Descriptors: Comparative Analysis, Construct Validity, Credibility, Formative Evaluation
Peer reviewed
Thomas, L. Todd; Levine, Timothy R. – Human Communication Research, 1994
Examines the relationship between verbal recall and listening in 73 participants, using measures of listening behavior as a criterion. Considers three possible models: (1) isomorphic, (2) confounding, and (3) recall ability as antecedent to listening. Finds that a model stipulating verbal recall ability as antecedent to listening provides the best…
Descriptors: Communication Research, Comparative Analysis, Construct Validity, Higher Education
Peer reviewed
Vartiainen, Pirkko – Higher Education in Europe, 2005
This article analyses institutional evaluations of higher education in England and Finland through the concept of legitimacy. The focus of the article is on the institutional tendencies of legitimacy. The author's hypothesis is that evaluation is legitimate when the evaluation process is of good quality and is accepted both morally and in practice…
Descriptors: Institutional Evaluation, Higher Education, Foreign Countries, Comparative Analysis
Peer reviewed
Cameron, Brian D. – portal: Libraries and the Academy, 2005
Librarians rely on the Institute for Scientific Information's journal impact factor as a tool for selecting periodicals, primarily in scientific disciplines. A current trend is to use this data as a means for evaluating the performance of departments, institutions, and even researchers in academic institutions--a process that is now being tied to…
Descriptors: Trend Analysis, Bibliometrics, Item Analysis, Construct Validity
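The journal impact factor that Cameron (2005) discusses is defined by a simple two-year ratio. The figures in the sketch below are invented for illustration and are not taken from the article.

```python
# Two-year journal impact factor for year Y:
#   citations received in year Y to items published in Y-1 and Y-2,
#   divided by the number of citable items published in Y-1 and Y-2.
# The counts below are hypothetical.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 420 citations in 2004 to articles from 2002-2003, 150 citable items
print(impact_factor(420, 150))  # -> 2.8
```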
Peer reviewed
Glover, Anne; Black-Gutman, Dasia – Australian Journal of Early Childhood, 1996
Addresses the multidimensional nature of cross-cultural research along with challenges of cross-cultural, comparative studies. Focuses on validity of constructs, appropriateness of measures, and suitability of methodologies. Discusses three studies accommodating the cross-cultural context and a study conducted in indigenous Australian communities…
Descriptors: Childhood Attitudes, Comparative Analysis, Construct Validity, Cross Cultural Studies