Peer reviewed
Leight, Hayley; Saunders, Cheston; Calkins, Robin; Withers, Michelle – CBE - Life Sciences Education, 2012
Collaborative testing has been shown to improve performance but not always content retention. In this study, we investigated whether collaborative testing could improve both performance and content retention in a large, introductory biology course. Students were semirandomly divided into two groups based on their performances on exam 1. Each group…
Descriptors: Individual Testing, Biology, Scoring, Group Testing
Peer reviewed
Ng, Eugenia M. W.; Lai, Yiu Chi – Journal of Information Technology Education: Innovations in Practice, 2012
This article discusses an exploratory study that examined whether a wiki-based project could foster student-centered learning. Student teachers were divided into five groups to tackle a group project which involved creating digital learning materials on wiki that could teach information technology (IT) to secondary school students. As assessment…
Descriptors: Student Teachers, Scoring Rubrics, Program Effectiveness, Information Technology
Peer reviewed
Unal, Zafer; Bodur, Yasar; Unal, Aslihan – Journal of Information Technology Education: Research, 2012
Current literature provides many examples of rubrics that are used to evaluate the quality of webquest designs. However, the reliability of these rubrics has not yet been researched. This is the first study to fully characterize and assess the reliability of a webquest evaluation rubric. The ZUNAL rubric was created to utilize the strengths of the…
Descriptors: Scoring Rubrics, Test Reliability, Test Construction, Evaluation Criteria
Peer reviewed
Nauta, Margaret M. – Journal of Career Development, 2012
A confirmatory factor analysis (CFA) tested the fit of Kelly and Lee's six-factor model of career decision problems among 188 college students. The six-factor model did not fit the data well, but a five-factor (Lack of Information, Need for Information, Trait Indecision, Disagreement with Others, and Choice Anxiety) model did provide a good fit.…
Descriptors: Factor Analysis, Self Efficacy, Career Choice, Performance Factors
Peer reviewed
Downer, Jason T.; Lopez, Michael L.; Grimm, Kevin J.; Hamagami, Aki; Pianta, Robert C.; Howes, Carollee – Early Childhood Research Quarterly, 2012
With the rising number of Latino and dual language learner (DLL) children attending pre-k and the importance of assessing the quality of their experiences in those settings, this study examined the extent to which a commonly used assessment of teacher-child interactions, the Classroom Assessment Scoring System (CLASS), demonstrated similar…
Descriptors: Classroom Techniques, School Readiness, Predictive Validity, Evaluation Methods
Peer reviewed
Viswanadhan, K. G. – International Education Studies, 2008
The National Board of Accreditation (NBA), a body constituted by the All India Council for Technical Education (AICTE), is responsible for the accreditation of technical education programmes in India. NBA evaluates the performance of engineering programmes quantitatively by assessing 70 variables grouped under a set of 8 predefined criteria, and…
Descriptors: Foreign Countries, Undergraduate Study, Engineering Education, Accreditation (Institutions)
Peer reviewed
Miller, Daniel A. – Language Sciences, 2008
The variationist paradigm in sociolinguistics is at a disadvantage when dealing with variables that are traditionally treated qualitatively, e.g., "identity". This study attempts to bring the accuracy and descriptive value of qualitative research into a quantitative setting by rendering such a variable quantitatively accessible. To this end,…
Descriptors: Qualitative Research, Sociolinguistics, Scoring, Statistical Analysis
Peer reviewed
RiCharde, R. Stephen – Assessment Update, 2008
A persistent conflict between assessment professionals and faculty members in the humanities seems to focus inevitably on resistance to the concept of interrater reliability. While humanities faculty are often willing to engage in course-embedded assessment that uses some type of scoring rubric, when the demand for agreement in scoring is…
Descriptors: Interrater Reliability, Humanities, Scoring Rubrics, College Faculty
DiRanna, Kathy; Osmundson, Ellen; Topps, Jo; Gearhart, Maryl – Principal Leadership, 2008
In this article, the authors describe an assessment-centered teaching (ACT) framework they developed to communicate the integration of assessment and instruction. The framework is a visual representation of the relationships among the fundamental elements of assessment knowledge: the characteristics of quality goals for student learning and…
Descriptors: Student Evaluation, Scoring, Professional Development, Instructional Development
Marzano, Robert J. – Marzano Research, 2009
Learn everything you need to know to implement an integrated system of assessment and grading. The author details the specific benefits of formative assessment and explains how to design and interpret three different types of formative assessments, how to track student progress, and how to assign meaningful grades. Detailed examples bring each…
Descriptors: Formative Evaluation, Academic Standards, Grading, Student Evaluation
Peer reviewed
Secolsky, Charles – Journal of Applied Research in the Community College, 2009
This article describes four measurement tools that are of potential value for institutional researchers as greater demands are being placed upon their work. The author describes scale development, select methods for setting passing scores, validating passing scores and the topic of equating--both equipercentile and linear. Not only should…
Descriptors: Institutional Research, Community Colleges, Researchers, Measurement Techniques
Peer reviewed
Boix Mansilla, Veronica; Duraisingh, Elizabeth Dawes; Wolfe, Christopher R.; Haynes, Carolyn – Journal of Higher Education, 2009
In this paper, the authors introduce the "Targeted Assessment Rubric for Interdisciplinary Writing," an empirically-tested instrument designed to assess interdisciplinary writing at the collegiate level. Interdisciplinary writing presents unique challenges to students, calling upon them to mediate the rhetorical, theoretical, and…
Descriptors: Scoring Rubrics, Interdisciplinary Approach, Educational Assessment, College Instruction
Peer reviewed
Frels, Rebecca K.; Sharma, Bipin; Onwuegbuzie, Anthony J.; Leech, Nancy L.; Stark, Marcella D. – Journal of Effective Teaching, 2011
From the perspective of doctoral students and instructors, we explain a developmental, interactive process based upon the Checklist for Qualitative Data Collection, Data Analysis, and Data Interpretation (Onwuegbuzie, 2010) for students' writing assignments regarding: (a) the application of conceptual knowledge for collecting, analyzing, and…
Descriptors: Graduate Students, Graduate Study, Check Lists, Qualitative Research
Shoemaker, Betty; Lewin, Larry – ASCD, 2011
Get an in-depth understanding of how to create fun, engaging, and challenging performance assessments that require students to elaborate on content and demonstrate mastery of skills. This update of an ASCD (Association for Supervision and Curriculum Development) classic includes new scoring methods, reading assessments, and insights on navigating…
Descriptors: Writing Assignments, Performance Based Assessment, Scoring, Student Evaluation
Graham, Steve; Harris, Karen; Hebert, Michael – Carnegie Corporation of New York, 2011
During this decade there have been numerous efforts to identify instructional practices that improve students' writing. These include "Reading Next" (Biancarosa and Snow, 2004), which provided a set of instructional recommendations for improving writing, and "Writing Next" (Graham and Perin, 2007) and "Writing to Read" (Graham and Hebert, 2010),…
Descriptors: Writing Evaluation, Formative Evaluation, Writing Improvement, Writing Instruction