Publication Date
| Date Range | Results |
| --- | --- |
| In 2026 | 2 |
| Since 2025 | 188 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2889 |
| Since 2007 (last 20 years) | 6174 |
Audience
| Audience | Results |
| --- | --- |
| Teachers | 480 |
| Practitioners | 358 |
| Researchers | 152 |
| Administrators | 122 |
| Policymakers | 51 |
| Students | 44 |
| Parents | 32 |
| Counselors | 25 |
| Community | 15 |
| Media Staff | 5 |
| Support Staff | 3 |
Location
| Location | Results |
| --- | --- |
| Australia | 183 |
| Turkey | 157 |
| California | 133 |
| Canada | 124 |
| New York | 118 |
| United States | 112 |
| Florida | 107 |
| China | 103 |
| Texas | 72 |
| United Kingdom | 72 |
| Japan | 70 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 5 |
| Meets WWC Standards with or without Reservations | 11 |
| Does not meet standards | 8 |
Multiple Choice and True/False Tests: Reliability Measures and Some Implications of Negative Marking
Burton, Richard F. – Assessment & Evaluation in Higher Education, 2004
The standard error of measurement usefully provides confidence limits for scores in a given test, but is it possible to quantify the reliability of a test with just a single number that allows comparison of tests of different format? Reliability coefficients do not do this, being dependent on the spread of examinee attainment. Better in this…
Descriptors: Multiple Choice Tests, Error of Measurement, Test Reliability, Test Items
Fenna, Doug S. – European Journal of Engineering Education, 2004
Multiple-choice tests (MCTs) have several advantages that are becoming more relevant in the current financial climate. In particular, they can be machine marked. As an objective testing method, MCTs are particularly relevant to engineering and other factual courses, but they are not widely used in engineering because students can benefit from…
Descriptors: Guessing (Tests), Testing, Multiple Choice Tests, Engineering Education
Cowley, Brian J.; Lindgren, Ann; Langdon, David – InSight: A Collection of Faculty Scholarship, 2006
Critical thinking is often absent from classroom endeavor because it is hard to define (Gelder, 2005) or is difficult to assess (Bissell & Lemons, 2006). Critical thinking is defined as application, analysis, synthesis, and evaluation (Browne & Minnick, 2005). This paper shows how self-experimentation and single-subject methodology can be used to…
Descriptors: Critical Thinking, Thinking Skills, Assignments, Teaching Methods
Johnson, Lynn V. – Journal of Physical Education, Recreation & Dance (JOPERD), 2005
Physical educators are facing increasing pressure to demonstrate accountability through student assessment. At the same time, they struggle to find ways to efficiently and effectively assess their students and to record and track the data. On a daily basis, they are confronted with a variety of barriers relating to student assessment, such as high…
Descriptors: Physical Education Teachers, Physical Education, Student Evaluation, Accountability
Riccomini, Paul J.; Stecker, Pamela M. – Journal of Special Education Technology, 2005
Two types of independent practice activities to improve accuracy of pre-service teachers' measurement of oral reading fluency (ORF) were contrasted. Forty pre-service teachers, enrolled in an introductory special education course, received instructor-delivered classroom instruction on measuring ORF. After lecture and guided practice, participants…
Descriptors: Oral Reading, Educational Technology, Reading Fluency, Preservice Teachers
Lindblom-Ylanne, Sari; Pihlajamaki, Heikki; Kotkas, Toomas – Active Learning in Higher Education: The Journal of the Institute for Learning and Teaching, 2006
This study focuses on comparing the results of self-, peer- and teacher-assessment of student essays, as well as on exploring students' experiences of the self- and peer-assessment processes. Participants were 15 law students. The scoring matrix used in the study made assessment easy, according to teachers and students alike. Self-assessment…
Descriptors: Self Evaluation (Individuals), Peer Evaluation, Teacher Evaluation, Essays
Hopwood, Christopher J.; Richard, David C. S. – Assessment, 2005
Research on the Wechsler Adult Intelligence Scale-Revised and Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) suggests that practicing clinical psychologists and graduate students make item-level scoring errors that affect IQ, index, and subtest scores. Studies have been limited in that Full-Scale IQ (FSIQ) and examiner administration,…
Descriptors: Scoring, Psychologists, Intelligence Quotient, Graduate Students
Evans, Sam; Daniel, Tabitha; Mikovch, Alice; Metze, Leroy; Norman, Antony – Journal of Technology and Teacher Education, 2006
If colleges of education are going to successfully prepare teacher candidates to meet NETS-T standards (Kelly, 2002), then teacher education programs must begin developing strategies to assess technology competencies of beginning college students. Colleges must then move beyond these assessments to providing student support for achieving…
Descriptors: Preservice Teachers, Preservice Teacher Education, Teacher Education Programs, Technology Integration
Blattner, Nancy H.; Frazier, Christina L. – Assessment Update, 2004
Three core learning objectives in the University Studies (general education) curriculum at Southeast Missouri State University were targeted for assessment: the ability to locate and gather information; the ability to think, reason, and analyze critically; and the ability to communicate effectively. The Assessment Committee, an interdisciplinary…
Descriptors: Educational Objectives, General Education, Critical Thinking, Higher Education
James, Cindy L. – Assessing Writing, 2006
How do scores from writing samples generated by computerized essay scorers compare to those generated by "untrained" human scorers, and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample…
Descriptors: Writing (Composition), Predictive Validity, Scoring, Validity
Anastas, Jeane W. – Research on Social Work Practice, 2004
Qualitative evaluation studies can differ markedly from quantitative ones in both purpose and method and therefore must be understood and evaluated on their own terms. This article defines qualitative evaluation research and describes key parameters of quality to be considered when conducting and evaluating these studies in terms that take their…
Descriptors: Evaluation Research, Prior Learning, Qualitative Research, Epistemology
Iceland, John – Measurement: Interdisciplinary Research and Perspectives, 2005
This article discusses the theoretical underpinnings of different types of income poverty measures--absolute, relative, and a National Academy of Sciences (NAS) "quasi-relative" one--and empirically assesses them by tracking their performance over time and across demographic groups. Part of the assessment involves comparing these measures to…
Descriptors: Poverty, Income, Poverty Programs, Social Indicators
Hafner, John C.; Hafner, Patti M. – International Journal of Science Education, 2003
Although the rubric has emerged as one of the most popular assessment tools in progressive educational programs, there is an unfortunate dearth of information in the literature quantifying the actual effectiveness of the rubric as an assessment tool "in the hands of the students." This study focuses on the validity and reliability of the rubric as…
Descriptors: Interrater Reliability, Generalizability Theory, Biology, Scoring Rubrics
Wiggins, Grant; McTighe, Jay – Association for Supervision and Curriculum Development, 2007
An essential part of moving forward with the Understanding by Design[R] framework is to make sure its principles and strategies are reflected in all aspects of your school improvement efforts, including curriculum planning, leadership, teacher professional development, and action research. Here's a book designed to help you. From creating your…
Descriptors: Curriculum Development, Action Research, Educational Change, Leadership
National Assessment Governing Board, 2007
The "Science Assessment and Item Specifications for the 2009 NAEP (National Assessment of Educational Progress)" (the "Specifications") translates the "Science Framework for the 2009 NAEP" (the "Framework") into guidelines for developing items and for developing the assessment as a whole. The primary purpose of the "Specifications" is to provide…
Descriptors: Student Evaluation, Science Tests, National Competency Tests, Position Papers
