Showing 1 to 15 of 57 results
Peer reviewed | Direct link
Vahe Permzadian; Kit W. Cho – Teaching in Higher Education, 2025
When administering an in-class exam, a common decision that confronts every instructor is whether the exam format should be closed book or open book. The present review synthesizes research examining the effect of administering closed-book or open-book assessments on long-term learning. Although the overall effect of assessment format on learning…
Descriptors: College Students, Tests, Test Format, Long Term Memory
Peer reviewed | Direct link
Darr, Charles – Set: Research Information for Teachers, 2022
Approaching assessment from an end-to-end perspective helps with appropriate assessment selection, design, and practice. Thinking in terms of end-to-end reminds teachers to keep in sight the aspirations (or ends) associated with any assessment activity. It also helps teachers to consider how to integrate assessment activities and capabilities into…
Descriptors: Student Evaluation, Evaluation Methods, Test Selection, Standards
Peer reviewed | Direct link
Mark White; Peter A. Edelsbrunner; Christian M. Thurn – Assessment in Education: Principles, Policy & Practice, 2024
Classroom observation rubrics are a widely adopted tool for measuring the quality of teaching and provide stable conceptualisations of teaching quality that facilitate empirical research. Here, we present four statistical approaches for analysing data from classroom observations: Factor analysis, Rasch modelling, latent class or profile analysis,…
Descriptors: Teacher Effectiveness, Classroom Observation Techniques, Evaluation Methods, Scoring Rubrics
Peer reviewed | Direct link
Kiri Mealings; Kelly Miles; Joerg M. Buchholz – Journal of Speech, Language, and Hearing Research, 2024
Purpose: Listening is the gateway to learning in the mainstream classroom; however, classrooms are noisy environments, making listening challenging. Therefore, speech-in-noise tests that realistically incorporate the complexity of the classroom listening environment are needed. The aim of this article was to review the speech stimuli, noise…
Descriptors: Literature Reviews, Meta Analysis, Speech Communication, Acoustics
Peer reviewed | Direct link
Eirini M. Mitropoulou; Leonidas A. Zampetakis; Ioannis Tsaousis – Evaluation Review, 2024
Unfolding item response theory (IRT) models are important alternatives to dominance IRT models in describing the response processes on self-report tests. Their usage is common in personality measures, since they indicate potential differentiations in test score interpretation. This paper aims to gain a better insight into the structure of trait…
Descriptors: Foreign Countries, Adults, Item Response Theory, Personality Traits
Peer reviewed | Direct link
Christopher L. Payten; Kelly A. Weir; Catherine J. Madill – International Journal of Language & Communication Disorders, 2024
Background: Published best-practice guidelines and standardized protocols for voice assessment recommend multidisciplinary evaluation utilizing a comprehensive range of clinical measures. Previous studies report variations in assessment practices when compared with these guidelines. Aims: To provide an up-to-date evaluation of current global…
Descriptors: Voice Disorders, Speech Language Pathology, Allied Health Personnel, Auditory Tests
Peer reviewed | Direct link
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Educational Measurement: Issues and Practice, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-in-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this article, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Peer reviewed | Download full text (PDF on ERIC)
Singh, Upasana Gitanjali; de Villiers, Mary Ruth – International Review of Research in Open and Distributed Learning, 2017
e-Assessment, in the form of tools and systems that deliver and administer multiple choice questions (MCQs), is used increasingly, raising the need for evaluation and validation of such systems. This research uses literature and a series of six empirical action research studies to develop an evaluation framework of categories and criteria called…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Test Selection, Action Research
Peer reviewed | Direct link
Eliasson, Ann-Christin – Physical & Occupational Therapy in Pediatrics, 2012
Assessments used for both clinical practice and research should show evidence of validity and reliability for the target group of people. It is easy to agree with this statement, but it is not always easy to choose the right assessment for the right purpose. Recently there have been increasing numbers of studies which investigate further the…
Descriptors: Psychometrics, Test Construction, Test Reliability, Test Validity
Peer reviewed | Direct link
Friberg, Jennifer C. – Child Language Teaching and Therapy, 2010
Nine preschool and school-age language assessment tools found to have acceptable levels of identification accuracy were evaluated to determine their overall levels of psychometric validity for use in diagnosing the presence/absence of language impairment. Eleven specific criteria based on those initially devised by McCauley and Swisher (1984) were…
Descriptors: Test Selection, Language Impairments, Test Validity, Psychometrics
Peer reviewed | Direct link
Papay, John P. – American Educational Research Journal, 2011
Recently, educational researchers and practitioners have turned to value-added models to evaluate teacher performance. Although value-added estimates depend on the assessment used to measure student achievement, the importance of outcome selection has received scant attention in the literature. Using data from a large, urban school district, I…
Descriptors: Urban Schools, Teacher Effectiveness, Reading Achievement, Achievement Tests
Peer reviewed | Direct link
Penner-Williams, Janet; Smith, Tom E. C.; Gartin, Barbara C. – Assessment for Effective Intervention, 2009
Written language is a complex set of skills that facilitate communication and that are developed in a predictable sequence. It is therefore possible to analyze current skills, identify deficits, plan interventions, and determine the effectiveness of the intervention. To effectively accomplish these tasks, educators need to choose appropriate…
Descriptors: Student Evaluation, Written Language, Evaluation Methods, Evaluation Criteria
Peer reviewed | Direct link
Miller, Lynda – Assessment for Effective Intervention, 2009
While formal, standardized assessment instruments provide valuable and necessary information about students' various abilities and skills, the use of informal and qualitative assessment approaches has the benefit of leading directly to instruction based squarely on an individual student's needs, strengths, and existing skills. This article…
Descriptors: Writing Improvement, Grade 6, Elementary Secondary Education, Student Evaluation
Peer reviewed | Direct link
McEnrue, Mary Pat; Groves, Kevin – Human Resource Development Quarterly, 2006
This article provides a comprehensive review of research regarding five types of validity for each of four major tests used to measure emotional intelligence (EI). It culls and synthesizes information scattered among a host of articles in academic journals, technical reports, chapters, and books, as well as unpublished papers and manuscripts. It…
Descriptors: Human Resources, Emotional Intelligence, Intelligence Tests, Validity
Salend, Spencer S. – Pointer, 1984
Assessment instruments should be selected on the basis of test design variables (such as date of development and content sequence), test construction variables (including standardization results), examinee-related variables (such as prerequisite skills and vocabulary level), examiner-related variables (including preparation and skills), and…
Descriptors: Elementary Secondary Education, Evaluation Methods, Standardized Tests, Test Selection