Showing 1 to 15 of 112 results
Peer reviewed
PDF on ERIC: Download full text
Bailey, Jessica; Marcus, Jill; Gerzon, Nancy; Early-Hersey, Heidi – Regional Educational Laboratory Northeast & Islands, 2020
This self-paced online course provides educators with detailed information on creating and using performance assessments. Through five 30-minute modules, practitioners, instructional leaders, and administrators will learn the foundational concepts of assessment literacy and how to develop, score, and use performance assessments. They will also…
Descriptors: Performance Based Assessment, Test Construction, Test Use, Assessment Literacy
Peer reviewed
Direct link
Attali, Yigal – Educational Measurement: Issues and Practice, 2019
Rater training is an important part of developing and conducting large-scale constructed-response assessments. As part of this process, candidate raters have to pass a certification test to confirm that they are able to score consistently and accurately before they begin scoring operationally. Moreover, many assessment programs require raters to…
Descriptors: Evaluators, Certification, High Stakes Tests, Scoring
Papageorgiou, Spiros; Davis, Larry; Norris, John M.; Garcia Gomez, Pablo; Manna, Venessa F.; Monfils, Lora – Educational Testing Service, 2021
The "TOEFL® Essentials"™ test is a new English language proficiency test in the "TOEFL"® family of assessments. It measures foundational language skills and communication abilities in academic and general (daily life) contexts. The test covers the four language skills of reading, listening, writing, and speaking and is intended…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Language Proficiency
Peer reviewed
PDF on ERIC: Download full text
Oliveri, María Elena; Nastal, Jessica; Slomp, David – ETS Research Report Series, 2020
This report discusses frameworks and assessment development approaches to consider fairness, opportunity to learn, and consequences of test use in the design and use of assessments administered to diverse populations. Examples include the integrated design and appraisal framework and the sociocognitively based evidence-centered design approach.…
Descriptors: Culture Fair Tests, Guidelines, Test Use, Test Construction
Peer reviewed
Direct link
Schmidgall, Jonathan E.; Getman, Edward P.; Zu, Jiyun – Language Testing, 2018
In this study, we define the term "screener test," elaborate key considerations in test design, and describe how to incorporate the concepts of practicality and argument-based validation to drive an evaluation of screener tests for language assessment. A screener test is defined as a brief assessment designed to identify an examinee as a…
Descriptors: Test Validity, Test Use, Test Construction, Language Tests
New Meridian Corporation, 2020
New Meridian Corporation has developed the "Quality Testing Standards and Criteria for Comparability Claims" (QTS) to provide guidance to states that are interested in including New Meridian content and would like to either keep reporting scores on the New Meridian Scale or use the New Meridian performance levels; that is, the state…
Descriptors: Testing, Standards, Comparative Analysis, Test Content
Peer reviewed
Direct link
Wilcox, Bethany R.; Caballero, Marcos D.; Baily, Charles; Sadaghiani, Homeyra; Chasteen, Stephanie V.; Ryan, Qing X.; Pollock, Steven J. – Physical Review Special Topics - Physics Education Research, 2015
The use of validated conceptual assessments alongside conventional course exams to measure student learning in introductory courses has become standard practice in many physics departments. These assessments provide a more standard measure of certain learning goals, allowing for comparisons of student learning across instructors, semesters,…
Descriptors: Student Evaluation, Physics, Tests, Advanced Courses
New York State Education Department, 2018
This technical report provides detailed information regarding the technical, statistical, and measurement attributes of the New York State Testing Program (NYSTP) for the Grades 3-8 English Language Arts (ELA) and Mathematics 2018 Operational Tests. This report includes information about test content and test development, item (i.e., individual…
Descriptors: English, Language Arts, Language Tests, Mathematics Tests
Peer reviewed
Direct link
International Journal of Testing, 2019
These guidelines describe considerations relevant to the assessment of test takers in or across countries or regions that are linguistically or culturally diverse. The guidelines were developed by a committee of experts to help inform test developers, psychometricians, test users, and test administrators about fairness issues in support of the…
Descriptors: Test Bias, Student Diversity, Cultural Differences, Language Usage
New York State Education Department, 2017
This technical report provides detailed information regarding the technical, statistical, and measurement attributes of the New York State Testing Program (NYSTP) for the Grades 3-8 English Language Arts (ELA) and Mathematics 2017 Operational Tests. This report includes information about test content and test development, item (i.e., individual…
Descriptors: English, Language Arts, Language Tests, Mathematics Tests
Peer reviewed
Direct link
Greathouse, Dan; Shaughnessy, Michael F. – Journal of Psychoeducational Assessment, 2016
Whenever a major intelligence or achievement test is revised, there is always renewed interest in the underlying structure of the test, as well as in changes to its scoring, administration, and interpretation. In this interview, Amy Gabel discusses the most recent revision of the "Wechsler Intelligence Scale for Children-Fifth…
Descriptors: Children, Intelligence Tests, Test Use, Test Validity
Peer reviewed
Direct link
Wang, Huan; Choi, Ikkyu; Schmidgall, Jonathan; Bachman, Lyle F. – Language Testing, 2012
This review departs from current practice in reviewing tests in that it employs an "argument-based approach" to test validation to guide the review (e.g. Bachman, 2005; Kane, 2006; Mislevy, Steinberg, & Almond, 2002). Specifically, it follows an approach to test development and use that Bachman and Palmer (2010) call the process of "assessment…
Descriptors: Evidence, Stakeholders, Test Construction, Test Use
Hambleton, Ronald K. – 1996
The International Test Commission formed a 13-person committee of psychologists representing a number of international organizations to prepare a set of guidelines for adapting educational and psychological tests. The committee has worked for 3 years to produce near-final drafts of 22 guidelines organized into 4 categories: (1) context; (2)…
Descriptors: Educational Testing, Psychological Testing, Scoring, Test Construction
Peer reviewed
Diamond, Esther E.; Fremer, John – Educational Measurement: Issues and Practice, 1989
The Joint Committee on Testing Practices has completed the "Code of Fair Testing Practices in Education," which is meant for the public and focuses on the proper use of tests in education--admissions, educational assessment and diagnosis, and student placement. The Code separately addresses test developers' and users' roles. (SLD)
Descriptors: Educational Testing, Evaluation Utilization, Examiners, Scoring
Myford, Carol M.; And Others – 1996
The development of scoring rubrics to evaluate student work was studied, concentrating on the use of intermediate points in rating scales. The study explored how scales that allow for intermediate points between defined categories should be constructed and used. In the recent National Assessment of Educational Progress (NAEP) visual arts field test, researchers…
Descriptors: Evaluators, Rating Scales, Scoring, Scoring Rubrics