Showing 1 to 15 of 27 results
Peer reviewed
Pablo Robles-García; Stuart McLean; Jeffrey Stewart; Ji-young Shin; Claudia Helena Sánchez-Gutiérrez – Language Assessment Quarterly, 2024
Recent literature in the field of L2 vocabulary assessment has advocated for the development of written receptive vocabulary tests such as Vocabulary Levels Tests (VLTs) that use: (a) meaning-recall item formats, (b) a minimum of 40 item counts per 1,000-frequency band to improve level estimates, and (c) lemmas (not word-families) as the lexical…
Descriptors: Spanish, Test Validity, Test Construction, Vocabulary Development
Peer reviewed
PDF on ERIC
Anani Sarab, Mohammad Reza; Rahmani, Simindokht – International Journal of Language Testing, 2023
Language testing and assessment have grown in popularity and gained significance in the last few decades, and there is a rising need for assessment literate stakeholders in the field of language education. As teachers play a major role in assessing students, there is a need to make sure they have the right level of assessment knowledge and skills…
Descriptors: Language Tests, Literacy, Second Language Learning, Factor Analysis
Peer reviewed
PDF on ERIC
Kosan, Aysen Melek Aytug; Koç, Nizamettin; Elhan, Atilla Halil; Öztuna, Derya – International Journal of Assessment Tools in Education, 2019
Progress Test (PT) is a form of assessment that simultaneously measures the ability levels of all students in a given educational program, and their progress over time, by administering the same questions and repeating the process at regular intervals with parallel tests. Our objective was to generate an item bank for the PT and to examine the…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Medical Education
Peer reviewed
Mix, Daniel F.; Tao, Shuqin – AERA Online Paper Repository, 2017
Purposes: This study uses think-alouds and cognitive interviews to provide validity evidence for an online formative assessment--i-Ready Standards Mastery (iSM) mini-assessments--which involves a heavy use of innovative items. iSM mini-assessments are intended to help teachers determine student understanding of each of the on-grade-level Common…
Descriptors: Formative Evaluation, Computer Assisted Testing, Test Validity, Student Evaluation
Peer reviewed
Herro, Danielle; Quigley, Cassie; Andrews, Jessica; Delacruz, Girlie – International Journal of STEM Education, 2017
Background: The shortage of skilled workers choosing STEM (Science, Technology, Engineering, and Math) careers in the USA and worldwide has fueled a movement towards STEAM, in which the "A" addresses the arts and humanities. STEAM education has been proposed as a way to offer relevant problems to solve while drawing on creative and…
Descriptors: STEM Education, Art Education, Humanities, Elementary Secondary Education
Peer reviewed
Bell, Sherry Mee; McCallum, R. Steve; Ziegler, Mary; Davis, C. A.; Coleman, MariBeth – Annals of Dyslexia, 2013
The purpose of this paper is to describe briefly the development and utility of the "Assessment of Reading Instructional Knowledge-Adults" ("ARIK-A"), the only nationally normed (n = 468) measure of adult reading instructional knowledge, created to facilitate professional development of adult educators. Developmental data…
Descriptors: Test Construction, Test Validity, Reading Instruction, Adult Education
Peer reviewed
Mislevy, Robert J.; Haertel, Geneva; Cheng, Britte H.; Ructtinger, Liliana; DeBarger, Angela; Murray, Elizabeth; Rose, David; Gravel, Jenna; Colker, Alexis M.; Rutstein, Daisy; Vendlinski, Terry – Educational Research and Evaluation, 2013
Standardizing aspects of assessments has long been recognized as a tactic to help make evaluations of examinees fair. It reduces variation in irrelevant aspects of testing procedures that could advantage some examinees and disadvantage others. However, recent attention to making assessment accessible to a more diverse population of students…
Descriptors: Testing Accommodations, Access to Education, Testing, Psychometrics
Anderson, Daniel; Lai, Cheng-Fei; Nese, Joseph F. T.; Park, Bitnara Jasmine; Saez, Leilani; Jamgochian, Elisa; Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2010
In the following technical report, we present evidence of the technical adequacy of the easyCBM® math measures in grades K-2. In addition to reliability information, we present criterion-related validity evidence, both concurrent and predictive, and construct validity evidence. The results represent data gathered throughout the 2009/2010 school…
Descriptors: Curriculum Based Assessment, Mathematics Tests, Test Reliability, Test Validity
Camara, Wayne – College Board, 2011
This presentation was presented at the 2011 National Conference on Student Assessment (CCSSO). The focus of this presentation is how to validate the common core state standards (CCSS) in math and ELA and the subsequent assessments that will be developed by state consortia. The CCSS specify the skills students need to be ready for post-secondary…
Descriptors: College Readiness, Career Readiness, Benchmarking, Student Evaluation
Nese, Joseph F. T.; Lai, Cheng-Fei; Anderson, Daniel; Jamgochian, Elisa M.; Kamata, Akihito; Saez, Leilani; Park, Bitnara J.; Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2010
In this technical report, data are presented on the practical utility, reliability, and validity of the easyCBM® mathematics (2009-2010 version) measures for students in grades 3-8 within four districts in two states. Analyses include: minimum acceptable within-year growth; minimum acceptable year-end benchmark performance; internal and…
Descriptors: Curriculum Based Assessment, Mathematics Tests, Test Reliability, Test Validity
Peer reviewed
Embretson, Susan E. – Educational Researcher, 2007
Lissitz and Samuelsen (2007) have proposed a framework that seemingly deems construct validity evidence irrelevant to supporting educational test meaning. The author of this article agrees with Lissitz and Samuelsen that internal evidence establishes test meaning, but she argues that construct validity need not be removed from the validity sphere.…
Descriptors: Construct Validity, Test Validity, Evaluation Methods, Test Construction
Peer reviewed
Moss, Pamela A. – Educational Researcher, 2007
In response to Lissitz and Samuelsen (2007), the author reconstructs the historical arguments for the more comprehensive unitary concept of validity and the principles of scientific inquiry underlying it. Her response is organized in terms of four questions: (a) How did validity in educational measurement come to be conceptualized as unitary, and…
Descriptors: Evaluators, Construct Validity, Test Validity, Measurement
Peer reviewed
Harlen, Wynne – Studies in Educational Evaluation, 2007
The assessment of students is used for various purposes within an assessment system. It has an impact on students, teaching, and the curriculum, with the nature of this impact depending on how the assessment is carried out. In order to evaluate the advantages and disadvantages of particular assessment procedures, criteria need to be applied. This…
Descriptors: Evaluation Criteria, Student Evaluation, Test Validity, Construct Validity
Peer reviewed
Gorin, Joanna S. – Educational Researcher, 2007
Lissitz and Samuelsen (2007) propose a new framework for validity theory and terminology, emphasizing a shift in theory and practice toward issues of test content rather than constructs. The author of this article argues that several of Lissitz and Samuelsen's critiques of validity theory focus on previously considered, but subsequently discarded,…
Descriptors: Test Content, Test Validity, Construct Validity, Test Construction
Bailey, Jennifer; Little, Chelsea; Rigney, Rex; Thaler, Anna; Weiderman, Ken; Yorkovich, Ben – Online Submission, 2010
This handbook is designed as a quick reference for first-year teachers who find themselves in an assessment driven environment with little experience to help make sense of the language, underlying philosophy, or organizational structure of the assessment system. The handbook begins with advice on developing and evaluating effective learning…
Descriptors: Student Evaluation, Portfolio Assessment, Elementary Secondary Education, Performance Based Assessment