Showing 1 to 15 of 162 results
Peer reviewed
Harold Doran; Tetsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
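(The article's generalized objective function is not reproduced here; as a point of reference, below is a minimal sketch of the classic maximum-information item selection rule under a two-parameter logistic (2PL) model, which objective-function approaches of this kind typically generalize. The item bank, parameters, and provisional ability estimate are illustrative assumptions, not values from the article.)

```python
# Minimal, generic sketch of maximum-information item selection for CAT
# under a 2PL IRT model. All item parameters and the ability estimate
# below are illustrative only.
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))  # probability of a correct response
    return a * a * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    """Pick the unadministered item with maximum information at theta_hat."""
    candidates = [item for item in item_bank if item["id"] not in administered]
    return max(candidates, key=lambda item: item_information(theta_hat, item["a"], item["b"]))

# Illustrative item bank: discrimination (a) and difficulty (b) per item.
bank = [
    {"id": 1, "a": 1.2, "b": -0.5},
    {"id": 2, "a": 0.8, "b": 0.0},
    {"id": 3, "a": 1.5, "b": 0.7},
]
print(select_next_item(theta_hat=0.3, item_bank=bank, administered={1}))
```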
Peer reviewed
Liou, Gloria; Bonner, Cavan V.; Tay, Louis – International Journal of Testing, 2022
With the advent of big data and advances in technology, psychological assessments have become increasingly sophisticated and complex. Nevertheless, traditional psychometric issues concerning the validity, reliability, and measurement bias of such assessments remain fundamental in determining whether score inferences of human attributes are…
Descriptors: Psychometrics, Computer Assisted Testing, Adaptive Testing, Data
Peer reviewed
Dombrowski, Stefan C.; Casey, Corinne – Journal of Psychoeducational Assessment, 2022
This article reviews the administrative, scoring, and psychometric properties of the Wechsler Individual Achievement Test, Fourth Edition (WIAT-4, NCS Pearson, 2020). The WIAT-4 is one of the more commonly administered broadband measures of academic achievement. The instrument was determined to be well-conceptualized, and generally…
Descriptors: Achievement Tests, Testing, Scoring, Psychometrics
Peer reviewed
Chen, Yunxiao; Lee, Yi-Hsuan; Li, Xiaoou – Journal of Educational and Behavioral Statistics, 2022
In standardized educational testing, test items are reused in multiple test administrations. To ensure the validity of test scores, the psychometric properties of items should remain unchanged over time. In this article, we consider the sequential monitoring of test items, in particular, the detection of abrupt changes to their psychometric…
Descriptors: Standardized Tests, Test Items, Test Validity, Scores
Peer reviewed
Barry, Carol L.; Jones, Andrew T.; Ibáñez, Beatriz; Grambau, Marni; Buyske, Jo – Educational Measurement: Issues and Practice, 2022
In response to the COVID-19 pandemic, the American Board of Surgery (ABS) shifted from in-person to remote administrations of the oral certifying exam (CE). Although the overall exam architecture remains the same, there are a number of differences in administration and staffing costs, exam content, security concerns, and the tools used to give the…
Descriptors: COVID-19, Pandemics, Computer Assisted Testing, Verbal Tests
Elizabeth Talbott; Andres De Los Reyes; Devin M. Kearns; Jeannette Mancilla-Martinez; Mo Wang – Exceptional Children, 2023
Evidence-based assessment (EBA) requires that investigators employ scientific theories and research findings to guide decisions about what domains to measure, how and when to measure them, and how to make decisions and interpret results. To implement EBA, investigators need high-quality assessment tools along with evidence-based processes. We…
Descriptors: Evidence Based Practice, Evaluation Methods, Special Education, Educational Research
Peer reviewed
Sadeghi, Karim; Abolfazli Khonbi, Zainab – Language Testing in Asia, 2017
As perfectly summarised by Ida Lawrence, "Testing is growing by leaps and bounds across the world. There is a realization that a nation's well-being depends crucially on the educational achievement of its population. Valid tests are an essential tool to evaluate a nation's educational standing and to implement efficacious educational reforms.…
Descriptors: Test Items, Item Response Theory, Computer Assisted Testing, Adaptive Testing
Nebraska Department of Education, 2022
In Winter 2021-2022, the Nebraska Student-Centered Assessment System (NSCAS) assessments are administered in English language arts (ELA) and mathematics in Grades 3-8. In Spring 2021-2022, the NSCAS assessments are administered in ELA and mathematics in Grades 3-8 and in science in Grades 5 and 8. The purposes of the NSCAS assessments are to…
Descriptors: English, Language Arts, Student Centered Learning, Mathematics Tests
Hsueh, JoAnn; Portilla, Ximena; McCormick, Meghan; Balu, Rekha; Najafi, Behnosh – MDRC, 2022
The Measures for Early Success Initiative aims to reimagine the landscape of early learning assessments for the millions of 3- to 5-year-olds enrolled in Pre-K, so that more equitable data can be applied to meaningfully support and strengthen early learning experiences for all young children. This document outlines design parameters for child…
Descriptors: Early Childhood Education, Preschool Children, Student Evaluation, Child Development
Nebraska Department of Education, 2023
In Fall and Winter 2022-2023, the NSCAS assessments were administered in English language arts (ELA) and mathematics for grades 3-8. In Spring 2022-2023, the NSCAS assessments were administered in ELA and mathematics for grades 3-8 and in science for grades 5 and 8. The purposes of the NSCAS assessments are to measure and report Nebraska students'…
Descriptors: English, Language Arts, Student Centered Learning, Mathematics Tests
Peer reviewed
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
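(As a rough illustration of the idea summarized above, namely scoring open-ended responses by mimicking human raters, the sketch below fits a simple text-regression model to a handful of hypothetical human-scored responses. The data, features, and model choice are illustrative assumptions and do not represent the approach presented in the module.)

```python
# Toy sketch of automated scoring: predict human-assigned scores for
# open-ended responses from simple text features. Data and model choices
# are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training set: responses already scored by human raters (0-4 scale).
responses = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants eat sunlight.",
    "The process uses chlorophyll to turn carbon dioxide and water into glucose.",
]
human_scores = [4, 1, 3]

# TF-IDF features plus ridge regression trained to mimic the human scores.
scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
scorer.fit(responses, human_scores)

# Score a new, unseen response (predictions are typically rounded and capped to the scale).
print(scorer.predict(["Chlorophyll captures light to make glucose from carbon dioxide and water."]))
```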
Peer reviewed
Full text available on ERIC (PDF)
Rotou, Ourania; Rupp, André A. – ETS Research Report Series, 2020
This research report provides a description of the processes of evaluating the "deployability" of automated scoring (AS) systems from the perspective of large-scale educational assessments in operational settings. It discusses a comprehensive psychometric evaluation that entails analyses that take into consideration the specific purpose…
Descriptors: Computer Assisted Testing, Scoring, Educational Assessment, Psychometrics
Kate Mahoney – Multilingual Matters, 2024
This book is a comprehensive introduction to the topic of assessing students who use two or more languages in their daily life. The book provides foundational information for assessing multilingual learners (MLs) in schools, with an emphasis on school language and content. Major assessment ideas are viewed through a framework called PUMI (Purpose,…
Descriptors: Multilingualism, English Language Learners, Student Evaluation, Code Switching (Language)
Peer reviewed
Heritage, Margaret; Kingston, Neal M. – Journal of Educational Measurement, 2019
Classroom assessment and large-scale assessment have, for the most part, existed in mutual isolation. Some experts have felt this is for the best and others have been concerned that the schism limits the potential contribution of both forms of assessment. Margaret Heritage has long been a champion of best practices in classroom assessment. Neal…
Descriptors: Measurement, Psychometrics, Context Effect, Classroom Environment
Peer reviewed
von Davier, Matthias; Khorramdel, Lale; He, Qiwei; Shin, Hyo Jeong; Chen, Haiwen – Journal of Educational and Behavioral Statistics, 2019
International large-scale assessments (ILSAs) transitioned from paper-based assessments to computer-based assessments (CBAs), facilitating the use of new item types and more effective data collection tools. This allows implementation of more complex test designs and the collection of process and response time (RT) data. These new data types can be used to…
Descriptors: International Assessment, Computer Assisted Testing, Psychometrics, Item Response Theory