Showing 181 to 195 of 510 results
Peer reviewed
Lottridge, Susan; Wood, Scott; Shaw, Dan – Applied Measurement in Education, 2018
This study sought to provide a framework for evaluating machine score-ability of items using a new score-ability rating scale, and to determine the extent to which ratings were predictive of observed automated scoring performance. The study listed and described a set of factors that are thought to influence machine score-ability; these factors…
Descriptors: Program Effectiveness, Computer Assisted Testing, Test Scoring Machines, Scoring
Mullis, Ina V. S., Ed.; Martin, Michael O., Ed.; von Davier, Matthias, Ed. – International Association for the Evaluation of Educational Achievement, 2021
TIMSS (Trends in International Mathematics and Science Study) is a long-standing international assessment of mathematics and science at the fourth and eighth grades that has been collecting trend data every four years since 1995. About 70 countries use TIMSS trend data for monitoring the effectiveness of their education systems in a global…
Descriptors: Achievement Tests, International Assessment, Science Achievement, Mathematics Achievement
Peer reviewed
Lee, Hee-Sun; McNamara, Danielle; Bracey, Zoë Buck; Wilson, Christopher; Osborne, Jonathan; Haudek, Kevin C.; Liu, Ou Lydia; Pallant, Amy; Gerard, Libby; Linn, Marcia C.; Sherin, Bruce – Grantee Submission, 2019
Rapid advancements in computing have enabled automatic analyses of written texts created in educational settings. The purpose of this symposium is to survey several applications of computerized text analyses used in the research and development of productive learning environments. Four featured research projects have developed or been working on:…
Descriptors: Computational Linguistics, Written Language, Computer Assisted Testing, Scoring
Peer reviewed
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Matthias – ETS Research Report Series, 2017
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error, as often (a) humans code inconsistently, and (b) coding reliability in…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Michelle M. Neumann; Jason L. Anthony; Noé A. Erazo; David L. Neumann – Grantee Submission, 2019
The framework and tools used for classroom assessment can have significant impacts on teacher practices and student achievement. Getting assessment right is an important component in creating positive learning experiences and academic success. Recent government reports (e.g., United States, Australia) call for the development of systems that use…
Descriptors: Early Childhood Education, Futures (of Society), Educational Assessment, Evaluation Methods
Dallas, Andrew – ProQuest LLC, 2014
This dissertation examined the overall effects of routing and scoring within a computer adaptive multi-stage framework (ca-MST). Testing in a ca-MST environment has become extremely popular in the testing industry. Testing companies enjoy its efficiency benefits compared to traditional linear testing and its quality-control features over…
Descriptors: Scoring, Computer Assisted Testing, Adaptive Testing, Item Response Theory
Nebraska Department of Education, 2020
The Spring 2020 Nebraska Student-Centered Assessment System (NSCAS) General Summative testing was cancelled due to COVID-19. This technical report documents the processes and procedures that had been implemented to support the Spring 2020 assessments prior to the cancellation. The following sections are presented in this technical report: (1)…
Descriptors: English, Language Arts, Mathematics Tests, Science Tests
Peer reviewed
Guzman-Orth, Danielle A.; Lopez, Alexis A.; Tolentino, Florencia – Language Assessment Quarterly, 2019
The purpose of this study was to create and prototype a dual language assessment task that allows young English learners to use their full repertoire of linguistic resources (language and non-verbal resources) to obtain information about their emergent language abilities. We developed a dual language assessment task in which students described a…
Descriptors: Bilingualism, English Language Learners, Language Tests, Task Analysis
New York State Education Department, 2019
The instructions in this manual explain the responsibilities of school administrators for the New York State Testing Program (NYSTP) Grades 3-8 English Language Arts and Mathematics Tests. School administrators must be thoroughly familiar with the contents of the manual, and the policies and procedures must be followed as written so that testing…
Descriptors: Testing Programs, Mathematics Tests, Test Format, Computer Assisted Testing
Peer reviewed
Rupp, André A. – Applied Measurement in Education, 2018
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…
Descriptors: Design, Automation, Scoring, Test Scoring Machines
Peer reviewed
Ramineni, Chaitanya; Williamson, David – ETS Research Report Series, 2018
Notable mean score differences for the "e-rater"® automated scoring engine and for humans for essays from certain demographic groups were observed for the "GRE"® General Test in use before the major revision of 2012, called rGRE. The use of e-rater as a check-score model with discrepancy thresholds prevented an adverse impact…
Descriptors: Scores, Computer Assisted Testing, Test Scoring Machines, Automation
Peer reviewed
Dickison, Philip; Luo, Xiao; Kim, Doyoung; Woo, Ada; Muntean, William; Bergstrom, Betty – Journal of Applied Testing Technology, 2016
Designing a theory-based assessment with sound psychometric qualities to measure a higher-order cognitive construct is a highly desired yet challenging task for many practitioners. This paper proposes a framework for designing a theory-based assessment to measure a higher-order cognitive construct. This framework results in a modularized yet…
Descriptors: Thinking Skills, Cognitive Tests, Test Construction, Nursing
Peer reviewed
Nese, Joseph F. T.; Alonzo, Julie; Kamata, Akihito – Grantee Submission, 2016
The purpose of this study was to compare traditional oral reading fluency (ORF) measures to a computerized oral reading evaluation (CORE) system that uses speech recognition software. We applied a mixed model approach with two within-subject variables to test the mean words correct per minute (WCPM) score differences and the error rates between: passage length (25, 50, 85,…
Descriptors: Text Structure, Oral Reading, Reading Fluency, Reading Tests
Peer reviewed
Aviad-Levitzky, Tami; Laufer, Batia; Goldstein, Zahava – Language Assessment Quarterly, 2019
This article describes the development and validation of the new CATSS (Computer Adaptive Test of Size and Strength), which measures vocabulary knowledge in four modalities -- productive recall, receptive recall, productive recognition, and receptive recognition. In the first part of the paper we present the assumptions that underlie the test --…
Descriptors: Foreign Countries, Test Construction, Test Validity, Test Reliability
Peer reviewed
Gehsmann, Kristin; Spichtig, Alexandra; Tousley, Elias – Literacy Research: Theory, Method, and Practice, 2017
Assessments of developmental spelling, also called spelling inventories, are commonly used to understand students' orthographic knowledge (i.e., knowledge of how written words work) and to determine their stages of spelling and reading development. The information generated by these assessments is used to inform teachers' grouping practices and…
Descriptors: Spelling, Computer Assisted Testing, Grouping (Instructional Purposes), Teaching Methods