Showing all 7 results
Jiyeo Yun – English Teaching, 2023
Studies on automatic scoring systems in writing assessments have also evaluated the relationship between human and machine scores for the reliability of automated essay scoring systems. This study investigated the magnitudes of indices for inter-rater agreement and discrepancy, especially regarding human and machine scoring, in writing assessment.…
Descriptors: Meta Analysis, Interrater Reliability, Essays, Scoring
Peer reviewed
Marinho, Nathalie L.; Witmer, Sara E.; Jess, Nicole; Roschmann, Sarina – Language Assessment Quarterly, 2023
The use of accommodations is often recommended to remove barriers to academic testing among English Learners (ELs). However, it is unclear whether accommodations are particularly effective at improving ELs' test scores. A growing foundation of empirical work has explored this topic. We conducted a meta-analysis that examined several possible…
Descriptors: English Language Learners, Testing Accommodations, Barriers, Scores
Peer reviewed
Goertler, Senta; Gacs, Adam – Unterrichtspraxis/Teaching German, 2018
As online educational programs and courses increase (Allen & Seaman, 2014), it is important to understand the benefits and limitations of this delivery format when assessing students and when comparing learning outcomes. This article addresses the following two questions: (1) What are some of the best practices in assessing…
Descriptors: Online Courses, Second Language Instruction, Second Language Learning, German
Peer reviewed
Kingston, Neal M. – Applied Measurement in Education, 2009
There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly (given the variety of measurement and statistical sampling issues that can affect any one study) the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Printed Materials, Effect Size
Peer reviewed
Wang, Shudong; Jiao, Hong; Young, Michael J.; Brooks, Thomas; Olson, John – Educational and Psychological Measurement, 2008
In recent years, computer-based testing (CBT) has grown in popularity, is increasingly being implemented across the United States, and will likely become the primary mode for delivering tests in the future. Although CBT offers many advantages over traditional paper-and-pencil testing, assessment experts, researchers, practitioners, and users have…
Descriptors: Elementary Secondary Education, Reading Achievement, Computer Assisted Testing, Comparative Analysis
Bergstrom, Betty A. – 1992
This paper reports on existing studies and uses meta-analysis to compare and synthesize the results of 20 studies from 8 research reports comparing the ability-measure equivalence of computer adaptive tests (CAT) and conventional paper-and-pencil tests. Using the research synthesis techniques developed by Hedges and Olkin (1985), it is possible to…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Thompson, Bruce; Melancon, Janet G. – 1990
Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…
Descriptors: Comparative Analysis, Computer Assisted Testing, Correlation, Effect Size