Steven J. Carter; Matthew P. Wilcox; Neil J. Anderson – Reading in a Foreign Language, 2023
This research presents a novel reading fluency (RF) measurement formula that accounts for both reading rate and comprehension. Possible formulas were investigated with 68 participants in a strategic reading course in an intensive English program (IEP) at a small Pacific Island university. The selected formula's scores demonstrated concurrent validity through strong…
Descriptors: Second Language Learning, Silent Reading, Reading Fluency, Reading Comprehension
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Bodmann, Shawn M.; Robinson, Daniel H. – Journal of Educational Computing Research, 2004
This study investigated the effect of several different modes of test administration on scores and completion times. In Experiment 1, paper-based assessment was compared to computer-based assessment. Undergraduates completed the computer-based assessment faster than the paper-based assessment, with no difference in scores. Experiment 2 assessed…
Descriptors: Computer Assisted Testing, Higher Education, Undergraduate Students, Evaluation Methods
Handwerk, Phil – ETS Research Report Series, 2007
Online high schools are growing significantly in number, popularity, and function. However, little empirical data have been published about the effectiveness of these institutions. This research examined the frequency of group work and extended essay writing among online Advanced Placement Program® (AP®) students, and how these tasks may have…
Descriptors: Advanced Placement Programs, Advanced Placement, Computer Assisted Testing, Models
Rizavi, Saba; Way, Walter D.; Lu, Ying; Pitoniak, Mary; Steffen, Manfred – Online Submission, 2004
The purpose of this study was to use realistically simulated data to evaluate various CAT designs for use with the verbal reasoning measure of the Medical College Admissions Test (MCAT). Factors such as item pool depth, content constraints, and item formats often cause repeated adaptive administrations of an item at ability levels that are not…
Descriptors: Test Items, Test Bias, Item Banks, College Admission