Seipel, Ben; Kennedy, Patrick C.; Carlson, Sarah E.; Clinton-Lisell, Virginia; Davison, Mark L. – Grantee Submission, 2022
As access to higher education increases, it is important to monitor students with special needs to facilitate the provision of appropriate resources and support. Although metrics such as ACT's (formerly American College Testing) "reading readiness" provide insight into how many students may need such resources, they do not specify…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Reading Tests, Reading Comprehension
Gurung, Ashish; Vanacore, Kirk; McReynolds, Andrew A.; Ostrow, Korinn S.; Worden, Eamon S.; Sales, Adam C.; Heffernan, Neil T. – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open-ended and closed-ended activities. The growth and scalability of Computer-Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize closed-ended activities (e.g., Multiple-Choice Questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Olsen, Joe; Adair, Amy; Gobert, Janice; Sao Pedro, Michael; O'Brien, Mariel – Grantee Submission, 2022
Many national science frameworks (e.g., Next Generation Science Standards) argue that developing mathematical modeling competencies is critical for students' deep understanding of science. However, science teachers may be unprepared to assess these competencies. We are addressing this need by developing virtual lab performance assessments that…
Descriptors: Mathematical Models, Intelligent Tutoring Systems, Performance Based Assessment, Data Collection
Seipel, Ben; Carlson, Sarah E.; Clinton-Lisell, Virginia; Davison, Mark L.; Kennedy, Patrick C. – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Albacete, Patricia; Silliman, Scott; Jordan, Pamela – Grantee Submission, 2017
Intelligent tutoring systems (ITS), like human tutors, try to adapt to a student's knowledge level so that instruction is tailored to their needs. One aspect of this adaptation relies on the ability to understand the student's initial knowledge so as to build on it, avoiding teaching what the student already knows and focusing on…
Descriptors: Intelligent Tutoring Systems, Knowledge Level, Multiple Choice Tests, Computer Assisted Testing
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Grantee Submission, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
Kosh, Audra E.; Greene, Jeffrey A.; Murphy, P. Karen; Burdick, Hal; Firetto, Carla M.; Elmore, Jeff – Grantee Submission, 2018
We explored the feasibility of using automated scoring to assess upper-elementary students' reading ability through analysis of transcripts of students' small-group discussions about texts. Participants included 35 fourth-grade students across two classrooms that engaged in a literacy intervention called Quality Talk. During the course of one…
Descriptors: Computer Assisted Testing, Small Group Instruction, Group Discussion, Student Evaluation
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Higgs, Karyn; Magliano, Joseph P.; Vidal-Abarca, Eduardo; Martínez, Tomas; McNamara, Danielle S. – Grantee Submission, 2015
Some individual difference factors are more strongly correlated with performance on postreading questions when the text is not available than when it is. The present study explores whether similar interactions occur with bridging skill, which refers to a reader's propensity to establish connections between explicit text segments during reading. Undergraduates…
Descriptors: Correlation, Individual Differences, Undergraduate Students, Reading Processes