Publication Date
In 2025 | 0 |
Since 2024 | 3 |
Since 2021 (last 5 years) | 4 |
Since 2016 (last 10 years) | 6 |
Since 2006 (last 20 years) | 6 |
Source
Grantee Submission | 6 |
Author
Adam C. Sales | 1 |
Andrew A. McReynolds | 1 |
April L. Zenisky | 1 |
Ashish Gurung | 1 |
Ben Backes | 1 |
Ben Seipel | 1 |
Cari F. Herrmann-Abell | 1 |
Eamon S. Worden | 1 |
George E. DeBoer | 1 |
James Cowan | 1 |
Joseph Hardcastle | 1 |
Publication Type
Reports - Research | 5 |
Speeches/Meeting Papers | 3 |
Numerical/Quantitative Data | 1 |
Reports - Evaluative | 1 |
Education Level
Adult Education | 1 |
Elementary Education | 1 |
Elementary Secondary Education | 1 |
Higher Education | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Postsecondary Education | 1 |
Secondary Education | 1 |
Location
Africa | 1 |
Kenya | 1 |
Massachusetts | 1 |
Assessments and Surveys
Massachusetts Comprehensive… | 1 |
Ben Backes; James Cowan – Grantee Submission, 2024
We investigate two research questions using a recent statewide transition from paper to computer-based testing: first, the extent to which test mode effects found in prior studies can be eliminated in large-scale administration; and second, the degree to which online and paper assessments offer different information about underlying student…
Descriptors: Computer Assisted Testing, Test Format, Differences, Academic Achievement
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open-ended and closed-ended activities. The growth and scalability of Computer-Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize closed-ended activities (e.g., Multiple-Choice Questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Grantee Submission, 2024
The goal of personalized assessment is to best fit the needs of each individual test taker, given the purposes of the assessment. Design-In-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this paper, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
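[Editor's note: for readers unfamiliar with the "adaptive test" step in the progression Sireci et al. describe, the core mechanic of a generic computerized adaptive test is selecting, at each step, the unadministered item that is most informative at the current ability estimate. The sketch below is illustrative only; it is not the DIRTy framework, and the 2PL item bank and all parameter values are invented.]

import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b) per item.
bank = np.array([(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2)])
administered = set()

def next_item(theta_hat):
    """Pick the unadministered item with maximum information at theta_hat."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    infos = [item_information(theta_hat, *bank[i]) for i in candidates]
    return candidates[int(np.argmax(infos))]

theta_hat = 0.0               # provisional ability estimate (prior mean)
first = next_item(theta_hat)  # most informative item near theta = 0
print(f"first item administered: {first}")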
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
David Lang; Ben Stenhaug; Rene Kizilcec – Grantee Submission, 2019
This research evaluates the psychometric properties of short-answer response items under a variety of grading rules in the context of a mobile learning platform in Africa. This work has three main findings. First, we introduce the concept of a differential device function (DDF), a type of differential item function that stems from the device a…
Descriptors: Foreign Countries, Psychometrics, Test Items, Test Format
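[Editor's note: the differential device function (DDF) concept above parallels standard differential item functioning (DIF) analysis, with the response device playing the role of the group variable. The sketch below is a rough illustration only, not the authors' method; the logistic-regression check, the synthetic data, and the injected device effect are all assumptions for the demo.]

import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 2000

# Synthetic data for one item: latent ability, device indicator
# (0 = desktop, 1 = phone), and a response that is slightly harder
# on phones -- the injected "DDF" effect this demo should detect.
ability = rng.normal(size=n)
device = rng.integers(0, 2, size=n)
logits = 0.9 * ability - 0.4 * device
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

# Reduced model: the response depends on ability alone.
m0 = sm.Logit(y, sm.add_constant(ability)).fit(disp=0)

# Full model: add the device term; a significant coefficient flags DDF.
X = sm.add_constant(np.column_stack([ability, device]))
m1 = sm.Logit(y, X).fit(disp=0)

# Likelihood-ratio test between the nested models (1 degree of freedom).
lr = 2.0 * (m1.llf - m0.llf)
print(f"device coefficient: {m1.params[2]:.3f}")
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=1):.4f}")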
Joseph Hardcastle; Cari F. Herrmann-Abell; George E. DeBoer – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although the number of studies addressing it is growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation