Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 9
Since 2006 (last 20 years): 30
Author
Alonzo, Julie: 14
Tindal, Gerald: 14
Lai, Cheng-Fei: 8
Park, Bitnara Jasmine: 8
Irvin, P. Shawn: 6
Anderson, Daniel: 3
Liu, Kimy: 3
Samejima, Fumiko: 3
Burch, Glenda: 2
Dulaney, Chuck: 2
Jamgochian, Elisa: 2
Publication Type
Numerical/Quantitative Data: 51
Reports - Evaluative: 20
Reports - Research: 20
Reports - Descriptive: 9
Tests/Questionnaires: 5
Journal Articles: 3
Speeches/Meeting Papers: 3
Guides - General: 1
Guides - Non-Classroom: 1
Education Level
Elementary Education: 18
Secondary Education: 11
Middle Schools: 10
Higher Education: 7
Junior High Schools: 7
Elementary Secondary Education: 6
Grade 4: 6
Grade 5: 6
Postsecondary Education: 6
Grade 2: 5
Grade 3: 5
Location
North Carolina: 9
New Mexico: 3
Texas (Houston): 3
Oregon: 2
Canada: 1
Connecticut: 1
Florida: 1
New Hampshire: 1
Serbia: 1
What Works Clearinghouse Rating
Meets WWC Standards without Reservations: 2
Meets WWC Standards with or without Reservations: 2
Does not meet standards: 1
Kuchler, Kirstin; Finestack, Lizbeth – Teaching and Learning in Communication Sciences & Disorders, 2022
The use of multiple-choice testing is common at all levels of education. This study examined one type of multiple-choice testing: the Immediate Feedback Assessment Technique® (IF-AT®), which uses an answer-until-correct testing format. More than 300 undergraduate students in a speech-language-hearing sciences course used the IF-AT® to take…
Descriptors: Multiple Choice Tests, Feedback (Response), Student Evaluation, Evaluation Methods
Tomkowicz, Joanna; Kim, Dong-In; Wan, Ping – Online Submission, 2022
In this study we evaluated the stability of item parameters and student scores, using the pre-equated (pre-pandemic) parameters from Spring 2019 and post-equated (post-pandemic) parameters from Spring 2021 in two calibration and equating designs related to item parameter treatment: re-estimating all anchor parameters (Design 1) and holding the…
Descriptors: Equated Scores, Test Items, Evaluation Methods, Pandemics
Allen, Jeff – ACT, Inc., 2020
This brief summarizes five benefits of schools providing the PreACT® to all their students. The PreACT 8/9 (an earlier version of the PreACT test that is more appropriate for students in grades 8 and 9) and PreACT assessments are designed to provide students with an indication of their educational progress in the context of preparing for the ACT®…
Descriptors: College Entrance Examinations, Test Preparation, Educational Benefits, Multiple Choice Tests
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment), identifies students who struggle with comprehension, and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Alonzo, Julie; Anderson, Daniel – Behavioral Research and Teaching, 2018
In response to a request for additional analyses, in particular reporting confidence intervals around the results, we re-analyzed the data from prior studies. This supplementary report presents the results of the additional analyses addressing classification accuracy, reliability, and criterion-related validity evidence. For ease of reference, we…
Descriptors: Curriculum Based Assessment, Computation, Statistical Analysis, Classification
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
McSparrin-Gallagher, Brenda; Tang, Yun; Niemeier, Brian; Zoblotsky, Todd – Center for Research in Educational Policy (CREP), 2015
In August 2010, the Smithsonian Science Education Center (SSEC) received a grant of more than $25 million from the U.S. Department of Education's Investing in Innovation (i3) program for a five-year study to validate its Leadership Assistance for Science Education Reform (LASER) model in three very diverse regions of the United States: rural North…
Descriptors: Multiple Choice Tests, Science Tests, Elementary School Students, Middle School Students
Zoblotsky, Todd; Bertz, Christine; Gallagher, Brenda; Alberg, Marty – Center for Research in Educational Policy (CREP), 2017
In August 2010, the Smithsonian Science Education Center (SSEC) received a grant of more than $25 million from the U.S. Department of Education's Investing in Innovation (i3) program for a five-year study to validate its Leadership Assistance for Science Education Reform (LASER) model in three very diverse regions of the United States: rural North…
Descriptors: Multiple Choice Tests, Science Tests, Elementary School Students, Middle School Students
Chen, Haiwen H.; von Davier, Matthias; Yamamoto, Kentaro; Kong, Nan – ETS Research Report Series, 2015
One major issue with large-scale assessments is that the respondents might give no responses to many items, resulting in less accurate estimations of both assessed abilities and item parameters. This report studies how the types of items affect the item-level nonresponse rates and how different methods of treating item-level nonresponses have an…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students
Todd Zoblotsky; Christine Bertz; Brenda Gallagher; Marty Alberg – Center for Research in Educational Policy (CREP), 2016
In August 2010, the Smithsonian Science Education Center (SSEC) received a grant of more than $25 million from the U.S. Department of Education's Investing in Innovation (i3) program for a five-year study to validate its Leadership Assistance for Science Education Reform (LASER) model in three very diverse regions of the United States: rural North…
Descriptors: Multiple Choice Tests, Science Tests, Elementary School Students, Middle School Students
Moses, Tim; Deng, Weiling; Zhang, Yu-Li – Educational Testing Service, 2010
In the equating literature, a recurring concern is that equating functions that utilize a single anchor to account for examinee groups' nonequivalence are biased when the groups are extremely different and/or when the anchor only weakly measures what the tests measure. Several proposals have been made to address this equating bias by incorporating…
Descriptors: Equated Scores, Data Collection, Statistical Analysis, Differences
Lai, Cheng-Fei; Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we present the results of a reliability study of the second-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Descriptors: Reading Comprehension, Testing Programs, Statistical Analysis, Elementary School Students
Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Descriptors: Grade 3, Curriculum Based Assessment, Educational Testing, Testing Programs
Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Descriptors: Grade 5, Curriculum Based Assessment, Educational Testing, Testing Programs