Publication Type: Numerical/Quantitative Data (100); Reports - Research (100); Speeches/Meeting Papers (16); Journal Articles (7); Tests/Questionnaires (7); Reports - Descriptive (1)
Audience: Researchers (3)
Showing 1 to 15 of 100 results
Tomkowicz, Joanna; Kim, Dong-In; Wan, Ping – Online Submission, 2022
In this study, we evaluated the stability of item parameters and student scores, using the pre-equated (pre-pandemic) parameters from Spring 2019 and post-equated (post-pandemic) parameters from Spring 2021 in two calibration and equating designs related to item parameter treatment: re-estimating all anchor parameters (Design 1) and holding the…
Descriptors: Equated Scores, Test Items, Evaluation Methods, Pandemics
Peer reviewed; full-text PDF available on ERIC
Wang, Yan; Murphy, Kevin B. – National Center for Education Statistics, 2020
In 2018, the National Center for Education Statistics (NCES) administered two assessments--the National Assessment of Educational Progress (NAEP) Technology and Engineering Literacy (TEL) assessment and the International Computer and Information Literacy Study (ICILS)--to two separate nationally representative samples of 8th-grade students in the…
Descriptors: National Competency Tests, International Assessment, Computer Literacy, Information Literacy
Susan Kowalski; Megan Kuhfeld; Scott Peters; Gustave Robinson; Karyn Lewis – NWEA, 2024
The purpose of this technical appendix is to share detailed results and more fully describe the sample and methods used to produce the research brief, "COVID's Impact on Science Achievement: Trends from 2019 through 2024." We investigated three main research questions in this brief: 1) How did science achievement in 2021 and 2024 compare to…
Descriptors: COVID-19, Pandemics, Science Achievement, Trend Analysis
Peer reviewed; full-text PDF available on ERIC
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Peer reviewed; full-text PDF available on ERIC
Mark L. Davison; David J. Weiss; Ozge Ersan; Joseph N. DeWeese; Gina Biancarosa; Patrick C. Kennedy – Grantee Submission, 2021
MOCCA is an online assessment of inferential reading comprehension for students in 3rd through 6th grades. It can be used to identify good readers and, for struggling readers, to identify those who overly rely on either a Paraphrasing process or an Elaborating process when their comprehension is incorrect. Here, a propensity to over-rely on…
Descriptors: Reading Tests, Computer Assisted Testing, Reading Comprehension, Elementary School Students
Weeks, Jonathan; Baron, Patricia – Educational Testing Service, 2021
The current project, Exploring Math Education Relations by Analyzing Large Data Sets (EMERALDS) II, is an attempt to identify specific Common Core State Standards procedural, conceptual, and problem-solving competencies in earlier grades that best predict success in algebraic areas in later grades. The data for this study include two cohorts of…
Descriptors: Mathematics Education, Common Core State Standards, Problem Solving, Mathematics Tests
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Fraillon, Julian, Ed.; Ainley, John, Ed.; Schulz, Wolfram, Ed.; Friedman, Tim, Ed.; Duckworth, Daniel, Ed. – International Association for the Evaluation of Educational Achievement, 2020
IEA's International Computer and Information Literacy Study (ICILS) 2018 investigated how well students are prepared for study, work, and life in a digital world. ICILS 2018 measured international differences in students' computer and information literacy (CIL): their ability to use computers to investigate, create, participate, and communicate at…
Descriptors: International Assessment, Computer Literacy, Information Literacy, Computer Assisted Testing
Bramley, Tom – Cambridge Assessment, 2018
The aim of the research reported here was to estimate the accuracy of grade boundaries (cut-scores) obtained by applying the 'similar items method' described in Bramley & Wilson (2016). In this method, experts identify items on the current version of a test that are sufficiently similar to items on previous versions for them to be…
Descriptors: Accuracy, Cutting Scores, Test Items, Item Analysis
Peer reviewed; full-text PDF available on ERIC
Dahlke, Katie; Yang, Rui; Martínez, Carmen; Chavez, Suzette; Martin, Alejandra; Hawkinson, Laura; Shields, Joseph; Garland, Marshall; Carle, Jill – Regional Educational Laboratory Southwest, 2017
The New Mexico Public Education Department developed the Kindergarten Observation Tool (KOT) as a multidimensional observational measure of students' knowledge and skills at kindergarten entry. The primary purpose of the KOT is to inform instruction, so that kindergarten teachers can use the information about their students' knowledge and skills…
Descriptors: Test Validity, Observation, Measures (Individuals), Kindergarten
Smarter Balanced Assessment Consortium, 2016
The goal of this study was to gather comprehensive evidence about the alignment of the Smarter Balanced summative assessments to the Common Core State Standards (CCSS). Alignment of the Smarter Balanced summative assessments to the CCSS is a critical piece of evidence regarding the validity of inferences that students, teachers, and policy makers can…
Descriptors: Alignment (Education), Summative Evaluation, Common Core State Standards, Test Content
Ferrara, Steve; Steedle, Jeffrey; Kinsman, Amy – Partnership for Assessment of Readiness for College and Careers, 2015
We report results from the following three analyses of PARCC [Partnership for Assessment of Readiness for College and Careers] cognitive complexity measures, based on 2014 field test item and task development and field test data. We conducted classification and regression tree analyses using 2014 PARCC field test data to do the following: (1)…
Descriptors: Cognitive Processes, Difficulty Level, Test Items, Mathematics Tests
Nakamura, Pooja; de Hoop, Thomas – American Institutes for Research, 2014
Most of the world is multilingual--multilingual at the national level (policies), at the community and family level (practices), and at the individual level (cognitive)--and each of these has implications for teaching and learning. Yet, at present, most reading decisions are not based on empirical research on how children learn to read in…
Descriptors: Foreign Countries, Multilingualism, Reading Skills, Reading Tests
Thacker, Arthur A.; Dickinson, Emily R.; Bynum, Bethany H.; Wen, Yao; Smith, Erin; Sinclair, Andrea L.; Deatz, Richard C.; Wise, Lauress L. – Partnership for Assessment of Readiness for College and Careers, 2015
The Partnership for Assessment of Readiness for College and Careers (PARCC) field tests during the spring of 2014 provided an opportunity to investigate the quality of the items, tasks, and associated stimuli. HumRRO conducted several research studies summarized in this report. Quality of test items is integral to the "Theory of Action"…
Descriptors: Achievement Tests, Test Items, Common Core State Standards, Difficulty Level
Peer reviewed; full-text PDF available on ERIC
Chen, Haiwen H.; von Davier, Matthias; Yamamoto, Kentaro; Kong, Nan – ETS Research Report Series, 2015
One major issue with large-scale assessments is that respondents may leave many items unanswered, resulting in less accurate estimates of both assessed abilities and item parameters. This report studies how item types affect item-level nonresponse rates and how different methods of treating item-level nonresponses have an…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students