Showing 1 to 15 of 17 results
Peer reviewed
Russell, Michael; Szendey, Olivia; Li, Zhushan – Educational Assessment, 2022
Recent research provides evidence that an intersectional approach to defining reference and focal groups results in a higher percentage of comparisons flagged for potential differential item functioning (DIF). The study presented here examined the generalizability of this pattern across methods for examining DIF. While the level of DIF detection differed among the four methods…
Descriptors: Comparative Analysis, Item Analysis, Test Items, Test Construction
Susan Kowalski; Megan Kuhfeld; Scott Peters; Gustave Robinson; Karyn Lewis – NWEA, 2024
The purpose of this technical appendix is to share detailed results and more fully describe the sample and methods used to produce the research brief, "COVID's Impact on Science Achievement: Trends from 2019 through 2024." We investigated three main research questions in this brief: 1) How did science achievement in 2021 and 2024 compare to…
Descriptors: COVID-19, Pandemics, Science Achievement, Trend Analysis
Peer reviewed
Shanmugam, S. Kanageswari Suppiah; Veloo, Arsaythamby; Md-Ali, Ruzlan – Diaspora, Indigenous, and Minority Education, 2021
This study examined the validity of a trilingual test as a test accommodation to assess Indigenous pupils' mathematical performance in Malaysia. The study employed two tests: a BM-only test with items written in Malay language (BM), and a trilingual test with items written in BM and English, plus an oral audio recording in their native Temiar…
Descriptors: Multilingualism, Testing Accommodations, Grade 5, Elementary School Students
Peer reviewed
Liu, Ming; Rus, Vasile; Liu, Li – IEEE Transactions on Learning Technologies, 2018
Automatic question generation can help teachers save the time needed to construct examination papers. Several approaches have been proposed to automatically generate multiple-choice questions for vocabulary assessment or grammar exercises. However, most of these studies focused on generating questions in English with a certain similarity…
Descriptors: Multiple Choice Tests, Regression (Statistics), Test Items, Natural Language Processing
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that measure student progress toward college and career readiness more accurately than previous assessments. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias
Peer reviewed
Yang, Kuay-Keng; Lin, Shu-Fen; Hong, Zuway-R; Lin, Huann-shyang – Creativity Research Journal, 2016
The purposes of this study were to (a) develop and validate instruments to assess elementary students' scientific creativity and science inquiry, (b) investigate the relationship between the two competencies, and (c) compare the two competencies among different grade level students. The scientific creativity test was composed of 7 open-ended items…
Descriptors: Elementary School Students, Elementary School Science, Creativity, Comparative Analysis
Steedle, Jeffrey; McBride, Malena; Johnson, Marc; Keng, Leslie – Partnership for Assessment of Readiness for College and Careers, 2016
The first operational administration of the Partnership for Assessment of Readiness for College and Careers (PARCC) took place during the 2014-2015 school year. In addition to the traditional paper-and-pencil format, the assessments were available for administration on a variety of electronic devices, including desktop computers, laptop computers,…
Descriptors: Computer Assisted Testing, Difficulty Level, Test Items, Scores
Peer reviewed
Yu, Fu-Yun; Chen, Yi-Jun – British Journal of Educational Technology, 2014
This study investigated the effects of online drill-and-practice activities using student-generated questions on academic performance and motivation as compared with online drill-and-practice using teacher-generated questions and no drill-and-practice in a student question-generation (SQG) learning context. A quasi-experimental research method was…
Descriptors: Drills (Practice), Academic Achievement, Student Motivation, Student Participation
Peer reviewed
Gattamorta, Karina A.; Penfield, Randall D. – Applied Measurement in Education, 2012
The study of measurement invariance in polytomous items that targets individual score levels is known as differential step functioning (DSF). The analysis of DSF requires the creation of a set of dichotomizations of the item response variable. There are two primary approaches for creating the set of dichotomizations to conduct a DSF analysis: the…
Descriptors: Measurement, Item Response Theory, Test Bias, Test Items
Peer reviewed
PDF on ERIC
Williams, Laura K.; Mendiburo, Maria; Hasselbring, Ted – Society for Research on Educational Effectiveness, 2013
A strong understanding of fractions is vital to later success in mathematics. However, research has consistently shown that fractions are among the most difficult mathematical concepts for elementary school students to master. To remedy this deficit, the authors began a project called Helping At-risk students Learn Fractions (HALF) in…
Descriptors: Mathematics Instruction, Mathematical Concepts, Elementary School Students, Concept Formation
McClellan, Catherine; Joe, Jilliam; Bassett, Katherine – National Network of State Teachers of the Year, 2016
As part of state transitions to college- and career-ready (CCR) standards, including the Common Core State Standards in more than 40 states (NGA & CCSSO, 2010), states are for the first time administering new summative assessments aligned to those standards and aiming for a higher bar in assessment quality. For a majority of states, this means…
Descriptors: Grade 5, Comparative Analysis, Common Core State Standards, Summative Evaluation
McClellan, Catherine; Joe, Jilliam; Bassett, Katherine – National Network of State Teachers of the Year, 2015
"The Right Trajectory" brings to the forefront an often-overlooked voice in the debate about new state assessments developed in consortia: that of the best teachers in the country. This research suggests, despite challenges still to overcome, that these front-line experts believe that the new consortia tests are an improvement on the…
Descriptors: Comparative Analysis, Common Core State Standards, Summative Evaluation, Grade 5
Peer reviewed
Flowers, Claudia; Kim, Do-Hong; Lewis, Preston; Davis, Violeta Carmen – Journal of Special Education Technology, 2011
This study examined the academic performance and preference of students with disabilities for two types of test administration conditions, computer-based testing (CBT) and pencil-and-paper testing (PPT). Data from a large-scale assessment program were used to examine differences between CBT and PPT academic performance for third to eleventh grade…
Descriptors: Testing, Test Items, Effect Size, Computer Assisted Testing
Peer reviewed
Finch, Holmes; Barton, Karen; Meyer, Patrick – Educational Assessment, 2009
The No Child Left Behind Act resulted in an increased reliance on large-scale standardized tests to assess the progress of individual students as well as schools. In addition, emphasis was placed on including all students in the testing programs, including those with disabilities. As a result, the role of testing accommodations has become more…
Descriptors: Test Bias, Testing Accommodations, Standardized Tests, Mathematics Tests
Peer reviewed
Stubbe, Tobias C. – Educational Research and Evaluation, 2011
The challenge inherent in cross-national research of providing instruments in different languages measuring the same construct is well known. But even instruments in a single language may be biased towards certain countries or regions due to local linguistic specificities. Consequently, it may be appropriate to use different versions of an…
Descriptors: Test Items, International Studies, Foreign Countries, German