Showing all 11 results
Peer reviewed
Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C. – Journal of Research in Science Teaching, 2016
Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…
Descriptors: Science Tests, Scoring, Automation, Validity
Peer reviewed
Liu, Ou Lydia; Brew, Chris; Blackmore, John; Gerard, Libby; Madhok, Jacquie; Linn, Marcia C. – Educational Measurement: Issues and Practice, 2014
Content-based automated scoring has been applied in a variety of science domains. However, many prior applications involved simplified scoring rubrics without considering rubrics representing multiple levels of understanding. This study tested a concept-based scoring tool for content-based scoring, c-rater™, for four science items with rubrics…
Descriptors: Science Tests, Test Items, Scoring, Automation
Peer reviewed
Lee, Hee-Sun; Pallant, Amy; Pryputniewicz, Sarah; Lord, Trudi; Mulholland, Matthew; Liu, Ou Lydia – Science Education, 2019
This paper describes HASbot, an automated text scoring and real-time feedback system designed to support student revision of scientific arguments. Students submit open-ended text responses to explain how their data support claims and how the limitations of their data affect the uncertainty of their explanations. HASbot automatically scores these…
Descriptors: Middle School Students, High School Students, Student Evaluation, Science Education
Peer reviewed
Mao, Liyang; Liu, Ou Lydia; Roohr, Katrina; Belur, Vinetha; Mulholland, Matthew; Lee, Hee-Sun; Pallant, Amy – Educational Assessment, 2018
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students' construction and revision of scientific arguments. The assessment is built upon automated scoring of students' arguments and provides feedback to students and teachers.…
Descriptors: Computer Assisted Testing, Science Tests, Scoring, Automation
Peer reviewed
Shen, Ji; Liu, Ou Lydia; Sung, Shannon – International Journal of Science Education, 2014
College science education needs to foster students' habit of mind beyond disciplinary constraints. However, little research has been devoted to assessing students' interdisciplinary understanding. To address this problem, we formed a team of experts from different disciplines to develop interdisciplinary assessments that target…
Descriptors: College Students, Interdisciplinary Approach, College Science, Science Tests
Peer reviewed
Lee, Hee-Sun; Liu, Ou Lydia; Pallant, Amy; Roohr, Katrina Crotts; Pryputniewicz, Sarah; Buck, Zoë E. – Journal of Research in Science Teaching, 2014
Though addressing sources of uncertainty is an important part of doing science, it has largely been neglected in assessing students' scientific argumentation. In this study, we initially defined a scientific argumentation construct in four structural elements consisting of claim, justification, uncertainty qualifier, and uncertainty…
Descriptors: Persuasive Discourse, Student Evaluation, High School Students, Science Tests
Peer reviewed
Liu, Ou Lydia; Lee, Hee-Sun; Linn, Marcia C. – Educational Assessment, 2011
Both multiple-choice and constructed-response items have known advantages and disadvantages in measuring scientific inquiry. In this article we explore the function of explanation multiple-choice (EMC) items and examine how EMC items differ from traditional multiple-choice and constructed-response items in measuring scientific reasoning. A group…
Descriptors: Science Tests, Multiple Choice Tests, Responses, Test Items
Peer reviewed
Liu, Ou Lydia; Ryoo, Kihyun; Linn, Marcia C.; Sato, Elissa; Svihla, Vanessa – International Journal of Science Education, 2015
Although researchers call for inquiry learning in science, science assessments rarely capture the impact of inquiry instruction. This paper reports on the development and validation of assessments designed to measure middle-school students' progress in gaining integrated understanding of energy while studying an inquiry-oriented curriculum. The…
Descriptors: Energy, Science Education, Psychometrics, Case Studies
Peer reviewed
Turkan, Sultan; Liu, Ou Lydia – International Journal of Science Education, 2012
The performance of English language learners (ELLs) has been a concern given the rapidly changing demographics in US K-12 education. This study aimed to examine whether students' English language status has an impact on their inquiry science performance. Differential item functioning (DIF) analysis was conducted with regard to ELL status on an…
Descriptors: Science Tests, English (Second Language), Second Language Learning, Test Bias
Peer reviewed
Liu, Ou Lydia; Lee, Hee-Sun; Linn, Marcia C. – Educational Assessment, 2010
To improve student science achievement in the United States we need inquiry-based instruction that promotes coherent understanding and assessments that are aligned with the instruction. Instead, current textbooks often offer fragmented ideas and most assessments only tap recall of details. In this study we implemented 10 inquiry-based science…
Descriptors: Inquiry, Active Learning, Science Achievement, Science Instruction
Peer reviewed
Liu, Ou Lydia; Lee, Hee-Sun; Hofstetter, Carolyn; Linn, Marcia C. – Educational Assessment, 2008
In response to the demand for sound science assessments, this article presents the development of a latent construct called knowledge integration as an effective measure of science inquiry. Knowledge integration assessments ask students to link, distinguish, evaluate, and organize their ideas about complex scientific topics. The article focuses on…
Descriptors: Standardized Tests, Scoring Rubrics, Psychometrics, Concept Mapping