Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 2
  Since 2016 (last 10 years): 4
  Since 2006 (last 20 years): 4
Source
  Grantee Submission: 4
Author
  DeBoer, George E.: 2
  Hardcastle, Joseph: 2
  Herrmann-Abell, Cari F.: 2
  Alison K. Billman: 1
  Amy Adair: 1
  Christopher J. Harris: 1
  Daniel Damelin: 1
  Gary Weiser: 1
  Janice Gobert: 1
  Joe Olsen: 1
  Lauren M. Brodsky: 1
Publication Type
  Speeches/Meeting Papers: 4
  Reports - Research: 3
  Reports - Descriptive: 1
Education Level
  Elementary Education: 2
  Elementary Secondary Education: 1
  Grade 4: 1
  Grade 5: 1
  Grade 6: 1
  Grade 7: 1
  Grade 8: 1
  Grade 9: 1
  Grade 10: 1
  Grade 11: 1
  Grade 12: 1
Joe Olsen; Amy Adair; Janice Gobert; Michael Sao Pedro; Mariel O'Brien – Grantee Submission, 2022
Many national science frameworks (e.g., Next Generation Science Standards) argue that developing mathematical modeling competencies is critical for students' deep understanding of science. However, science teachers may be unprepared to assess these competencies. We are addressing this need by developing virtual lab performance assessments that…
Descriptors: Mathematical Models, Intelligent Tutoring Systems, Performance Based Assessment, Data Collection
Gary Weiser; Alison K. Billman; Christopher J. Harris; Lauren M. Brodsky; Daniel Damelin – Grantee Submission, 2022
The "Framework" and NGSS bring to the forefront the role of language in doing science and in learning from doing science. Yet, most existing science assessments for elementary learners do not integrate or attend to aspects of scientific language and literacy that are essential components of science proficiency. Accordingly, there is a…
Descriptors: Standards, Language Role, Science Instruction, Science Tests
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2018
We compared students' performance on a paper-based test (PBT) and three computer-based tests (CBTs). The three computer-based tests used different test navigation and answer selection features, allowing us to examine how these features affect student performance. The study sample consisted of 9,698 fourth through twelfth grade students from across…
Descriptors: Evaluation Methods, Tests, Computer Assisted Testing, Scores
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation