Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 7
Since 2016 (last 10 years): 15
Since 2006 (last 20 years): 21
Descriptor
Computer Assisted Testing: 21
Test Items: 21
Grade 4: 14
Mathematics Tests: 11
Test Construction: 11
Grade 5: 9
Item Response Theory: 7
Achievement Tests: 6
Elementary School Students: 6
Grade 3: 6
Grade 8: 6
Author
Alonzo, Julie: 3
Anderson, Daniel: 3
Park, Bitnara Jasmine: 3
Tindal, Gerald: 3
Aaron McVay: 1
Agard, Christopher: 1
Albano, Anthony D.: 1
Ali, Usama: 1
Ann Kennedy, Editor: 1
Araya, Roberto: 1
Ayfer Sayin: 1
Publication Type
Reports - Research: 15
Journal Articles: 9
Numerical/Quantitative Data: 5
Reports - Descriptive: 2
Reports - Evaluative: 2
Books: 1
Collected Works - General: 1
Dissertations/Theses -…: 1
Education Level
Elementary Education: 21
Intermediate Grades: 21
Middle Schools: 15
Grade 4: 14
Secondary Education: 11
Grade 5: 10
Junior High Schools: 10
Grade 8: 7
Early Childhood Education: 6
Grade 3: 6
Grade 7: 6
Location
Arkansas: 1
Colorado: 1
District of Columbia: 1
Illinois: 1
Maryland: 1
Massachusetts: 1
Mississippi: 1
Nebraska: 1
New Jersey: 1
New Mexico: 1
Ohio: 1
Laws, Policies, & Programs
Assessments and Surveys
National Assessment of Educational Progress: 4
Progress in International Reading Literacy Study: 2
Trends in International Mathematics and Science Study: 2
International Association for the Evaluation of Educational Achievement: 1
Measures of Academic Progress: 1
Urrutia, Felipe; Araya, Roberto – Journal of Educational Computing Research, 2024
Written answers to open-ended questions can have a greater long-term effect on learning than multiple-choice questions. However, it is critical that teachers review the answers immediately and ask students to redo those that are incoherent. This can be a difficult and time-consuming task for teachers. A possible solution is to automate the detection…
Descriptors: Elementary School Students, Grade 4, Elementary School Mathematics, Mathematics Tests
Aaron McVay – ProQuest LLC, 2021
As assessments move toward computerized testing and continuous testing availability, the need for rapid assembly of forms is increasing. The objective of this study was to investigate variability in assembled forms through the lens of first- and second-order equity properties of equating, by examining three factors and their interactions. Two…
Descriptors: Automation, Computer Assisted Testing, Test Items, Reaction Time
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The fundamental research method involved following the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
Matthias von Davier, Editor; Ann Kennedy, Editor – International Association for the Evaluation of Educational Achievement, 2024
The Progress in International Reading Literacy Study (PIRLS) has been monitoring international trends in reading achievement among fourth-grade students for 25 years. As a critical point in a student's education, the fourth year of schooling establishes the foundations of literacy, with reading becoming increasingly central to learning across all…
Descriptors: Reading Achievement, Foreign Countries, Grade 4, International Assessment
Jiang, Yang; Gong, Tao; Saldivia, Luis E.; Cayton-Hodges, Gabrielle; Agard, Christopher – Large-scale Assessments in Education, 2021
In 2017, the mathematics assessments that are part of the National Assessment of Educational Progress (NAEP) program underwent a transformation shifting the administration from paper-and-pencil formats to digitally based assessments (DBA). This shift introduced new interactive item types that bring rich process data and tremendous opportunities to…
Descriptors: Data Use, Learning Analytics, Test Items, Measurement
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
Yu, Fu-Yun – Interactive Learning Environments, 2019
In view of contribution-based pedagogy and observational learning theory, students' perceived uses, preferences, usage, and selection considerations with regard to citing peers' work were examined in an online learning environment targeting student-constructed tests. Data were collected from 84 fifth-grade students who participated in online…
Descriptors: Computer Assisted Testing, Student Developed Materials, Peer Evaluation, Test Items
Solano-Flores, Guillermo; Shyyan, Vitaliy; Chía, Magda; Kachchaf, Rachel – International Multilingual Research Journal, 2023
We examined semiotic exchangeability in pop-up glossary translations and illustrations used as supports for second language learners (SLLs) in computer-administered mathematics tests. In a sample of 516 mathematics items, Grades 3-8 and 11, from a large-scale assessment program in the US, test developers identified terms that could be translated…
Descriptors: Mathematics Tests, Testing Accommodations, Test Items, Semiotics
O'Malley, Fran; Norton, Scott – American Institutes for Research, 2022
This paper provides the National Center for Education Statistics (NCES), National Assessment Governing Board (NAGB), and the National Assessment of Educational Progress (NAEP) community with information that may help maintain the validity and utility of the NAEP assessments for civics and U.S. history as revisions are planned to the NAEP…
Descriptors: National Competency Tests, United States History, Test Validity, Governing Boards
Fishbein, Bethany; Martin, Michael O.; Mullis, Ina V. S.; Foy, Pierre – Large-scale Assessments in Education, 2018
Background: TIMSS 2019 is the first assessment in the TIMSS transition to a computer-based assessment system, called eTIMSS. The TIMSS 2019 Item Equivalence Study was conducted in advance of the field test in 2017 to examine the potential for mode effects on the psychometric behavior of the TIMSS mathematics and science trend items induced by the…
Descriptors: Mathematics Achievement, Science Achievement, Mathematics Tests, Elementary Secondary Education
Li, Sylvia; Meyer, Patrick – NWEA, 2019
This simulation study examines the measurement precision, item exposure rates, and the depth of the MAP® Growth™ item pools under various grade-level restrictions. Unlike most summative assessments, MAP Growth allows examinees to see items from any grade level, regardless of the examinee's actual grade level. It does not limit the test to items…
Descriptors: Achievement Tests, Item Banks, Test Items, Instructional Program Divisions
Nebraska Department of Education, 2019
This technical report documents the processes and procedures implemented to support the Spring 2019 Nebraska Student-Centered Assessment System (NSCAS) General Summative English Language Arts (ELA), Mathematics, and Science assessments by NWEA® under the supervision of the Nebraska Department of Education (NDE). The technical report shows how the…
Descriptors: English, Language Arts, Summative Evaluation, Mathematics Tests
Wei, Hua; Lin, Jie – International Journal of Testing, 2015
Out-of-level testing refers to the practice of assessing a student with a test that is intended for students at a higher or lower grade level. Although the appropriateness of out-of-level testing for accountability purposes has been questioned by educators and policymakers, incorporating out-of-level items in formative assessments for accurate…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Instructional Program Divisions
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although international large-scale assessment (ILSA) of education, pioneered by the International Association for the Evaluation of Educational Achievement, is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that, compared to previous assessments, more accurately measure student progress toward college and career readiness. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias