Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 6 |
Since 2006 (last 20 years) | 8 |
Descriptor
Statistical Analysis | 8 |
Test Format | 8 |
Grade 8 | 7 |
Test Items | 6 |
Foreign Countries | 4 |
Item Response Theory | 4 |
Comparative Analysis | 3 |
Middle School Students | 3 |
Science Tests | 3 |
Test Reliability | 3 |
Test Validity | 3 |
Author
Aksakalli, Ayhan | 1 |
Boone, William J. | 1 |
Chang, Wanchen | 1 |
Christoph, Simon | 1 |
Conoyer, Sarah J. | 1 |
Dedrick, Robert F. | 1 |
Dodd, Barbara G. | 1 |
Ferron, John M. | 1 |
Ford, Jeremy W. | 1 |
Hosp, John L. | 1 |
Härtig, Hendrik | 1 |
Publication Type
Journal Articles | 8 |
Reports - Research | 8 |
Tests/Questionnaires | 1 |
Education Level
Grade 8 | 8 |
Elementary Education | 5 |
Junior High Schools | 5 |
Middle Schools | 5 |
Secondary Education | 4 |
Elementary Secondary Education | 2 |
Grade 4 | 2 |
Grade 7 | 2 |
Grade 12 | 1 |
Grade 3 | 1 |
Grade 9 | 1 |
Location
Turkey | 2 |
Germany | 1 |
Pennsylvania | 1 |
Assessments and Surveys
Trends in International… | 2 |
National Assessment of… | 1 |
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2021
In this study, whether item position effects lead to DIF when different test booklets are used was investigated. To do this, Lord's chi-square and Raju's unsigned area methods were used with the 3PL model, both with and without item purification. When the performance of the methods was compared, it was revealed that…
Descriptors: Item Response Theory, Test Bias, Test Items, Comparative Analysis
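A minimal sketch of the area-based DIF idea referenced in this abstract, assuming hypothetical 3PL item parameters and a numerical approximation rather than the closed-form expression: Raju's unsigned area is the area between the reference- and focal-group item characteristic curves.

```python
# Sketch with hypothetical item parameters: approximate Raju's unsigned-area
# DIF statistic as the area between two 3PL item characteristic curves
# evaluated over a bounded ability grid.
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))

def unsigned_area(a_ref, b_ref, a_foc, b_foc, c, lo=-4.0, hi=4.0, n=2001):
    """Numerical approximation of the unsigned area between the two ICCs.
    The guessing parameter c is assumed equal across groups, as Raju's
    area measures require; otherwise the area is unbounded."""
    theta = np.linspace(lo, hi, n)
    gap = np.abs(p_3pl(theta, a_ref, b_ref, c) - p_3pl(theta, a_foc, b_foc, c))
    return np.trapz(gap, theta)

# Hypothetical item that is slightly harder for the focal group.
print(unsigned_area(a_ref=1.2, b_ref=0.0, a_foc=1.2, b_foc=0.4, c=0.2))
```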
Ford, Jeremy W.; Conoyer, Sarah J.; Lembke, Erica S.; Smith, R. Alex; Hosp, John L. – Assessment for Effective Intervention, 2018
In the present study, two types of curriculum-based measurement (CBM) tools in science, Vocabulary Matching (VM) and Statement Verification for Science (SV-S), a modified Sentence Verification Technique, were compared. Specifically, this study aimed to determine whether the format of information presented (i.e., SV-S vs. VM) produces differences…
Descriptors: Curriculum Based Assessment, Evaluation Methods, Measurement Techniques, Comparative Analysis
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony – Educational and Psychological Measurement, 2018
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
Descriptors: Test Items, Test Format, Correlation, Construct Validity
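A rough illustration of the wording effect described above, with all values hypothetical and not taken from the study: a shared method factor on reverse-scored, negatively worded items lowers the correlation between the two halves of a scale, which is the kind of systematic bias that correlated-uniqueness and method-factor models are meant to absorb.

```python
# Hypothetical simulation: a wording (method) factor shared only by the
# negatively worded items attenuates the correlation between the positively
# and negatively worded composites of the same construct.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
trait = rng.normal(size=n)        # substantive construct
method = rng.normal(size=n)       # wording factor, negative items only

# Three positively and three negatively worded items (already reverse-scored).
pos = 0.7 * trait[:, None] + rng.normal(scale=0.6, size=(n, 3))
neg = 0.7 * trait[:, None] + 0.5 * method[:, None] + rng.normal(scale=0.6, size=(n, 3))

r = np.corrcoef(pos.mean(axis=1), neg.mean(axis=1))[0, 1]
print(f"composite correlation with a wording factor present: {r:.2f}")
```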
Aksakalli, Ayhan; Turgut, Umit; Salar, Riza – Journal of Education and Practice, 2016
The purpose of this study is to investigate whether students are more successful on abstract or illustrated test questions. To this end, the questions on an abstract test were changed into a visual format, and these tests were administered every three days to a total of 240 students at six middle schools located in the Erzurum city center and…
Descriptors: Comparative Analysis, Scores, Middle School Students, Grade 8
Schwichow, Martin; Christoph, Simon; Boone, William J.; Härtig, Hendrik – International Journal of Science Education, 2016
The so-called control-of-variables strategy (CVS) incorporates the important scientific reasoning skills of designing controlled experiments and interpreting experimental outcomes. As CVS is a prominent component of science standards, appropriate assessment instruments are required to measure these scientific reasoning skills and to evaluate the…
Descriptors: Thinking Skills, Science Instruction, Science Experiments, Science Tests
Öztürk-Gübes, Nese; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2016
The purpose of this study was to examine the impact of dimensionality, common-item set format, and different scale linking methods on preserving the equity property with mixed-format test equating. Item response theory (IRT) true-score equating (TSE) and IRT observed-score equating (OSE) methods were used under the common-item nonequivalent groups design…
Descriptors: Test Format, Item Response Theory, True Scores, Equated Scores
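A minimal sketch of the IRT true-score equating step referenced above, assuming hypothetical 2PL item parameters that have already been linked to a common scale: invert Form X's test characteristic curve at a number-correct score, then read off Form Y's curve at that ability.

```python
# Sketch of IRT true-score equating with hypothetical, already-linked
# 2PL parameters for two short forms.
import numpy as np
from scipy.optimize import brentq

def tcc(theta, a, b):
    """Test characteristic curve: expected number-correct score under a 2PL."""
    return np.sum(1.0 / (1.0 + np.exp(-a * (theta - b))))

# Hypothetical item parameters for Form X and Form Y.
aX, bX = np.array([1.0, 1.2, 0.8]), np.array([-0.5, 0.0, 0.7])
aY, bY = np.array([0.9, 1.1, 1.0]), np.array([-0.3, 0.2, 0.5])

def equate_true_score(x):
    """Map a Form X number-correct score x to its Form Y equivalent."""
    theta = brentq(lambda t: tcc(t, aX, bX) - x, -6.0, 6.0)
    return tcc(theta, aY, bY)

print(equate_true_score(1.5))
```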
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G. – Applied Psychological Measurement, 2012
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
Descriptors: Item Response Theory, Models, Selection, Criteria
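A brief sketch of the information-criterion comparison such a model-selection study relies on, using assumed (not reported) log-likelihoods and parameter counts for two candidate model combinations:

```python
# Hypothetical comparison of two mixed-format IRT model combinations
# with two common selection criteria, AIC and BIC.
import math

def aic(loglik, n_params):
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_examinees):
    return -2.0 * loglik + n_params * math.log(n_examinees)

# Assumed fits: (label, maximized log-likelihood, parameter count).
fits = [("3PL + GPCM", -10450.3, 95), ("2PL + GRM", -10470.8, 80)]
n = 1000
for label, ll, k in fits:
    print(f"{label}: AIC={aic(ll, k):.1f}  BIC={bic(ll, k, n):.1f}")
```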
Zebehazy, Kim T.; Zigmond, Naomi; Zimmerman, George J. – Journal of Visual Impairment & Blindness, 2012
Introduction: This study investigated the use of accommodations and the performance of students with visual impairments and severe cognitive disabilities on the Pennsylvania Alternate System of Assessment (PASA), an alternate performance-based assessment. Methods: Differences in test scores on the most basic level (level A) of the PASA of 286…
Descriptors: Test Items, Visual Impairments, Alternative Assessment, Special Needs Students