Tom Benton – Practical Assessment, Research & Evaluation, 2025
This paper proposes an extension of linear equating that may be useful in one of two fairly common assessment scenarios. One is where different students have taken different combinations of test forms. This might occur, for example, where students have some free choice over the exam papers they take within a particular qualification. In this…
Descriptors: Equated Scores, Test Format, Test Items, Computation
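As background to the entry above: standard linear equating (the method Benton extends) maps a form-X score onto the form-Y scale by matching means and standard deviations. A minimal Python sketch, with invented summary statistics:

import numpy as np

def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    # Match the first two moments: y = mean_y + (sd_y / sd_x) * (x - mean_x)
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# Invented summary statistics, purely for illustration.
print(linear_equate(np.array([12.0, 20.0, 28.0]),
                    mean_x=20.0, sd_x=5.0, mean_y=22.0, sd_y=6.0))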
Jiajing Huang – ProQuest LLC, 2022
The nonequivalent-groups anchor-test (NEAT) data-collection design is commonly used in large-scale assessments. Under this design, different test groups take different test forms. Each test form has its own unique items and all test forms share a set of common items. If item response theory (IRT) models are applied to analyze the test data, the…
Descriptors: Item Response Theory, Test Format, Test Items, Test Construction
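For context: under a NEAT design, the separate IRT calibrations of the two forms must be placed on a common scale through the anchor items. A minimal sketch of mean-sigma linking (one standard approach, not necessarily the dissertation's), with made-up anchor difficulties:

import numpy as np

def mean_sigma_link(b_anchor_ref, b_anchor_new):
    # Constants A, B such that theta_ref = A * theta_new + B,
    # estimated from the anchor items' difficulty estimates.
    A = np.std(b_anchor_ref, ddof=1) / np.std(b_anchor_new, ddof=1)
    B = np.mean(b_anchor_ref) - A * np.mean(b_anchor_new)
    return A, B

A, B = mean_sigma_link(np.array([-1.2, -0.4, 0.3, 1.1]),
                       np.array([-1.0, -0.1, 0.6, 1.5]))
# Rescale all new-form parameters: b* = A*b + B, a* = a/A.
print(A, B)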
Inga Laukaityte; Marie Wiberg – Practical Assessment, Research & Evaluation, 2024
The overall aim was to examine effects of differences in group ability and features of the anchor test form on equating bias and the standard error of equating (SEE) using both real and simulated data. Chained kernel equating, poststratification kernel equating, and circle-arc equating were studied. A college admissions test with four different…
Descriptors: Ability Grouping, Test Items, College Entrance Examinations, High Stakes Tests
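For readers unfamiliar with the outcome measures: equating bias and the SEE are the systematic and random components of error in an estimated equating function, usually assessed over replications. A toy Monte Carlo sketch using linear equating for brevity (the paper studies kernel and circle-arc methods), with invented population values:

import numpy as np

rng = np.random.default_rng(7)
mu_x, sd_x, mu_y, sd_y, x0 = 20.0, 5.0, 22.0, 6.0, 25.0
true_eq = mu_y + sd_y / sd_x * (x0 - mu_x)   # population equating at x0

est = []
for _ in range(2000):
    x = rng.normal(mu_x, sd_x, 300)          # form-X sample
    y = rng.normal(mu_y, sd_y, 300)          # form-Y sample
    est.append(y.mean() + y.std(ddof=1) / x.std(ddof=1) * (x0 - x.mean()))

est = np.array(est)
print("bias:", est.mean() - true_eq)         # systematic error
print("SEE: ", est.std(ddof=1))              # random error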
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2021
In this study, whether item position effects lead to DIF when different test booklets are used was investigated. To do this, Lord's chi-square and Raju's unsigned area methods were applied with the 3PL model, both with and without item purification. When the performance of the methods was compared, it was revealed that…
Descriptors: Item Response Theory, Test Bias, Test Items, Comparative Analysis
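Raju's unsigned area quantifies DIF as the area between the reference and focal groups' item characteristic curves. A numerical sketch for a 3PL item with hypothetical parameters on a common scale (equal guessing across groups, as the method assumes):

import numpy as np

def p3pl(theta, a, b, c):
    # 3PL item characteristic curve.
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

theta = np.linspace(-6, 6, 2001)
p_ref = p3pl(theta, a=1.2, b=0.0, c=0.2)     # reference group
p_foc = p3pl(theta, a=1.2, b=0.5, c=0.2)     # focal group

# Unsigned area: integrate |P_ref - P_foc| over theta (Riemann sum).
area = np.sum(np.abs(p_ref - p_foc)) * (theta[1] - theta[0])
print(area)   # with equal a and c, approaches (1 - c) * |b_foc - b_ref| = 0.4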
Tingir, Seyfullah – ProQuest LLC, 2019
Educators use various statistical techniques to explain relationships between latent and observable variables. One way to model these relationships is to use Bayesian networks as a scoring model. However, adjusting the conditional probability table (CPT) parameters to fit a set of observations is still a challenge when using Bayesian networks. A…
Descriptors: Bayesian Statistics, Statistical Analysis, Scoring, Probability
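For orientation: with fully observed data, CPT parameters have simple maximum-likelihood estimates obtained by counting; the harder case this dissertation concerns arises when parents such as latent skills are unobserved. A toy sketch of the observed-data case for a two-node network, with simulated data:

import numpy as np

rng = np.random.default_rng(3)
# Binary skill S -> binary response R; generate 500 observations.
S = rng.integers(0, 2, 500)
R = (rng.random(500) < np.where(S == 1, 0.85, 0.30)).astype(int)

# Smoothed ML estimate of P(R=1 | S=s): counts plus one pseudo-count.
cpt = np.empty(2)
for s in (0, 1):
    cpt[s] = (np.sum((S == s) & (R == 1)) + 1) / (np.sum(S == s) + 2)
print(cpt)   # close to the generating values [0.30, 0.85]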
Debeer, Dries; Ali, Usama S.; van Rijn, Peter W. – Journal of Educational Measurement, 2017
Test assembly is the process of selecting items from an item pool to form one or more new test forms. Often new test forms are constructed to be parallel with an existing (or an ideal) test. Within the context of item response theory, the test information function (TIF) or the test characteristic curve (TCC) is commonly used as statistical…
Descriptors: Test Format, Test Construction, Statistical Analysis, Comparative Analysis
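To make the statistical target concrete: under the 2PL model the TIF at theta is the sum of a_i^2 * P_i * (1 - P_i) over the selected items, and parallel-form assembly tries to match a target TIF. A greedy sketch (a simple heuristic, not the authors' method), with a simulated item pool:

import numpy as np

def tif(theta, a, b):
    # 2PL test information: sum of a^2 * P * (1 - P) over items.
    p = 1 / (1 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
    return (a[None, :] ** 2 * p * (1 - p)).sum(axis=1)

rng = np.random.default_rng(11)
a_pool = rng.uniform(0.6, 2.0, 60)
b_pool = rng.normal(0.0, 1.0, 60)
theta = np.linspace(-2.0, 2.0, 9)
target = tif(theta, a_pool[:20], b_pool[:20])   # TIF of an existing form

chosen = []
for _ in range(20):   # greedily add the item that best matches the target
    errs = [np.inf if j in chosen else
            np.sum((tif(theta, a_pool[chosen + [j]],
                        b_pool[chosen + [j]]) - target) ** 2)
            for j in range(60)]
    chosen.append(int(np.argmin(errs)))
print(sorted(chosen))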
Kieftenbeld, Vincent; Boyer, Michelle – Applied Measurement in Education, 2017
Automated scoring systems are typically evaluated by comparing the performance of a single automated rater item-by-item to human raters. This presents a challenge when the performance of multiple raters needs to be compared across multiple items. Rankings could depend on specifics of the ranking procedure; observed differences could be due to…
Descriptors: Automation, Scoring, Comparative Analysis, Test Items
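As an illustration of the item-by-item comparison involved, a common agreement metric for automated versus human scores is quadratically weighted kappa. A sketch with simulated 0-4 scores for two hypothetical automated raters (scikit-learn's cohen_kappa_score supports quadratic weights):

import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
human = rng.integers(0, 5, 200)                             # 0-4 human scores
rater_a = np.clip(human + rng.integers(-1, 2, 200), 0, 4)   # closer rater
rater_b = np.clip(human + rng.integers(-2, 3, 200), 0, 4)   # noisier rater

for name, r in (("A", rater_a), ("B", rater_b)):
    print(name, round(cohen_kappa_score(human, r, weights="quadratic"), 3))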
Menold, Natalja; Raykov, Tenko – Educational and Psychological Measurement, 2016
This article examines the possible dependency of composite reliability on presentation format of the elements of a multi-item measuring instrument. Using empirical data and a recent method for interval estimation of group differences in reliability, we demonstrate that the reliability of an instrument need not be the same when polarity of the…
Descriptors: Test Reliability, Test Format, Test Items, Differences
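For reference, a standard composite reliability coefficient (McDonald's omega; the article may use a related variant) is computed from a one-factor model's loadings \lambda_i and unique variances \psi_i:

\omega = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\psi_i}

Reversing the polarity of some items can change the estimated loadings, which is one route by which reliability can differ across presentation formats.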
Sinharay, Sandip – Grantee Submission, 2018
Tatsuoka (1984) suggested several extended caution indices and their standardized versions that have been used as person-fit statistics by researchers such as Drasgow, Levine, and McLaughlin (1987), Glas and Meijer (2003), and Molenaar and Hoijtink (1990). However, these indices are only defined for tests with dichotomous items. This paper extends…
Descriptors: Test Format, Goodness of Fit, Item Response Theory, Error Patterns
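For context, a widely used standardized person-fit statistic for dichotomous items (related to, but distinct from, Tatsuoka's extended caution indices) is l_z. A minimal sketch with made-up responses and model probabilities:

import numpy as np

def lz(u, p):
    # Standardized log-likelihood person-fit statistic for dichotomous
    # items: u is the 0/1 response vector, p the model-implied success
    # probabilities at the examinee's estimated theta.
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - e) / np.sqrt(v)

u = np.array([1, 1, 0, 1, 0, 0, 1, 0])
p = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
print(lz(u, p))   # large negative values flag aberrant response patterns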
Morgan, Grant B.; Moore, Courtney A.; Floyd, Harlee S. – Journal of Psychoeducational Assessment, 2018
Although content validity (how well each item of an instrument represents the construct being measured) is foundational in the development of an instrument, statistical validity is also important to the decisions that are made based on the instrument. The primary purpose of this study is to demonstrate how simulation studies can be used to assist…
Descriptors: Simulation, Decision Making, Test Construction, Validity
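As a flavor of the approach: simulation studies generate responses under known conditions and check how often instrument-based decisions match the truth. A toy sketch with entirely invented settings (10 items, cut score of 6):

import numpy as np

rng = np.random.default_rng(5)
n = 5000
true_status = rng.random(n) < 0.4               # 40% truly "at risk"
p_correct = np.where(true_status, 0.45, 0.75)   # per-item success rates
scores = rng.binomial(10, p_correct)            # simulated total scores
flagged = scores < 6                            # decision rule
print("classification accuracy:", np.mean(flagged == true_status))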
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony – Educational and Psychological Measurement, 2018
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
Descriptors: Test Items, Test Format, Correlation, Construct Validity
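For orientation, the correlated uniqueness model mentioned above keeps a single substantive factor and absorbs wording effects into residual covariances among the negatively worded items (a sketch of the general form, not the authors' exact specification):

x_i = \lambda_i \eta + \varepsilon_i, \qquad \operatorname{Cov}(\varepsilon_i, \varepsilon_j) \neq 0 \ \text{for negatively worded pairs } (i, j)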
Tengberg, Michael – Language Testing, 2017
Reading comprehension tests are often assumed to measure the same, or at least similar, constructs. Yet, reading is not a single but a multidimensional form of processing, which means that variations in terms of reading material and item design may emphasize one aspect of the construct at the cost of another. The educational systems in Denmark,…
Descriptors: Foreign Countries, National Competency Tests, Reading Tests, Comparative Analysis
Aksakalli, Ayhan; Turgut, Umit; Salar, Riza – Journal of Education and Practice, 2016
The purpose of this study is to investigate whether students are more successful on abstract or illustrated test questions. To this end, the questions on an abstract test were changed into a visual format, and these tests were administered every three days to a total of 240 students at six middle schools located in the Erzurum city center and…
Descriptors: Comparative Analysis, Scores, Middle School Students, Grade 8
Jonick, Christine; Schneider, Jennifer; Boylan, Daniel – Accounting Education, 2017
The purpose of the research is to examine the effect of different response formats on student performance on introductory accounting exam questions. The study analyzes 1104 accounting students' responses to quantitative questions presented in two formats: multiple-choice and fill-in. Findings indicate that response format impacts student…
Descriptors: Introductory Courses, Accounting, Test Format, Multiple Choice Tests
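A minimal sketch of the kind of format comparison involved, using a chi-square test of independence on invented correct/incorrect counts (not the study's data):

import numpy as np
from scipy import stats

#                 correct  incorrect
table = np.array([[430,    122],      # multiple-choice
                  [352,    200]])     # fill-in

chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, df={dof}, p={p:.4f}")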
Bendulo, Hermabeth O.; Tibus, Erlinda D.; Bande, Rhodora A.; Oyzon, Voltaire Q.; Milla, Norberto E.; Macalinao, Myrna L. – International Journal of Evaluation and Research in Education, 2017
Testing or evaluation in an educational context is primarily used to measure and authenticate learners' academic readiness, learning advancement, skill acquisition, or instructional needs. This study tried to determine whether varied combinations of option arrangements and letter cases in a multiple-choice test (MCT)…
Descriptors: Test Format, Multiple Choice Tests, Test Construction, Eye Movements