Publication Date
In 2025: 8
Since 2024: 38
Since 2021 (last 5 years): 108
Since 2016 (last 10 years): 223
Since 2006 (last 20 years): 423
Descriptor
Response Style (Tests): 1398
Higher Education: 239
Test Validity: 213
Test Items: 191
Testing Problems: 175
Test Reliability: 172
College Students: 165
Test Construction: 165
Multiple Choice Tests: 160
Foreign Countries: 159
Item Analysis: 140
Author
Weiss, David J.: 12
Wise, Steven L.: 9
Bolt, Daniel M.: 7
Benson, Jeri: 6
Fiske, Donald W.: 6
Holden, Ronald R.: 6
Jackson, Douglas N.: 6
Adkins, Dorothy C.: 5
Birenbaum, Menucha: 5
Crocker, Linda: 5
Greve, Kevin W.: 5
Audience
Researchers: 58
Practitioners: 17
Teachers: 6
Administrators: 3
Counselors: 2
Students: 1
Location
Germany: 27
Canada: 21
Australia: 17
United States: 12
France: 10
South Korea: 10
United Kingdom: 10
China: 9
Denmark: 9
Italy: 9
Norway: 9
Laws, Policies, & Programs
Elementary and Secondary…: 1
Feinberg, Richard; Jurich, Daniel; Wise, Steven L. – Applied Measurement in Education, 2021
Previous research on rapid responding tends to implicitly treat examinees as either engaging in solution behavior or purely guessing. However, particularly in a high-stakes testing context, examinees who perceive that they are running out of time may consider the remaining items for less time than necessary to provide a fully informed response,…
Descriptors: High Stakes Tests, Reaction Time, Response Style (Tests), Licensing Examinations (Professions)
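Rapid-guess flags of the kind this entry relies on are typically derived from response-time thresholds. Below is a minimal sketch assuming a simple normative rule (a fixed fraction of each item's typical response time); the function name and the 10% fraction are illustrative choices, not the authors' procedure.

```python
import numpy as np

def flag_rapid_responses(rt, threshold_frac=0.10):
    """Flag responses as rapid guesses when response time falls below a
    fraction of the item's median time (an illustrative normative-threshold
    rule; actual studies use more refined thresholds)."""
    rt = np.asarray(rt, dtype=float)                        # (examinees, items)
    thresholds = threshold_frac * np.nanmedian(rt, axis=0)  # per-item cutoffs
    return rt < thresholds                                  # True = rapid guess

# Toy usage: 4 examinees x 3 items, response times in seconds.
times = np.array([[35.0, 48.0,  2.0],
                  [40.0, 52.0, 41.0],
                  [ 3.0,  4.0,  3.5],
                  [38.0, 50.0, 44.0]])
print(flag_rapid_responses(times))
```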
Lounek, Vítezslav; Ryška, Radim – Research in Comparative and International Education, 2023
Ensuring comparability of Likert-style items across different countries is a widespread challenge for authors of large-scale international surveys. Using data from the EUROGRADUATE Pilot Survey, this study employs a series of latent class analyses to explore which response patterns emerge from self-assessment of acquired and required skills of…
Descriptors: Self Evaluation (Individuals), Surveys, College Graduates, Multivariate Analysis
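Latent class analyses of the kind used here group respondents by their profile of answers rather than by a single score. As a toy sketch, the following EM routine fits a two-class model to binary indicators (say, 1 = respondent chose a scale endpoint on that item); lca_em is a hypothetical helper, far simpler than the study's actual specification.

```python
import numpy as np

def lca_em(X, n_classes=2, n_iter=200, seed=0):
    """Bare-bones EM for a latent class model with binary indicators.
    Returns class sizes, per-class response probabilities, and posteriors."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)            # class proportions
    p = rng.uniform(0.25, 0.75, size=(n_classes, m))    # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior class membership for each respondent
        loglik = (X[:, None, :] * np.log(p) +
                  (1 - X[:, None, :]) * np.log(1 - p)).sum(axis=2)
        post = np.log(pi) + loglik
        post = np.exp(post - post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update proportions and conditional probabilities
        pi = post.mean(axis=0)
        p = ((post.T @ X) / post.sum(axis=0)[:, None]).clip(1e-6, 1 - 1e-6)
    return pi, p, post

# Toy data: 6 respondents x 4 items, 1 = endpoint category chosen.
X = np.array([[1, 1, 1, 1], [1, 1, 0, 1], [0, 0, 0, 0],
              [0, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 0]])
sizes, profiles, posteriors = lca_em(X)
print(sizes.round(2))
print(profiles.round(2))
```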
Ulitzsch, Esther; Lüdtke, Oliver; Robitzsch, Alexander – Educational Measurement: Issues and Practice, 2023
Country differences in response styles (RS) may jeopardize cross-country comparability of Likert-type scales. When adjusting for rather than investigating RS is the primary goal, it seems advantageous to impose minimal assumptions on RS structures and leverage information from multiple scales for RS measurement. Using PISA 2015 background…
Descriptors: Response Style (Tests), Comparative Analysis, Achievement Tests, Foreign Countries
Ge, Yuan – ProQuest LLC, 2022
My dissertation research explored responder behaviors (e.g., response styles, carelessness, and misconceptions) that compromise psychometric quality and affect the interpretation and use of assessment results. Identifying these behaviors can help researchers understand and minimize their potentially construct-irrelevant…
Descriptors: Test Wiseness, Response Style (Tests), Item Response Theory, Psychometrics
Zou, Tongtong; Bolt, Daniel M. – Measurement: Interdisciplinary Research and Perspectives, 2023
Person misfit and person reliability indices in item response theory (IRT) can play an important role in evaluating the validity of a test or survey instrument at the respondent level. Prior empirical comparisons of these indices have used binary item response data and suggest that the two types of indices return very similar results.…
Descriptors: Item Response Theory, Rating Scales, Response Style (Tests), Measurement
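For binary data, the standardized log-likelihood statistic l_z is the classic person-fit index in this family. A sketch of the textbook formula follows, assuming model-implied success probabilities are already available from a fitted IRT model; it is not necessarily the exact pair of indices the authors compare.

```python
import numpy as np

def lz_person_fit(u, p):
    """Standardized log-likelihood person-fit statistic l_z for one examinee:
    u = 0/1 responses, p = model-implied success probabilities (e.g., a 2PL
    evaluated at the person's theta estimate). Large negative values
    suggest person misfit."""
    u, p = np.asarray(u, float), np.asarray(p, float)
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))   # observed loglik
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))    # its expectation
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)     # its variance
    return (l0 - e) / np.sqrt(v)

# Aberrant pattern: misses easy items (high p) but answers hard ones (low p).
print(lz_person_fit(u=[0, 0, 1, 1], p=[0.9, 0.85, 0.2, 0.15]))
```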
Colombi, Roberto; Giordano, Sabrina; Tutz, Gerhard – Journal of Educational and Behavioral Statistics, 2021
A mixture of logit models is proposed that discriminates between responses to rating questions that are affected by a tendency to prefer middle or extremes of the scale regardless of the content of the item (response styles) and purely content-driven preferences. Explanatory variables are used to characterize the content-driven way of answering as…
Descriptors: Rating Scales, Response Style (Tests), Test Items, Models
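In generic form, mixtures of this kind write each rating probability as a weighted combination of a content-blind style component and a content-driven component. The display below is a schematic two-component version, not the authors' exact parameterization.

```latex
% Schematic mixture for an ordinal response Y with K categories:
% pi is the probability of a style-driven answer; x are covariates
% characterizing the content-driven component.
P(Y = k \mid x) = \pi \, P_{\mathrm{style}}(Y = k)
                + (1 - \pi) \, P_{\mathrm{content}}(Y = k \mid x),
\qquad k = 1, \dots, K
```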
Rios, Joseph A.; Soland, James – Educational and Psychological Measurement, 2021
As low-stakes testing contexts become more common, low test-taking effort may pose a serious validity threat. One common solution to this problem is to identify noneffortful responses and treat them as missing during parameter estimation via the effort-moderated item response theory (EM-IRT) model. Although this model has been shown to outperform…
Descriptors: Computation, Accuracy, Item Response Theory, Response Style (Tests)
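Under the EM-IRT model, responses flagged as noneffortful simply drop out of the likelihood. A minimal sketch for one examinee under a 2PL, assuming rapid-guess flags are already available; em_irt_loglik is an illustrative name, not the model's reference implementation.

```python
import numpy as np

def em_irt_loglik(theta, u, a, b, effortful):
    """Log-likelihood of one examinee under an effort-moderated 2PL:
    responses with effortful == False are treated as missing and
    contribute nothing to the likelihood."""
    u, a, b = (np.asarray(x, float) for x in (u, a, b))
    keep = np.asarray(effortful, bool)
    p = 1.0 / (1.0 + np.exp(-a[keep] * (theta - b[keep])))  # 2PL curve
    return np.sum(u[keep] * np.log(p) + (1 - u[keep]) * np.log(1 - p))

# Toy usage: the third response was a rapid guess, so it is ignored.
print(em_irt_loglik(theta=0.5, u=[1, 0, 1], a=[1.2, 0.8, 1.5],
                    b=[-0.5, 0.3, 2.0], effortful=[True, True, False]))
```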
Höhne, Jan Karem; Krebs, Dagmar – International Journal of Social Research Methodology, 2021
Measuring respondents' attitudes is a crucial task in numerous social science disciplines. A popular way to measure attitudes is to use survey questions with rating scales. However, research has shown that the design of rating scales, in particular, can have a profound impact on respondents' answer behavior. While some scale design aspects, such as…
Descriptors: Attitude Measures, Rating Scales, Telephone Surveys, Response Style (Tests)
Silber, Henning; Roßmann, Joss; Gummer, Tobias – Field Methods, 2022
Attention checks detect inattentiveness by instructing respondents to perform a specific task. However, while respondents may correctly process the task, they may choose not to comply with the instructions. We investigated the issue of noncompliance in attention checks in two web surveys. In Study 1, we measured respondents' attitudes toward…
Descriptors: Compliance (Psychology), Attention, Task Analysis, Online Surveys
Papanastasiou, Elena C.; Stylianou-Georgiou, Agni – Assessment in Education: Principles, Policy & Practice, 2022
A frequently used indicator of student performance is the test score. However, although tests are designed to assess students' knowledge or skills, other factors, such as test-taking strategies, can also affect test results. Therefore, the purpose of this study was to model the interrelationships among test-taking strategy instruction…
Descriptors: Test Wiseness, Metacognition, Multiple Choice Tests, Response Style (Tests)
J. A. Bialo; H. Li – Educational Assessment, 2024
This study evaluated differential item functioning (DIF) in achievement motivation items before and after using anchoring vignettes as a statistical tool to account for group differences in response styles across gender and ethnicity. We applied the nonparametric scoring of the vignettes to motivation items from the 2015 Programme for…
Descriptors: Test Bias, Student Motivation, Achievement Tests, Secondary School Students
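Nonparametric vignette scoring recodes each self-rating relative to the respondent's own ordered vignette ratings, so that group differences in scale use cancel out. The sketch below follows the spirit of King et al.'s (2004) nonparametric approach; handling of ties and order violations is simplified.

```python
def vignette_score(self_rating, vignette_ratings):
    """Recode a self-rating onto 1..(2J+1) relative to J vignette ratings:
    odd scores fall between vignettes, even scores tie with one."""
    score = 1
    for z in sorted(vignette_ratings):
        if self_rating > z:
            score += 2        # strictly above this vignette
        elif self_rating == z:
            score += 1        # tied with this vignette
            break
    return score

# Self-rating 4 with vignettes rated 2 and 5 lands between them -> 3.
print(vignette_score(4, [2, 5]))
```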
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
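IRTree models typically re-express each rating as a sequence of binary pseudo-items, one per sub-decision. Below is a sketch of one common three-node decomposition of a 5-point scale (midpoint, direction, extremity), in the style of Böckenholt's response trees; the article's exact tree may differ.

```python
import numpy as np

def irtree_pseudo_items(y):
    """Decompose 5-point ratings into three pseudo-items:
    midpoint (y == 3), direction (agree side), extremity (endpoint).
    NaN marks nodes that a response never reaches."""
    y = np.asarray(y, float)
    mid = (y == 3).astype(float)                          # node 1: midpoint?
    direction = np.where(y == 3, np.nan, (y > 3) * 1.0)   # node 2: agree side?
    extreme = np.where(y == 3, np.nan,
                       np.isin(y, (1, 5)) * 1.0)          # node 3: endpoint?
    return np.column_stack([mid, direction, extreme])

# Ratings 1..5 -> (midpoint, direction, extremity) pseudo-items.
print(irtree_pseudo_items([1, 2, 3, 4, 5]))
```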
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
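At the within-subtest level, adaptation of this kind usually means administering the unanswered item with maximum Fisher information at the current ability estimate. A sketch of that standard rule under a 2PL follows; the between-subtest level of the proposed battery is not shown.

```python
import numpy as np

def next_item(theta, a, b, administered):
    """Return the index of the unadministered item with maximum 2PL
    Fisher information a^2 * p * (1 - p) at the current theta estimate."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    info = a ** 2 * p * (1 - p)
    info[list(administered)] = -np.inf   # exclude items already given
    return int(np.argmax(info))

# Toy pool of four items; item 1 has already been administered.
a = np.array([1.0, 1.5, 0.7, 2.0])
b = np.array([-1.0, 0.2, 0.8, 0.1])
print(next_item(theta=0.0, a=a, b=b, administered={1}))
```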
Bulut, Hatice Cigdem – International Journal of Assessment Tools in Education, 2021
Several studies have been published on disengaged test respondents, and others have analyzed disengaged survey respondents separately. For many large-scale assessments, students answer questionnaire and test items in succession. This study examines the percentage of students who continuously engage in disengaged responding behaviors across…
Descriptors: Reaction Time, Response Style (Tests), Foreign Countries, International Assessment
Courey, Karyssa A.; Lee, Michael D. – AERA Open, 2021
Student evaluations of teaching are widely used to assess instructors and courses. Using a model-based approach and Bayesian methods, we examine how the direction of the scale, labels on scales, and the number of options affect the ratings. We conduct a within-participants experiment in which respondents evaluate instructors and lectures using…
Descriptors: Student Evaluation of Teacher Performance, Rating Scales, Response Style (Tests), College Students