Publication Date
In 2025: 8
Since 2024: 38
Since 2021 (last 5 years): 108
Since 2016 (last 10 years): 223
Since 2006 (last 20 years): 423
Descriptor
Response Style (Tests): 1398
Higher Education: 239
Test Validity: 213
Test Items: 191
Testing Problems: 175
Test Reliability: 172
College Students: 165
Test Construction: 165
Multiple Choice Tests: 160
Foreign Countries: 159
Item Analysis: 140
Author
Weiss, David J.: 12
Wise, Steven L.: 9
Bolt, Daniel M.: 7
Benson, Jeri: 6
Fiske, Donald W.: 6
Holden, Ronald R.: 6
Jackson, Douglas N.: 6
Adkins, Dorothy C.: 5
Birenbaum, Menucha: 5
Crocker, Linda: 5
Greve, Kevin W.: 5
Audience
Researchers: 58
Practitioners: 17
Teachers: 6
Administrators: 3
Counselors: 2
Students: 1
Location
Germany: 27
Canada: 21
Australia: 17
United States: 12
France: 10
South Korea: 10
United Kingdom: 10
China: 9
Denmark: 9
Italy: 9
Norway: 9
Laws, Policies, & Programs
Elementary and Secondary…: 1
Cannon, Edmund; Cipriani, Giam Pietro – Assessment & Evaluation in Higher Education, 2022
Student evaluations of teaching may be subject to halo effects, where answers to one question are contaminated by answers to the other questions. Quantifying halo effects is difficult since correlation between answers may be due to underlying correlation of the items being tested. We use a novel identification procedure to test for a halo effect…
Descriptors: Student Evaluation of Teacher Performance, Bias, Response Style (Tests), Foreign Countries
He, Qingping; Meadows, Michelle; Black, Beth – Research Papers in Education, 2022
A potential negative consequence of high-stakes testing is inappropriate test behaviour involving individuals and/or institutions. Inappropriate test behaviour and test collusion can result in aberrant response patterns and anomalous test scores and invalidate the intended interpretation and use of test results. A variety of statistical techniques…
Descriptors: Statistical Analysis, High Stakes Tests, Scores, Response Style (Tests)
Herwin, Herwin; Dahalan, Shakila Che – Pegem Journal of Education and Instruction, 2022
This study aims to analyze and describe the response patterns of school exam participants using the person-fit method. It is a quantitative study of a social science elementary school examination comprising 15 multiple-choice items and 137 participant answer sheets. Data collection techniques were carried out…
Descriptors: Response Style (Tests), Multiple Choice Tests, Emotional Response, Psychological Patterns
Leventhal, Brian C.; Gregg, Nikole; Ames, Allison J. – Measurement: Interdisciplinary Research and Perspectives, 2022
Response styles introduce construct-irrelevant variance as a result of respondents systematically responding to Likert-type items regardless of content. Methods to account for response styles through data analysis as well as approaches to mitigating the effects of response styles during data collection have been well-documented. Recent approaches…
Descriptors: Response Style (Tests), Item Response Theory, Test Items, Likert Scales
Jacobs, Laura; Loosveldt, Geert; Beullens, Koen – Field Methods, 2020
A growing body of literature points to the possibilities offered by postsurvey interviewer observations as a source of paradata to obtain insights into data quality. However, their utility in predicting actual behavior of respondents has attracted limited scholarly attention so far. Using data from Round 7 of the European Social Survey, we aim to…
Descriptors: Interviews, Accuracy, Observation, Response Style (Tests)
Esther Ulitzsch; Steffi Pohl; Lale Khorramdel; Ulf Kroehne; Matthias von Davier – Journal of Educational and Behavioral Statistics, 2024
Questionnaires are by far the most common tool for measuring noncognitive constructs in psychology and educational sciences. Response bias may pose an additional source of variation between respondents that threatens the validity of conclusions drawn from questionnaire data. We present a mixture modeling approach that leverages response time data from…
Descriptors: Item Response Theory, Response Style (Tests), Questionnaires, Secondary School Students
Faran, Yifat; Zanbar, Lea – International Journal of Social Research Methodology, 2019
The present study is the first to examine empirically whether required fields in online surveys impair reliability and response patterns, as participants forced to respond to all items may provide arbitrary answers. Two hundred and thirteen participants completed a survey consisting of six questionnaires testing personal and social issues and…
Descriptors: Online Surveys, Test Reliability, Response Style (Tests), Questionnaires
Clariana, Roy B.; Park, Eunsung – Educational Technology Research and Development, 2021
Cognitive and metacognitive processes during learning depend on accurate monitoring; this investigation examines the influence of immediate item-level knowledge of correct response feedback on cognition monitoring accuracy. In an optional end-of-course computer-based review lesson, participants (n = 68) were randomly assigned to groups to receive…
Descriptors: Feedback (Response), Cognitive Processes, Accuracy, Difficulty Level
Leventhal, Brian C.; Zigler, Christina K. – Measurement: Interdisciplinary Research and Perspectives, 2023
Survey score interpretations are often plagued by sources of construct-irrelevant variation, such as response styles. In this study, we propose the use of an IRTree Model to account for response styles by making use of self-report items and anchoring vignettes. Specifically, we investigate how the IRTree approach with anchoring vignettes compares…
Descriptors: Scores, Vignettes, Response Style (Tests), Item Response Theory
Zachary J. Roman; Patrick Schmidt; Jason M. Miller; Holger Brandt – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Careless and insufficient effort responding (C/IER) is a situation where participants respond to survey instruments without considering the item content. This phenomenon adds noise to the data, leading to erroneous inferences. There are multiple approaches to identifying and accounting for C/IER in survey settings; of these approaches, the best performing…
Descriptors: Structural Equation Models, Bayesian Statistics, Response Style (Tests), Robustness (Statistics)
Eirini M. Mitropoulou; Leonidas A. Zampetakis; Ioannis Tsaousis – Evaluation Review, 2024
Unfolding item response theory (IRT) models are important alternatives to dominance IRT models in describing the response processes on self-report tests. They are commonly used in personality measures, since they indicate potential differences in test score interpretation. This paper aims to gain a better insight into the structure of trait…
Descriptors: Foreign Countries, Adults, Item Response Theory, Personality Traits
Rocconi, Louis M.; Dumford, Amber D.; Butler, Brenna – Research in Higher Education, 2020
Researchers, assessment professionals, and faculty in higher education increasingly depend on survey data from students to make pivotal curricular and programmatic decisions. The surveys collecting these data often require students to judge frequency (e.g., how often), quantity (e.g., how much), or intensity (e.g., how strongly). The response…
Descriptors: Student Surveys, College Students, Rating Scales, Response Style (Tests)
Zuckerbraun, Sara; Allen, Rachael Welsh; Flanigan, Tim – Field Methods, 2020
Paired interviews are used to evaluate whether a questionnaire functions properly for both the target respondent and an alternate respondent (proxy). We developed a new application of this tool to evaluate whether a Patient Experience of Care Survey (PECS) for long-term care hospitals (LTCHs) and inpatient rehabilitation facilities (IRFs)…
Descriptors: Interviews, Patients, Experience, Surveys
Lubbe, Dirk; Schuster, Christof – Journal of Educational and Behavioral Statistics, 2020
Extreme response style is the tendency of individuals to prefer the extreme categories of a rating scale irrespective of item content. It has been shown repeatedly that individual response style differences affect the reliability and validity of item responses and should, therefore, be considered carefully. To account for extreme response style…
Descriptors: Response Style (Tests), Rating Scales, Item Response Theory, Models
Martin, Silke; Lechner, Clemens; Kleinert, Corinna; Rammstedt, Beatrice – International Journal of Social Research Methodology, 2021
Selective nonresponse can introduce bias in longitudinal surveys. The present study examines the role of cognitive skills (more specifically, literacy skills), as measured in large-scale assessment surveys, in selective nonresponse in longitudinal surveys. We assume that low-skilled respondents perceive the cognitive assessment as a higher burden…
Descriptors: Literacy, Response Style (Tests), Longitudinal Studies, Foreign Countries