Publication Date
In 2025: 1
Since 2024: 5
Since 2021 (last 5 years): 18
Since 2016 (last 10 years): 33
Since 2006 (last 20 years): 48
Descriptor
Response Style (Tests): 140
Test Validity: 37
Item Response Theory: 29
Item Analysis: 28
Higher Education: 25
Test Items: 25
Rating Scales: 23
Test Reliability: 23
Testing Problems: 20
Personality Measures: 19
Correlation: 18
Source
Educational and Psychological Measurement: 140
Author
Fiske, Donald W.: 5
Dawis, Rene V.: 4
Huggins-Manley, Anne Corinne: 3
Schriesheim, Chester A.: 3
Ace, Merle E.: 2
Ames, Allison J.: 2
Bart, William M.: 2
Bolt, Daniel M.: 2
Cheng, Ying: 2
Huang, Hung-Yu: 2
Huba, G. J.: 2
Publication Type
Journal Articles: 80
Reports - Research: 72
Reports - Evaluative: 7
Reports - Descriptive: 3
Speeches/Meeting Papers: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 9
Postsecondary Education: 8
Secondary Education: 4
Elementary Education: 2
Early Childhood Education: 1
Elementary Secondary Education: 1
Grade 2: 1
Grade 3: 1
High Schools: 1
Primary Education: 1
Location
Canada: 3
China: 2
Germany: 2
Australia: 1
California: 1
Florida: 1
Guyana: 1
Hong Kong: 1
Ireland: 1
Israel: 1
Japan: 1
Nazari, Sanaz; Leite, Walter L.; Huggins-Manley, A. Corinne – Educational and Psychological Measurement, 2023
Social desirability bias (SDB) has been a major concern in educational and psychological assessments when measuring latent variables because it has the potential to introduce measurement error and bias in assessments. Person-fit indices can detect bias in the form of misfitted response vectors. The objective of this study was to compare the…
Descriptors: Social Desirability, Bias, Indexes, Goodness of Fit
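The abstract above is truncated before it lists the indices that were compared. For orientation, one widely used person-fit index is the standardized log-likelihood statistic l_z; the sketch below states it for dichotomous IRT responses and is meant only as a representative example, not as the set of indices examined in the study.

```latex
% l_z person-fit statistic (representative example; not necessarily among the
% indices compared in the study). u_i is the scored response to item i and
% P_i(\theta) the model-implied probability of a keyed response at trait level \theta.
\begin{align*}
  \ell_0 &= \sum_{i=1}^{n}\Bigl[u_i \ln P_i(\theta) + (1-u_i)\ln\bigl(1-P_i(\theta)\bigr)\Bigr],\\
  E(\ell_0) &= \sum_{i=1}^{n}\Bigl[P_i(\theta)\ln P_i(\theta) + \bigl(1-P_i(\theta)\bigr)\ln\bigl(1-P_i(\theta)\bigr)\Bigr],\\
  \operatorname{Var}(\ell_0) &= \sum_{i=1}^{n} P_i(\theta)\bigl(1-P_i(\theta)\bigr)\left[\ln\frac{P_i(\theta)}{1-P_i(\theta)}\right]^{2},\\
  \ell_z &= \frac{\ell_0 - E(\ell_0)}{\sqrt{\operatorname{Var}(\ell_0)}}.
\end{align*}
```

Large negative values of l_z flag response vectors that are unlikely under the fitted model, which is how "misfitted response vectors" are typically operationalized.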
Martijn Schoenmakers; Jesper Tijmstra; Jeroen Vermunt; Maria Bolsinova – Educational and Psychological Measurement, 2024
Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of the item content, has frequently been found to decrease the validity of Likert-type questionnaire results. For this reason, various item response theory (IRT) models have been proposed to model ERS and correct for it. Comparisons of these…
Descriptors: Item Response Theory, Response Style (Tests), Models, Likert Scales
Hung-Yu Huang – Educational and Psychological Measurement, 2025
The use of discrete categorical formats to assess psychological traits has a long-standing tradition that is deeply embedded in item response theory models. The increasing prevalence and endorsement of computer- or web-based testing has led to greater focus on continuous response formats, which offer numerous advantages in both respondent…
Descriptors: Response Style (Tests), Psychological Characteristics, Item Response Theory, Test Reliability
Schroeders, Ulrich; Schmidt, Christoph; Gnambs, Timo – Educational and Psychological Measurement, 2022
Careless responding is a bias in survey responses that disregards the actual item content, constituting a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses, such as probing questions that directly assess test-taking behavior (e.g., bogus…
Descriptors: Response Style (Tests), Surveys, Artificial Intelligence, Identification
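The abstract is cut off before the detection approaches themselves are described. As a point of reference for what detecting aberrant responses can look like in practice, here is a minimal Python sketch of two conventional screening indices, the longstring index and intra-individual response variability (IRV); the function names, the 1-5 scale, and the example data are illustrative assumptions and do not reproduce the authors' method.

```python
import numpy as np

def longstring(responses):
    """Length of the longest run of identical consecutive answers.

    A very long run (the same category chosen for many items in a row)
    is a common screening signal for careless responding.
    """
    longest, current = 1, 1
    for prev, curr in zip(responses, responses[1:]):
        current = current + 1 if curr == prev else 1
        longest = max(longest, current)
    return longest

def irv(responses):
    """Intra-individual response variability: the within-person SD.

    Values near zero indicate the respondent barely varied their answers,
    another common (and imperfect) careless-responding signal.
    """
    return float(np.std(responses))

# Illustrative use on one respondent's 1-5 Likert answers (made-up data).
answers = [3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3]
print(longstring(answers), irv(answers))
```

In practice such indices are used together with other evidence (e.g., completion time or probing items) rather than as stand-alone cutoffs.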
Rebekka Kupffer; Susanne Frick; Eunike Wetzel – Educational and Psychological Measurement, 2024
The multidimensional forced-choice (MFC) format is an alternative to rating scales in which participants rank items according to how well the items describe them. Currently, little is known about how to detect careless responding in MFC data. The aim of this study was to adapt a number of indices used for rating scales to the MFC format and…
Descriptors: Measurement Techniques, Alternative Assessment, Rating Scales, Questionnaires
Ö. Emre C. Alagöz; Thorsten Meiser – Educational and Psychological Measurement, 2024
To improve the validity of self-report measures, researchers should control for response style (RS) effects, which can be achieved with IRTree models. A traditional IRTree model considers a response as a combination of distinct decision-making processes, where the substantive trait affects the decision on response direction, while decisions about…
Descriptors: Item Response Theory, Validity, Self Evaluation (Individuals), Decision Making
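The "traditional IRTree model" referred to above can be written out for a four-category rating item as two sequential pseudo-item decisions: a direction node driven by the substantive trait and an extremity node driven by a response-style trait. The equations below are a generic sketch of that decomposition under assumed 2PL node models; they do not reproduce the extension proposed in the article.

```latex
% Generic two-node IRTree for a 4-point item, categories 1 = strongly disagree
% ... 4 = strongly agree. Node d: response direction, driven by the substantive
% trait theta; node e: extremity, driven by a response-style trait eta.
% Discriminations and intercepts (a, beta, gamma, delta) are illustrative.
\begin{align*}
  P(d_{ij}=1) &= \frac{\exp(a_j\theta_i + \beta_j)}{1+\exp(a_j\theta_i + \beta_j)}, &
  P(e_{ij}=1) &= \frac{\exp(\gamma_j\eta_i + \delta_j)}{1+\exp(\gamma_j\eta_i + \delta_j)},\\[4pt]
  P(X_{ij}=1) &= \bigl(1-P(d_{ij}=1)\bigr)\,P(e_{ij}=1), &
  P(X_{ij}=2) &= \bigl(1-P(d_{ij}=1)\bigr)\,\bigl(1-P(e_{ij}=1)\bigr),\\
  P(X_{ij}=3) &= P(d_{ij}=1)\,\bigl(1-P(e_{ij}=1)\bigr), &
  P(X_{ij}=4) &= P(d_{ij}=1)\,P(e_{ij}=1).
\end{align*}
```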
Cui, Zhongmin – Educational and Psychological Measurement, 2020
In test security analyses, answer copying, collusion, and the use of a shared brain dump site can be detected by checking similarity between item response strings. The similarity, however, can possibly be contaminated by aberrant data resulting from careless responding or rapid guessing. For example, some test-takers may answer by repeating a…
Descriptors: Repetition, Cheating, Response Style (Tests), Pattern Recognition
Rios, Joseph A.; Soland, James – Educational and Psychological Measurement, 2021
As low-stakes testing contexts increase, low test-taking effort may serve as a serious validity threat. One common solution to this problem is to identify noneffortful responses and treat them as missing during parameter estimation via the effort-moderated item response theory (EM-IRT) model. Although this model has been shown to outperform…
Descriptors: Computation, Accuracy, Item Response Theory, Response Style (Tests)
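For context, the effort-moderated IRT (EM-IRT) model referred to above is usually written so that a response classified as noneffortful (by a response-time threshold) contributes only a constant guessing probability to the likelihood. The formulation below is the standard one, with the threshold rule and symbols as commonly used rather than taken from this article.

```latex
% Effort-moderated IRT (standard formulation, a sketch). Delta_{ij} = 1 if the
% response time exceeds item j's rapid-guessing threshold tau_j (solution
% behavior) and 0 otherwise; k_j is the number of response options.
\[
  P(X_{ij}=1 \mid \theta_i) \;=\;
  \Delta_{ij}\, P_j(\theta_i) \;+\; (1-\Delta_{ij})\,\frac{1}{k_j}.
\]
```

Because the second term does not depend on the trait or the item parameters, flagged responses carry no information in estimation, which is what treating them as missing amounts to.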
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
Hong, Maxwell; Steedle, Jeffrey T.; Cheng, Ying – Educational and Psychological Measurement, 2020
Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, IER's impact on the psychometric properties of test scores, and preprocessing procedures used to detect IER. However, there is a gap in the literature in terms of practical advice…
Descriptors: Responses, Psychometrics, Test Validity, Test Reliability
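The abstract stops short of the practical advice itself. As one small, concrete example of the kind of preprocessing step it alludes to, the Python sketch below flags rapid responses against per-item time thresholds and summarizes each examinee's response-time effort (RTE); the thresholds, cutoff, and variable names are illustrative assumptions, not recommendations from the article.

```python
import numpy as np

def rte_scores(rt, thresholds):
    """Response-time effort: share of items answered with solution behavior.

    rt: (persons x items) matrix of response times in seconds.
    thresholds: per-item rapid-guessing thresholds (illustrative; in practice
    they are derived from each item's response-time distribution).
    """
    rt = np.asarray(rt, dtype=float)
    solution_behavior = rt >= np.asarray(thresholds, dtype=float)  # True = effortful
    return solution_behavior.mean(axis=1)

# Illustrative use: flag examinees whose RTE falls below a chosen cutoff.
times = np.array([[12.0, 30.5, 2.1, 25.0],
                  [ 1.4,  2.0, 1.8,  2.5]])
rte = rte_scores(times, thresholds=[5.0, 5.0, 5.0, 5.0])
flagged = rte < 0.9  # the 0.9 cutoff is an assumption, not from the article
print(rte, flagged)
```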
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
The forced-choice (FC) item formats used for noncognitive tests typically present a set of response options that measure different traits and instruct respondents to make judgments among these options in terms of their preference, in order to control the response biases that are commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
D'Urso, E. Damiano; Tijmstra, Jesper; Vermunt, Jeroen K.; De Roover, Kim – Educational and Psychological Measurement, 2023
Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurements of individuals' latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most-used method to evaluate these…
Descriptors: Factor Analysis, Measurement Techniques, Self Evaluation (Individuals), Psychological Patterns
Ulitzsch, Esther; von Davier, Matthias; Pohl, Steffi – Educational and Psychological Measurement, 2020
So far, modeling approaches for not-reached items have considered one single underlying process. However, missing values at the end of a test can occur for a variety of reasons. On the one hand, examinees may not reach the end of a test due to time limits and lack of working speed. On the other hand, examinees may not attempt all items and quit…
Descriptors: Item Response Theory, Test Items, Response Style (Tests), Computer Assisted Testing
Liu, Yue; Cheng, Ying; Liu, Hongyun – Educational and Psychological Measurement, 2020
The responses of non-effortful test-takers may have serious consequences, as non-effortful responses can impair model calibration and latent trait inferences. This article introduces a mixture model, using both response accuracy and response time information, to help differentiate non-effortful and effortful individuals, and to improve item…
Descriptors: Item Response Theory, Test Wiseness, Response Style (Tests), Reaction Time
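In generic form, a mixture model of the kind described above writes the joint density of a scored response and its response time as a two-class mixture over effortful and non-effortful behavior. The expression below is a schematic sketch under assumed notation, not the specific parameterization introduced in the article.

```latex
% Schematic two-class mixture for a response x_{ij} and its response time t_{ij}.
% pi_i: probability that person i responds effortfully; f_E / f_N are the
% accuracy models under effortful / non-effortful behavior (e.g., an IRT model
% vs. random guessing), and g_E / g_N the corresponding response-time densities.
\[
  p(x_{ij}, t_{ij}) \;=\;
  \pi_i\, f_E\!\bigl(x_{ij}\mid\theta_i\bigr)\, g_E\!\bigl(t_{ij}\bigr)
  \;+\; (1-\pi_i)\, f_N\!\bigl(x_{ij}\bigr)\, g_N\!\bigl(t_{ij}\bigr).
\]
```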
Rios, Joseph A. – Educational and Psychological Measurement, 2021
Low test-taking effort as a validity threat is common when examinees perceive an assessment context to have minimal personal value. Prior research has shown that in such contexts, subgroups may differ in their effort, which raises two concerns when making subgroup mean comparisons. First, it is unclear how differential effort could influence…
Descriptors: Response Style (Tests), Statistical Analysis, Measurement, Comparative Analysis