Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 8
Since 2006 (last 20 years): 20
Descriptor
Response Style (Tests): 35
Models: 11
Item Response Theory: 10
Test Items: 8
Social Desirability: 6
Higher Education: 5
Psychometrics: 5
Computation: 4
Error of Measurement: 4
Individual Differences: 4
Reaction Time: 4
Author
Eid, Michael: 2
Johnson, Timothy R.: 2
Allen, Thomas J.: 1
Andersen, Nico: 1
Anderson, John R.: 1
Anderson, Richard Ivan: 1
Barham, Mary Ann: 1
Bavelier, Daphne: 1
Bengs, Daniel: 1
Berger, Moritz: 1
Berk, Ronald A.: 1
Publication Type
Reports - Descriptive: 35
Journal Articles: 26
Speeches/Meeting Papers: 3
Guides - Non-Classroom: 2
Tests/Questionnaires: 2
Collected Works - Serial: 1
Information Analyses: 1
Opinion Papers: 1
Audience
Researchers: 4
Practitioners: 2
Teachers: 2
Location
Canada: 1
Fiji: 1
Japan: 1
North America: 1
Wisconsin: 1
Assessments and Surveys
Eysenck Personality Inventory: 1
National Assessment of…: 1
Henninger, Mirka – Journal of Educational Measurement, 2021
Item Response Theory models with varying thresholds are essential tools to account for unknown types of response tendencies in rating data. However, in order to separate constructs to be measured and response tendencies, specific constraints have to be imposed on varying thresholds and their interrelations. In this article, a multidimensional…
Descriptors: Response Style (Tests), Item Response Theory, Models, Computation
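The varying-threshold approach described above builds on models such as the partial credit model, where category probabilities follow from cumulative sums of (theta − threshold) terms and a response-style tendency can be expressed as a shift of the thresholds. A minimal sketch (the `extremity` adjustment is an illustrative assumption, not Henninger's exact parameterization):

```python
import math

def pcm_probs(theta, thresholds):
    """Partial credit model: P(X = k) for categories 0..K.

    theta: person trait level; thresholds: step difficulties b_1..b_K.
    """
    # Cumulative sums of (theta - b_j) give the unnormalized log-weights.
    logits = [0.0]
    for b in thresholds:
        logits.append(logits[-1] + (theta - b))
    weights = [math.exp(l) for l in logits]
    total = sum(weights)
    return [w / total for w in weights]

def shifted_thresholds(thresholds, extremity):
    """Illustrative response-style term: spreading or tightening the
    thresholds around their midpoint moves probability mass between the
    middle and the end categories."""
    mid = (len(thresholds) - 1) / 2
    return [b - extremity * (j - mid) for j, b in enumerate(thresholds)]
```

With symmetric thresholds and theta = 0, the category distribution is symmetric; separating trait and style then requires the kind of identification constraints the article discusses.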
He, Qingping; Meadows, Michelle; Black, Beth – Research Papers in Education, 2022
A potential negative consequence of high-stakes testing is inappropriate test behaviour involving individuals and/or institutions. Inappropriate test behaviour and test collusion can result in aberrant response patterns and anomalous test scores and invalidate the intended interpretation and use of test results. A variety of statistical techniques…
Descriptors: Statistical Analysis, High Stakes Tests, Scores, Response Style (Tests)
Sengül Avsar, Asiye – Measurement: Interdisciplinary Research and Perspectives, 2020
Various test theories have been developed to obtain valid and reliable test scores; one of them is nonparametric item response theory (NIRT). Mokken models, the most widely known NIRT models, are useful for small samples and short tests, and the Mokken package supports Mokken scale analysis. An important issue about validity is…
Descriptors: Response Style (Tests), Nonparametric Statistics, Item Response Theory, Test Validity
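The Mokken package referenced in the abstract is an R library; the core quantity it computes, Loevinger's scalability coefficient H, can be sketched in a few lines for dichotomous items (a minimal illustration, not the package's implementation):

```python
def loevinger_h(data):
    """Loevinger's scalability coefficient H for dichotomous items.

    data: list of 0/1 response patterns, one list per respondent.
    H = (sum of inter-item covariances) / (sum of their maxima given
    the item popularities); H = 1 for a perfect Guttman scale.
    """
    n = len(data)
    k = len(data[0])
    p = [sum(row[i] for row in data) / n for i in range(k)]
    cov_sum = 0.0
    cov_max_sum = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            pij = sum(row[i] * row[j] for row in data) / n
            cov_sum += pij - p[i] * p[j]
            cov_max_sum += min(p[i], p[j]) - p[i] * p[j]
    return cov_sum / cov_max_sum
```

Error-free Guttman data (every respondent passes all items up to some difficulty and none beyond it) yields H = 1; values above roughly 0.3 are conventionally taken as minimally scalable.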
Ulitzsch, Esther; von Davier, Matthias; Pohl, Steffi – Educational and Psychological Measurement, 2020
So far, modeling approaches for not-reached items have considered one single underlying process. However, missing values at the end of a test can occur for a variety of reasons. On the one hand, examinees may not reach the end of a test due to time limits and lack of working speed. On the other hand, examinees may not attempt all items and quit…
Descriptors: Item Response Theory, Test Items, Response Style (Tests), Computer Assisted Testing
Zehner, Fabian; Eichmann, Beate; Deribo, Tobias; Harrison, Scott; Bengs, Daniel; Andersen, Nico; Hahnel, Carolin – Journal of Educational Data Mining, 2021
The NAEP EDM Competition required participants to predict efficient test-taking behavior based on log data. This paper describes our top-down approach for engineering features by means of psychometric modeling, aiming at machine learning for the predictive classification task. For feature engineering, we employed, among others, the Log-Normal…
Descriptors: National Competency Tests, Engineering Education, Data Collection, Data Analysis
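The Log-Normal response time model the authors mention is commonly written as log T_ij = beta_i − tau_j + error, with item time intensities beta_i and person speed tau_j. A moment estimate of speed, as a hedged sketch (the parameter names follow the common formulation, not necessarily the authors' feature set):

```python
def estimate_speed(log_times, time_intensities):
    """Lognormal RT model: log T_ij = beta_i - tau_j + e.

    A simple moment estimate of person speed tau_j averages
    beta_i - log t_ij over the items the person answered.
    """
    n = len(log_times)
    return sum(b - lt for b, lt in zip(time_intensities, log_times)) / n
```

Person-level speed estimates like this are natural inputs for a downstream classifier of efficient test-taking behavior.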
NWEA, 2017
This document describes the following two new student engagement metrics now included on NWEA™ MAP® Growth™ reports, and provides guidance on how to interpret and use these metrics: (1) Percent of Disengaged Responses; and (2) Estimated Impact of Disengagement on RIT. These metrics will inform educators about what percentage of items from a…
Descriptors: Achievement Tests, Achievement Gains, Test Interpretation, Reaction Time
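Disengagement metrics of this kind are typically built on rapid-guessing detection: responses faster than a time threshold are flagged, and the flagged share is reported. A minimal sketch (the threshold value is illustrative, not NWEA's actual rule, and the RIT-impact estimate is not reproduced here):

```python
def percent_disengaged(response_times, threshold=3.0):
    """Share of item responses faster than a rapid-guessing
    threshold (in seconds), treated as disengaged."""
    flags = [t < threshold for t in response_times]
    return 100.0 * sum(flags) / len(flags)
```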
Lin, Wei-Fang; Hewitt, Gordon J.; Videras, Julio – New Directions for Institutional Research, 2017
This chapter examines the impact of declining student response rates on surveys administered at small- and medium-sized institutions. The potential for nonresponse bias and its effects are addressed.
Descriptors: National Surveys, Small Colleges, Response Rates (Questionnaires), Response Style (Tests)
Tutz, Gerhard; Berger, Moritz – Journal of Educational and Behavioral Statistics, 2016
Heterogeneity in response styles can affect the conclusions drawn from rating scale data. In particular, biased estimates can be expected if one ignores a tendency to middle categories or to extreme categories. An adjacent categories model is proposed that simultaneously models the content-related effects and the heterogeneity in response styles.…
Descriptors: Response Style (Tests), Rating Scales, Data Interpretation, Statistical Bias
Schneider, Darryl W.; Anderson, John R. – Cognitive Psychology, 2011
We propose and evaluate a memory-based model of Hick's law, the approximately linear increase in choice reaction time with the logarithm of set size (the number of stimulus-response alternatives). According to the model, Hick's law reflects a combination of associative interference during retrieval from declarative memory and occasional savings…
Descriptors: Reaction Time, Memory, Evaluation, Models
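The law under study is commonly written RT = a + b·log2(n + 1), where n is the number of stimulus-response alternatives. A sketch with illustrative constants (not values fitted by Schneider and Anderson):

```python
import math

def hicks_law_rt(n_alternatives, a=0.2, b=0.15):
    """Predicted mean choice reaction time (seconds) under Hick's law:
    RT = a + b * log2(n + 1). The intercept a and slope b here are
    placeholder values for illustration."""
    return a + b * math.log2(n_alternatives + 1)
```

The memory-based account in the paper explains why this logarithmic growth emerges from retrieval interference rather than treating the equation as primitive.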
Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay – Journal of Educational and Behavioral Statistics, 2011
This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…
Descriptors: Priming, Research Methodology, Probability, Item Response Theory
Wise, Vicki L.; Barham, Mary Ann – About Campus, 2012
The August 16, 2011, "Chronicle of Higher Education" article "Want Data? Ask Students. Again and Again" by Sara Lipka posits that in higher education there is a culture of oversurveying students and too often relying on surveys as the main, or only, way of assessing the impact of programs and services on student satisfaction and learning. Because…
Descriptors: Learner Engagement, Research Methodology, Test Validity, Response Style (Tests)
Liu, Qin – Association for Institutional Research, 2012
This discussion constructs a survey data quality strategy for institutional researchers in higher education in light of total survey error theory. It starts by describing the characteristics of institutional research and identifying gaps in the literature on survey data quality in institutional research, and then introduces the…
Descriptors: Institutional Research, Higher Education, Quality Control, Researchers
Masson, Michael E. J.; Rotello, Caren M. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2009
In many cognitive, metacognitive, and perceptual tasks, measurement of performance or prediction accuracy may be influenced by response bias. Signal detection theory provides a means of assessing discrimination accuracy independent of such bias, but its application crucially depends on distributional assumptions. The Goodman-Kruskal gamma…
Descriptors: Perception, Bias, Theories, Response Style (Tests)
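The standard equal-variance Gaussian signal detection indices separate sensitivity from bias: d′ = z(H) − z(F) and criterion c = −(z(H) + z(F))/2, where z is the inverse normal CDF. A minimal sketch using only the standard library:

```python
from statistics import NormalDist

def d_prime_and_criterion(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian signal detection indices.

    d' = z(H) - z(F) measures discrimination accuracy;
    c  = -(z(H) + z(F)) / 2 measures response bias.
    """
    z = NormalDist().inv_cdf
    zh, zf = z(hit_rate), z(false_alarm_rate)
    return zh - zf, -(zh + zf) / 2
```

As the abstract notes, these indices are bias-free only under the distributional assumptions; the paper's point is that alternatives such as the Goodman-Kruskal gamma carry assumptions of their own.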
Berk, Ronald A. – Journal of Faculty Development, 2010
Most faculty developers have a wide variety of rating scales that fly across their desk tops as their incremental program activities unfold during the academic year. The primary issue for this column is: What is the quality of those ratings used for decisions about people and programs? When students, faculty, and administrators rate a program or…
Descriptors: Response Style (Tests), Rating Scales, Faculty Development, Bias
Kubinger, Klaus D. – Educational and Psychological Measurement, 2009
The linear logistic test model (LLTM) breaks down the item parameter of the Rasch model as a linear combination of some hypothesized elementary parameters. Although the original purpose of applying the LLTM was primarily to generate test items with specified item difficulty, there are still many other potential applications, which may be of use…
Descriptors: Models, Test Items, Psychometrics, Item Response Theory
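The decomposition the abstract describes is beta_i = sum_j q_ij · eta_j: each item's Rasch difficulty is a weighted sum of elementary parameters, with weights given by the item's row of the Q matrix. A hedged sketch (the two-operation example is hypothetical):

```python
import math

def lltm_difficulty(q_row, eta):
    """LLTM: item difficulty as a linear combination of basic
    parameters eta, weighted by the item's row of the Q matrix."""
    return sum(q * e for q, e in zip(q_row, eta))

def rasch_prob(theta, beta):
    """Rasch model probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))
```

For instance, an item requiring two elementary operations with eta = (0.5, 0.3) gets difficulty 0.8, which then enters the Rasch model directly; this is how the LLTM supports generating items of specified difficulty.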