Showing 136 to 150 of 1,398 results
Peer reviewed
Direct link
Huang, Hung-Yu – Educational and Psychological Measurement, 2020
In educational assessments and achievement tests, test developers and administrators commonly assume that test-takers attempt all test items with full effort and leave no blank responses with unplanned missing values. However, aberrant response behavior, such as performance decline, dropping out beyond a certain point, and skipping certain items…
Descriptors: Item Response Theory, Response Style (Tests), Test Items, Statistical Analysis
Peer reviewed
Direct link
Bürkner, Paul-Christian; Schulte, Niklas; Holling, Heinz – Educational and Psychological Measurement, 2019
Forced-choice questionnaires have been proposed to avoid common response biases typically associated with rating scale questionnaires. To overcome ipsativity issues of trait scores obtained from classical scoring approaches of forced-choice items, advanced methods from item response theory (IRT) such as the Thurstonian IRT model have been…
Descriptors: Item Response Theory, Measurement Techniques, Questionnaires, Rating Scales
Peer reviewed
PDF on ERIC Download full text
Dibek, Munevver Ilgun; Cikrikci, Rahime Nukhet – International Journal of Progressive Education, 2021
This study aims to first investigate the effect of the extreme response style (ERS) which could lead to an attitude-achievement paradox among the countries participating in the Trends in International Mathematics and Science Study (TIMSS 2015), and then to determine the individual- and country-level relationships between attitude and achievement…
Descriptors: Item Response Theory, Response Style (Tests), Elementary Secondary Education, Achievement Tests
Peer reviewed
Direct link
Rios, Joseph A.; Guo, Hongwen; Mao, Liyang; Liu, Ou Lydia – International Journal of Testing, 2017
When examinees' test-taking motivation is questionable, practitioners must determine whether careless responding is of practical concern and if so, decide on the best approach to filter such responses. As there has been insufficient research on these topics, the objectives of this study were to: a) evaluate the degree of underestimation in the…
Descriptors: Response Style (Tests), Scores, Motivation, Computation
Peer reviewed
PDF on ERIC Download full text
Rushkin, Ilia; Chuang, Isaac; Tingley, Dustin – Journal of Learning Analytics, 2019
Each time a learner in a self-paced online course seeks to answer an assessment question, it takes some time for the student to read the question and arrive at an answer to submit. If multiple attempts are allowed, and the first answer is incorrect, it takes some time to provide a second answer. Here we study the distribution of such…
Descriptors: Online Courses, Response Style (Tests), Models, Learner Engagement
Peer reviewed
PDF on ERIC Download full text
Lang, David – Grantee Submission, 2019
Whether high-stakes exams such as the SAT or College Board AP exams should penalize incorrect answers is a controversial question. In this paper, we document that penalty functions can have differential effects depending on a student's risk tolerance. Moreover, literature shows that risk aversion tends to vary along other areas of concern such as…
Descriptors: High Stakes Tests, Risk, Item Response Theory, Test Bias
Peer reviewed
PDF on ERIC Download full text
Kim, Sooyeon; Moses, Tim – ETS Research Report Series, 2018
The purpose of this study is to assess the impact of aberrant responses on estimation accuracy in forced-choice format assessments. To that end, a wide range of aberrant response behaviors (e.g., fake, random, or mechanical responses) affecting upward of 20%-30% of the responses was manipulated under the multi-unidimensional pairwise…
Descriptors: Measurement Techniques, Response Style (Tests), Accuracy, Computation
Peer reviewed
Direct link
Höhne, Jan Karem; Krebs, Dagmar – International Journal of Social Research Methodology, 2018
The effect of the response scale direction on response behavior is a well-known phenomenon in survey research. While there are several approaches to explaining how such response order effects occur, the literature reports mixed evidence. Furthermore, different question formats seem to vary in their susceptibility to these effects. We therefore…
Descriptors: Test Items, Response Style (Tests), Questioning Techniques, Questionnaires
Peer reviewed
PDF on ERIC Download full text
Zehner, Fabian; Eichmann, Beate; Deribo, Tobias; Harrison, Scott; Bengs, Daniel; Andersen, Nico; Hahnel, Carolin – Journal of Educational Data Mining, 2021
The NAEP EDM Competition required participants to predict efficient test-taking behavior based on log data. This paper describes our top-down approach for engineering features by means of psychometric modeling, aiming at machine learning for the predictive classification task. For feature engineering, we employed, among others, the Log-Normal…
Descriptors: National Competency Tests, Engineering Education, Data Collection, Data Analysis
Peer reviewed
Direct link
Domínguez, César; López-Cuadrado, Javier; Armendariz, Anaje; Jaime, Arturo; Heras, Jónathan; Pérez, Tomás A. – Computer Assisted Language Learning, 2019
In this work, we explore the differences between proctored and unproctored Internet administration for a Basque language low-stakes test considering demographic factors such as age, gender, and knowledge level in the subject. To this aim, we have developed an ad hoc application that allows us to establish a set of filters and techniques that…
Descriptors: Language Tests, Computer Assisted Testing, Supervision, Internet
Peer reviewed
Direct link
Patton, Jeffrey M.; Cheng, Ying; Hong, Maxwell; Diao, Qi – Journal of Educational and Behavioral Statistics, 2019
In psychological and survey research, the prevalence and serious consequences of careless responses from unmotivated participants are well known. In this study, we propose to iteratively detect careless responders and cleanse the data by removing their responses. The careless responders are detected using person-fit statistics. In two simulation…
Descriptors: Test Items, Response Style (Tests), Identification, Computation
Peer reviewed
Direct link
Höhne, Jan Karem; Schlosser, Stephan – International Journal of Social Research Methodology, 2019
Participation in web surveys via smartphones has increased continuously in recent years. The reasons for this increase are a growing proportion of smartphone owners and expanding mobile Internet access. However, research has shown that smartphone respondents are frequently distracted and/or multitasking, which might affect completion and response…
Descriptors: Online Surveys, Handheld Devices, Response Rates (Questionnaires), Response Style (Tests)
Peer reviewed
Direct link
Terentev, Evgeniy; Maloshonok, Natalia – International Journal of Social Research Methodology, 2019
This paper aims to explore the response-order effects for rating questions presented in item-by-item and grid formats. It was hypothesized that the primacy effect occurs for both formats of questions, and that this effect is dependent on age, education, and type of device used for responding to questions. Two randomized experiments were conducted…
Descriptors: Questioning Techniques, Test Format, Online Courses, Student Surveys
Peer reviewed
Direct link
Loosveldt, Geert; Wuyts, Celine; Beullens, Koen – Quality Assurance in Education: An International Perspective, 2018
Purpose: In survey methodology, it is well-known that interviewers can have an impact on the registered answers. This paper aims to focus on one type of interviewer effect that arises from the differences between interviewers in the systematic effects of each interviewer on the answers. In the first case, the authors evaluate interviewer effects…
Descriptors: Interviews, Foreign Countries, Differences, Measurement
Peer reviewed
Direct link
Kam, Chester Chun Seng – Sociological Methods & Research, 2018
The item wording (or keying) effect is respondents' differential response style to positively and negatively worded items. Despite decades of research, the nature of the effect is still unclear. This article proposes a potential reason; namely, that the item wording effect is scale-specific, and thus findings are applicable only to a particular…
Descriptors: Response Style (Tests), Test Items, Language Usage, College Students