Showing 1 to 15 of 165 results
Peer reviewed
PDF on ERIC: Download full text
Boško, Martin; Vonková, Hana; Papajoanu, Ondrej; Moore, Angie – Bulgarian Comparative Education Society, 2023
International large-scale assessments, such as the Programme for International Student Assessment (PISA), are a crucial source of information for education researchers and policymakers. The assessment also includes a student questionnaire; however, the data can be biased by differences in reporting behavior between students. In this paper, we…
Descriptors: Comparative Analysis, Response Style (Tests), Foreign Countries, Institutional Characteristics
Peer reviewed
PDF on ERIC: Download full text
Zehner, Fabian; Harrison, Scott; Eichmann, Beate; Deribo, Tobias; Bengs, Daniel; Andersen, Nico; Hahnel, Carolin – International Educational Data Mining Society, 2020
The "2nd Annual WPI-UMASS-UPENN EDM Data Mining Challenge" required contestants to predict efficient testtaking based on log data. In this paper, we describe our theory-driven and psychometric modeling approach. For feature engineering, we employed the Log-Normal Response Time Model for estimating latent person speed, and the Generalized…
Descriptors: Data Analysis, Competition, Classification, Prediction
Peer reviewed
PDF on ERIC: Download full text
Lang, David – Grantee Submission, 2019
Whether high-stakes exams such as the SAT or College Board AP exams should penalize incorrect answers is a controversial question. In this paper, we document that penalty functions can have differential effects depending on a student's risk tolerance. Moreover, the literature shows that risk aversion tends to vary across other areas of concern such as…
Descriptors: High Stakes Tests, Risk, Item Response Theory, Test Bias
Peer reviewed
Direct link
Harbaugh, Allen G.; Liu, Min – AERA Online Paper Repository, 2017
This research examines the effects of nonattending response pattern contamination and select response style patterns on measures of model fit (CFI) and internal reliability (Cronbach's alpha). A simulation study examines the effects resulting from the percentage of contamination, the number of manifest items measured, and sample size. Initial results…
Descriptors: Factor Analysis, Response Style (Tests), Goodness of Fit, Test Reliability
Peer reviewed
Direct link
Steinmann, Isa; Braeken, Johan; Strietholt, Rolf – AERA Online Paper Repository, 2021
This study investigates consistent and inconsistent respondents to mixed-worded questionnaire scales in large-scale assessments. Mixed-worded scales contain both positively and negatively worded items and are universally applied in different survey and content areas. Due to the changing wording, these scales require a more careful reading and…
Descriptors: Questionnaires, Measurement, Test Items, Response Style (Tests)
Peer reviewed
Direct link
Cole, Ki Matlock; Turner, Ronna L.; Gitchel, Wallace D. – AERA Online Paper Repository, 2017
This study uses the nominal response model to investigate the effects of extreme response styles. The Zung Self-Rating Anxiety Scale (SAS) is a commonly used scale for the identification of anxiety disorders. In some cases, the response options are not extreme, ranging from "A little of the time" to "Most of the time"; in other…
Descriptors: Self Evaluation (Individuals), Depression (Psychology), Rating Scales, Response Style (Tests)
Burfitt, Joan – Mathematics Education Research Group of Australasia, 2017
Multiple-choice items are used in large-scale assessments of mathematical achievement for secondary students in many countries. Research findings can be implemented to improve the quality of the items and hence increase the amount of information gathered about student learning from each item. One way to achieve this is to create items for which…
Descriptors: Multiple Choice Tests, Mathematics Tests, Credits, Knowledge Level
Peer reviewed
PDF on ERIC: Download full text
Carroll, David – Journal of Institutional Research, 2011
Historically, responses to the Course Experience Questionnaire (CEQ) were required to be collected by self-administered paper or online questionnaire to be eligible for official analysis. CEQ responses collected by telephone were excluded from the final analysis file to minimise the potential for bias due to mode effects: systematic variation in…
Descriptors: Questionnaires, Regression (Statistics), Intermode Differences, Telephone Surveys
Liu, Qin – Online Submission, 2009
This paper aims to construct a survey data quality strategy for institutional researchers in higher education in light of total survey error theory. It begins by describing the characteristics of institutional research and identifying the gaps in the literature regarding survey data quality issues in institutional research. This is followed by…
Descriptors: Higher Education, Institutional Research, Quality Control, Researchers
Shuford, Emir H., Jr. – 1974
A discussion is provided of some statistical measures and graphical information that, when used as feedback to the student, facilitate his ability to assess his own uncertainty. These measures and graphs, which result from the application of least squares analysis and information theory to decision-theoretic testing, provide the student with the…
Descriptors: Computer Programs, Confidence Testing, Feedback, Prediction
Kong, Xiaojing J.; Bhola, Dennison S.; Wise, Steven L. – Online Submission, 2005
In this study four methods were compared for setting a response time threshold that differentiates rapid-guessing behavior from solution behavior when examinees are obliged to complete a low-stakes test. The four methods examined were: (1) a fixed threshold for all test items; (2) thresholds based on item surface features such as the amount of…
Descriptors: Reaction Time, Response Style (Tests), Methods, Achievement Tests
McCollum, Janet; Thompson, Bruce – Online Submission, 1980
Response error refers to the tendency to respond to items based on the perceived social desirability or undesirability of given responses. Response error can be particularly problematic when all or most of the items on a measure are extremely attractive or unattractive. The present paper proposes a method of (a) distinguishing among preferences…
Descriptors: Methods, Response Style (Tests), Social Desirability, Reliability
Folsom-Meek, Sherry L.; And Others – 1988
This study compared orthoptic vision development and balance performance of nonhandicapped and learning disabled children between 10 and 13 years of age. Each subject was individually administered two orthoptic vision screening tests (Cover Test and Biopter), a dynamic balance test using a changing-consistency board, and two static…
Descriptors: Learning Disabilities, Physical Activities, Preadolescents, Psychomotor Skills
Belcher, Terence L.; Parisi, Sharon A. – 1974
The effects of low and high levels of test-situation stress on creativity test performance were examined. A group of 60 fifth and sixth graders was randomly assigned to stress situations (high, low, and control) in which verbal subtasks of the Torrance Tests of Creative Thinking (TTCT) were administered. Verbal fluency scores from the TTCT…
Descriptors: Creativity Tests, Grade 5, Grade 6, Response Style (Tests)
Garrison, Wayne M.; Stanwyck, Douglas J. – 1979
The susceptibility to faking on the Tennessee Self Concept Scale was examined among college students. Additionally, groups of respondents, instructed to respond in a "random" fashion to pre-determined numbers of items in the TSCS, were subjected to a plausibility analysis of their test response vectors using the Rasch measurement model.…
Descriptors: College Students, Higher Education, Item Analysis, Response Style (Tests)