Showing all 6 results
Peer reviewed
Sideridis, Georgios D. – Educational and Psychological Measurement, 2016
The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…
Descriptors: Learning Disabilities, Test Validity, Measures (Individuals), Hierarchical Linear Modeling
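For context on the "item difficulties of the Rasch model" mentioned in the abstract above: in the standard dichotomous Rasch model (a general psychometric formulation, not a detail drawn from this particular article), the probability that person p answers item i correctly depends only on the person's ability theta_p and the item's difficulty b_i:

\[
P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}
\]

Systematic shifts in the estimated b_i when emotional factors such as anxiety and motivation are left unmodeled are the kind of distortion the study appears to investigate.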
Peer reviewed
Wyse, Adam E.; Bunch, Michael B.; Deville, Craig; Viger, Steven G. – Educational and Psychological Measurement, 2014
This article describes a novel variation of the Body of Work method that uses construct maps to overcome problems of transparency, rater inconsistency, and score gaps commonly occurring with the Body of Work method. The Body of Work method with construct maps was implemented to set cut-scores for two separate K-12 assessment programs in a large…
Descriptors: Standard Setting (Scoring), Educational Assessment, Elementary Secondary Education, Measurement
Peer reviewed
Plieninger, Hansjörg; Meiser, Thorsten – Educational and Psychological Measurement, 2014
Response styles, the tendency to respond to Likert-type items irrespective of content, are a widely known threat to the reliability and validity of self-report measures. However, how to measure and control for response styles such as extreme responding is still debated. Recently, multiprocess item response theory models have been proposed that…
Descriptors: Validity, Item Response Theory, Rating Scales, Models
Peer reviewed
Lakin, Joni M.; Lai, Emily R. – Educational and Psychological Measurement, 2012
For educators seeking to differentiate instruction, cognitive ability tests sampling multiple content domains, including verbal, quantitative, and nonverbal reasoning, provide superior information about student strengths and weaknesses compared with unidimensional reasoning measures. However, these ability tests have not been fully evaluated with…
Descriptors: Aptitude Tests, Nonverbal Ability, Cognitive Ability, Verbal Ability
Peer reviewed
Kamphaus, Randy W.; Thorpe, Jennifer S.; Winsor, Anne Pierce; Kroncke, Anna P.; Dowdy, Erin T.; VanDeventer, Meghan C. – Educational and Psychological Measurement, 2007
A principal components analysis of the Teacher Rating Scale-Child (TRS-C) of the Behavior Assessment System for Children was conducted with a cross-sectional cohort of 659 children in Grades 1 to 5. A predictive validity study was then conducted with a 2-year longitudinal sample of 206 children. The results suggested that scores from the resulting…
Descriptors: Grade 1, Grade 2, Grade 3, Grade 4
Peer reviewed
Woolley, Michael E.; Bowen, Gary L.; Bowen, Natasha K. – Educational and Psychological Measurement, 2006
Cognitive pretesting (CP) is an interview methodology for pretesting the validity of items during the development of self-report instruments. This article reports on the development and evaluation of a systematic method for rating the validity performance of self-report items using CP interview text data. Five raters were trained in the application of…
Descriptors: Measurement Techniques, Validity, Pretesting, Interviews