Showing 1 to 15 of 69 results
Peer reviewed
Jake C. Steggerda; Sandra Yu Rueger; Ana J. Bridges – Children & Schools, 2024
Authors evaluated the Student Behavior Checklist-Brief (SBC-B) to test whether teacher-reports of student learning approach (i.e., learned helplessness [LH] and mastery orientation [MO]) were invariant across academic subjects. The current sample includes ethnically diverse seventh and eighth grade students (N = 145; 53 percent male) and six teams…
Descriptors: Psychometrics, Student Behavior, Check Lists, Scores
Peer reviewed
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Grantee Submission, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. (2020) estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores,…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
Peer reviewed
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Journal of Educational Measurement, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores, including…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
Peer reviewed
Cristian Zanon; Nan Zhao; Nursel Topkaya; Ertugrul Sahin; David L. Vogel; Melissa M. Ertl; Samineh Sanatkar; Hsin-Ya Liao; Mark Rubin; Makilim N. Baptista; Winnie W. S. Mak; Fatima Rashed Al-Darmaki; Georg Schomerus; Ying-Fen Wang; Dalia Nasvytiene – International Journal of Testing, 2025
Examinations of the internal structure of the Depression, Anxiety, and Stress Scale-21 (DASS-21) have yielded inconsistent conclusions within and across cultural contexts. This study examined the dimensionality and reliability of the DASS-21 across three theoretically plausible factor structures (i.e., unidimensional, oblique three-factor, and…
Descriptors: Anxiety, Depression (Psychology), Psychometrics, Cultural Context
Sophie Lilit Litschwartz – ProQuest LLC, 2021
In education research, test scores are a common object of analysis. Across studies, test scores can serve as an important outcome, a highly predictive covariate, or a means of assigning treatment. However, test scores measure an underlying proficiency that cannot be observed directly, and so they contain error. This measurement error has implications for how…
Descriptors: Scores, Inferences, Educational Research, Evaluation Methods
Xue Zhang; Chun Wang – Grantee Submission, 2022
Item-level fit analysis not only serves as a complementary check to global fit analysis; it is also essential in scale development, because the fit results guide item revision and/or deletion (Liu & Maydeu-Olivares, 2014). During data collection, missing response data are likely to occur for various reasons. Chi-square-based item fit…
Descriptors: Goodness of Fit, Item Response Theory, Scores, Test Length
Peer reviewed
Yeon Ha Kim – Journal of Early Adolescence, 2025
This study aimed to introduce an ego-resiliency questionnaire for preadolescents (the ER-P) by restructuring the ER89 using the data of 1398 preadolescents from the Panel Study on Korean Children. The ER-P was proposed as a 10-item second-order instrument with two factors (Optimal Regulation and Openness to Life Experiences). The ER-P achieved…
Descriptors: Self Concept, Resilience (Psychology), Preadolescents, Asians
Peer reviewed
Mehtap Aktas; Nezaket Bilge Uzun; Bilge Bakir Aygar – International Journal of Contemporary Educational Research, 2023
This study aims to examine the measurement invariance of the Social Media Addiction Scale (SMAS) in terms of gender, time spent on social media accounts, and the number of social media accounts. Invariance analyses conducted within the scope of the research were carried out on 672 participants. Measurement invariance studies were examined…
Descriptors: Addictive Behavior, Scores, Comparative Analysis, Measures (Individuals)
Peer reviewed
Lehmann, Vicky; Hillen, Marij A.; Verdam, Mathilde G. E.; Pieterse, Arwen H.; Labrie, Nanon H. M.; Fruijtier, Agnetha D.; Oreel, Tom H.; Smets, Ellen M. A.; Visser, Leonie N. C. – International Journal of Social Research Methodology, 2023
The Video Engagement Scale (VES) is a quality indicator to assess engagement in experimental video-vignette studies, but its measurement properties warrant improvement. Data from previous studies were combined (N = 2676) and split into three subsamples for a stepped analytical approach. We tested construct validity, criterion validity,…
Descriptors: Likert Scales, Video Technology, Vignettes, Construct Validity
Peer reviewed
Kopp, Jason P.; Jones, Andrew T. – Applied Measurement in Education, 2020
Traditional psychometric guidelines suggest that at least several hundred respondents are needed to obtain accurate parameter estimates under the Rasch model. However, recent research indicates that Rasch equating yields accurate parameter estimates with sample sizes as small as 25. Item parameter drift under the Rasch model has been…
Descriptors: Item Response Theory, Psychometrics, Sample Size, Sampling
Kate E. Walton – ACT, Inc., 2024
There is a tradeoff between scale length and psychometric concerns. The two are, in fact, directly linked. Generally, when scales are shortened, reliability is reduced, and when scales are lengthened, reliability is improved, provided the items added to the scale are comparable psychometrically (AERA et al., 2014). Scale reliability, in turn,…
Descriptors: Psychometrics, Error of Measurement, Rating Scales, Reliability
Peer reviewed
Chang, Heesun – Language Assessment Quarterly, 2022
Drawing on the framework of invariant measurement from Rasch measurement theory, the purpose of this study is to psychometrically evaluate the 20 language and teaching skill domains of the International Teaching Assistant (ITA) Test using the many-facet Rasch model and to empirically explore performance differences between females and males in…
Descriptors: Teaching Assistants, Grammar, Second Language Learning, Second Language Instruction
Peer reviewed
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
Peer reviewed
Martín-Puga, M. Eva; Pelegrina, Santiago; Gómez-Pérez, M. Mar; Justicia-Galiano, M. José – Journal of Psychoeducational Assessment, 2022
The objectives were to examine the factorial structure of the Academic Procrastination Scale-Short Form (APS-S) and its measurement invariance across gender and educational levels, and to determine possible differences in procrastination across gender, educational levels, and grades. The sample comprised 1486 Spanish primary and secondary school…
Descriptors: Psychometrics, Measures (Individuals), Study Habits, Scores
Oranje, Andreas; Kolstad, Andrew – Journal of Educational and Behavioral Statistics, 2019
The design and psychometric methodology of the National Assessment of Educational Progress (NAEP) is constantly evolving to meet the changing interests and demands stemming from a rapidly shifting educational landscape. NAEP has been built on strong research foundations that include conducting extensive evaluations and comparisons before new…
Descriptors: National Competency Tests, Psychometrics, Statistical Analysis, Computation