Showing 1 to 15 of 88 results
Peer reviewed
Yan Xia; Xinchang Zhou – Educational and Psychological Measurement, 2025
Parallel analysis has been considered one of the most accurate methods for determining the number of factors in factor analysis. One major advantage of parallel analysis over traditional factor retention methods (e.g., Kaiser's rule) is that it addresses the sampling variability of eigenvalues obtained from random data generated under an identity population correlation matrix, representing the…
Descriptors: Factor Analysis, Statistical Analysis, Evaluation Methods, Sampling
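For readers unfamiliar with the method this entry evaluates, a minimal sketch of Horn's parallel analysis follows (the function name and defaults are ours, and the paper's refinements are not reproduced):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, percentile=95, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalues
    exceed the chosen percentile of eigenvalues from random normal data of
    the same dimensions (identity population correlation matrix)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for s in range(n_sims):
        x = rng.standard_normal((n, p))
        sims[s] = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
    cutoffs = np.percentile(sims, percentile, axis=0)
    return int(np.sum(obs > cutoffs))
```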
Edgar C. Merkle; Oludare Ariyo; Sonja D. Winter; Mauricio Garnier-Villarreal – Grantee Submission, 2023
We review common situations in Bayesian latent variable models where the prior distribution that a researcher specifies differs from the prior distribution used during estimation. These situations can arise from the positive definite requirement on correlation matrices, from sign indeterminacy of factor loadings, and from order constraints on…
Descriptors: Models, Bayesian Statistics, Correlation, Evaluation Methods
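The sign-indeterminacy issue the authors mention is easy to demonstrate: flipping the sign of a factor (and its loading column) leaves the model-implied covariance, and hence the likelihood, unchanged, so a prior specified on the loadings is effectively folded across sign regions during estimation. A minimal numpy illustration (ours, not the authors' code):

```python
import numpy as np

# Sign-indeterminacy demo: reversing the sign of one factor (and its
# loading column) leaves the model-implied covariance unchanged, so the
# likelihood cannot distinguish the two sign regions.
rng = np.random.default_rng(1)
Lambda = rng.standard_normal((6, 2))          # factor loadings (6 items, 2 factors)
Phi = np.array([[1.0, 0.3], [0.3, 1.0]])      # factor correlation matrix
Psi = np.diag(rng.uniform(0.3, 0.7, 6))       # unique variances

flip = np.diag([1.0, -1.0])                   # flip the second factor
Lambda_f, Phi_f = Lambda @ flip, flip @ Phi @ flip

Sigma = Lambda @ Phi @ Lambda.T + Psi
Sigma_f = Lambda_f @ Phi_f @ Lambda_f.T + Psi
assert np.allclose(Sigma, Sigma_f)            # identical fit under both signs
```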
Peer reviewed
Stefanie A. Wind; Yangmeng Xu – Educational Assessment, 2024
We explored three approaches to resolving or re-scoring constructed-response items in mixed-format assessments: rater agreement, person fit, and targeted double scoring (TDS). We used a simulation study to consider how the three approaches impact the psychometric properties of student achievement estimates, with an emphasis on person fit. We found…
Descriptors: Interrater Reliability, Error of Measurement, Evaluation Methods, Examiners
Ben Stenhaug; Ben Domingue – Grantee Submission, 2022
The fit of an item response model is typically conceptualized as whether a given model could have generated the data. We advocate for an alternative view of fit, "predictive fit", based on the model's ability to predict new data. We derive two predictive fit metrics for item response models that assess how well an estimated item response…
Descriptors: Goodness of Fit, Item Response Theory, Prediction, Models
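As an illustration of the general idea (not the two specific metrics the paper derives), predictive fit can be operationalized as the log-likelihood of held-out responses under parameters estimated on training data; a Rasch-model sketch, with all names our own:

```python
import numpy as np

def rasch_prob(theta, b):
    """P(correct) under the Rasch model for abilities theta and item difficulties b."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def holdout_log_likelihood(responses, theta_hat, b_hat):
    """Log-likelihood of a held-out 0/1 response matrix under previously
    estimated parameters; higher values mean better predictive fit."""
    p = np.clip(rasch_prob(theta_hat, b_hat), 1e-12, 1 - 1e-12)
    return float(np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p)))
```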
Peer reviewed
Olvera Astivia, Oscar L.; Zumbo, Bruno D. – Measurement: Interdisciplinary Research and Perspectives, 2019
Methods to generate random correlation matrices have been proposed in the literature, but very few instances exist where these correlation matrices are structured or where the statistical properties of the algorithms are known. By relying on the tetrad relation discovered by Spearman and the properties of the beta distribution, an algorithm is…
Descriptors: Correlation, Psychometrics, Benchmarking, Evaluation Methods
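The tetrad-based algorithm itself is in the paper; for contrast, a generic unstructured generator of random correlation matrices looks like the sketch below, which is exactly the kind of method whose statistical properties are typically unknown:

```python
import numpy as np

def random_correlation_matrix(p, k=3, seed=0):
    """Unstructured random correlation matrix via random factor loadings:
    build a positive-definite Gram matrix, then rescale to unit diagonal.
    (Generic illustration; NOT the tetrad/beta algorithm of the paper.)"""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((p, k))
    S = W @ W.T + np.diag(rng.uniform(0.5, 1.5, p))  # ensures positive definiteness
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)
```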
Peer reviewed
D'Urso, E. Damiano; Tijmstra, Jesper; Vermunt, Jeroen K.; De Roover, Kim – Educational and Psychological Measurement, 2023
Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurements of individuals' latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most-used method to evaluate these…
Descriptors: Factor Analysis, Measurement Techniques, Self Evaluation (Individuals), Psychological Patterns
Peer reviewed
Leaman, Marion C.; Edmonds, Lisa A. – Journal of Speech, Language, and Hearing Research, 2021
Purpose: This study evaluated interrater reliability (IRR) and test-retest stability (TRTS) of seven linguistic measures (percent correct information units, relevance, subject-verb-[object], complete utterance, grammaticality, referential cohesion, global coherence), and communicative success in unstructured conversation and in a story narrative…
Descriptors: Aphasia, Psychometrics, Correlation, Speech Language Pathology
Peer reviewed
Lambie, Glenn W.; Mullen, Patrick R.; Swank, Jacqueline M.; Blount, Ashley – Measurement and Evaluation in Counseling and Development, 2018
Supervisors evaluated counselors-in-training at multiple points during their practicum experience using the Counseling Competencies Scale (CCS; N = 1,070). The CCS evaluations were randomly split to conduct exploratory factor analysis and confirmatory factor analysis, resulting in a 2-factor model (61.5% of the variance explained).
Descriptors: Counselor Training, Counseling, Measures (Individuals), Competence
Peer reviewed
Lawver, Rebecca G.; McKim, Billy R.; Smith, Amy R.; Aschenbrener, Mollie S.; Enns, Kellie – Career and Technical Education Research, 2016
Research on effective teaching has been conducted in a variety of settings for more than 40 years. This study offers direction for future effective teaching research in secondary agricultural education and has implications for career and technical education. Specifically, 142 items consisting of characteristics, behaviors, and/or techniques…
Descriptors: Psychometrics, Teacher Effectiveness, Agricultural Education, Secondary Education
Peer reviewed
Lee, Minji K.; Sweeney, Kevin; Melican, Gerald J. – Educational Assessment, 2017
This study investigates the relationships among factor correlations, inter-item correlations, and the reliability estimates of subscores, providing a guideline with respect to psychometric properties of useful subscores. In addition, it compares subscore estimation methods with respect to reliability and distinctness. The subscore estimation…
Descriptors: Scores, Test Construction, Test Reliability, Test Validity
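One classical link between inter-item correlations and subscore reliability, of the kind this study examines, is the Spearman-Brown (standardized alpha) identity; a one-line sketch for reference:

```python
def standardized_alpha(mean_inter_item_r, k):
    """Standardized Cronbach's alpha for a k-item subscore from the
    average inter-item correlation (classical test theory identity)."""
    return k * mean_inter_item_r / (1 + (k - 1) * mean_inter_item_r)

# Example: 5 items with average inter-item correlation .40 -> alpha ~= .77
print(round(standardized_alpha(0.40, 5), 2))
```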
Peer reviewed
Varela, Otmar; Mead, Esther – Journal of Education for Business, 2018
Popular teamwork assessments have been strongly criticized on the grounds of poor psychometric properties and their disconnect with conceptual models of teamwork. These issues raise concerns with respect to our ability to evaluate efforts devoted to advancing teamwork in academia. We report the development of a teamwork assessment that builds on…
Descriptors: Teamwork, Evaluation Methods, Test Validity, Psychometrics
Peer reviewed
Blikstein, Paulo; Kabayadondo, Zaza; Martin, Andrew; Fields, Deborah – Journal of Engineering Education, 2017
Background: As the maker movement is increasingly adopted into K-12 schools, students are developing new competences in exploration and fabrication technologies. This study assesses learning with these technologies in K-12 makerspaces and FabLabs. Purpose: Our study describes the iterative process of developing an assessment instrument for this…
Descriptors: Technological Literacy, Engineering Education, Skill Development, Entrepreneurship
Kankaraš, Miloš; Feron, Eva; Renbarger, Rachel – OECD Publishing, 2019
Triangulation -- the combined use of different assessment methods or sources to evaluate psychological constructs -- remains a rarely used assessment approach despite its potential to overcome the inherent constraints of individual assessment methods. This paper uses field test data from a new OECD Study on Social and Emotional Skills to examine…
Descriptors: Interpersonal Competence, Emotional Intelligence, Evaluation Methods, Student Evaluation
Peer reviewed
Deng, Lifang; Marcoulides, George A.; Yuan, Ke-Hai – Educational and Psychological Measurement, 2015
Certain diversity among team members is beneficial to the growth of an organization. Multiple measures have been proposed to quantify diversity, although little is known about their psychometric properties. This article proposes several methods to evaluate the unidimensionality and reliability of three measures of diversity. To approximate the…
Descriptors: Likert Scales, Psychometrics, Cultural Differences, Measures (Individuals)
Peer reviewed; PDF available on ERIC
Wedman, Jonathan; Lyrén, Per-Erik – Practical Assessment, Research & Evaluation, 2015
When subscores on a test are reported to the test taker, the appropriateness of reporting them depends on whether they provide useful information above what is provided by the total score. Subscores that fail to do so lack adequate psychometric quality and should not be reported. There are several methods for examining the quality of subscores,…
Descriptors: Evaluation Methods, Psychometrics, Scores, Tests
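One widely used check of this kind is Haberman's PRMSE criterion. A deliberately simplified sketch under classical test theory assumptions (our formulation, treating the total as an external predictor whose measurement error is independent of the subscore's; Haberman's exact formulation differs):

```python
def subscore_adds_value(rel_sub, corr_sub_total):
    """Simplified Haberman-style check: the observed subscore predicts the
    true subscore with PRMSE equal to its reliability, while the observed
    total predicts it with PRMSE corr**2 / rel_sub (the correlation
    disattenuated for subscore unreliability). The subscore is worth
    reporting only if the first quantity is larger."""
    prmse_from_subscore = rel_sub
    prmse_from_total = corr_sub_total ** 2 / rel_sub
    return prmse_from_subscore > prmse_from_total
```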