Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 8
Since 2006 (last 20 years): 56
Descriptor
Factor Analysis: 222
Test Validity: 123
Factor Structure: 90
Validity: 48
Test Reliability: 45
Construct Validity: 44
Higher Education: 38
Measures (Individuals): 38
College Students: 33
Psychometrics: 33
Scores: 32
Source
Educational and Psychological Measurement: 222
Publication Type
Journal Articles: 182
Reports - Research: 149
Reports - Evaluative: 30
Reports - Descriptive: 3
Speeches/Meeting Papers: 2
Numerical/Quantitative Data: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 16
High Schools: 11
Postsecondary Education: 5
Secondary Education: 3
Elementary Education: 2
Middle Schools: 2
Early Childhood Education: 1
Elementary Secondary Education: 1
Grade 3: 1
Grade 6: 1
Grade 8: 1
Location
Canada: 4
Israel: 4
Germany: 3
Australia: 2
Netherlands: 2
United States: 2
Africa: 1
Asia: 1
Belgium: 1
Finland: 1
France: 1
Tenko Raykov – Educational and Psychological Measurement, 2024
This note is concerned with the benefits that can result from the use of the maximal reliability and optimal linear combination concepts in educational and psychological research. Within the widely used framework of unidimensional multi-component measuring instruments, it is demonstrated that the linear combination of their components that…
Descriptors: Educational Research, Behavioral Science Research, Reliability, Error of Measurement
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2022
In the context of Bayesian factor analysis, it is possible to compute plausible values, which might be used as covariates or predictors or to provide individual scores for the Bayesian latent variables. Previous simulation studies ascertained the validity of mean plausible values by the mean squared difference of the mean plausible values and the…
Descriptors: Bayesian Statistics, Factor Analysis, Prediction, Simulation
Pere J. Ferrando; Fabia Morales-Vives; Ana Hernández-Dorado – Educational and Psychological Measurement, 2024
In recent years, some models for binary and graded format responses have been proposed to assess unipolar variables or "quasi-traits." These studies have mainly focused on clinical variables that have traditionally been treated as bipolar traits. In the present study, we have made a proposal for unipolar traits measured with continuous…
Descriptors: Item Analysis, Goodness of Fit, Accuracy, Test Validity
Ferrando, Pere Joan; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2019
Many psychometric measures yield data that are compatible with (a) an essentially unidimensional factor analysis solution and (b) a correlated-factor solution. Deciding which of these structures is the most appropriate and useful is of considerable importance, and various procedures have been proposed to help in this decision. The only fully…
Descriptors: Validity, Models, Correlation, Factor Analysis
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony – Educational and Psychological Measurement, 2018
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
Descriptors: Test Items, Test Format, Correlation, Construct Validity
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2017
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Descriptors: Error of Measurement, Factor Analysis, Research Methodology, Psychometrics
Attali, Yigal – Educational and Psychological Measurement, 2014
This article presents a comparative judgment approach for holistically scored constructed response tasks. In this approach, the grader rank orders (rather than rates) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…
Descriptors: Responses, Item Response Theory, Scores, Rating Scales
Stanley, Leanne M.; Edwards, Michael C. – Educational and Psychological Measurement, 2016
The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…
Descriptors: Test Reliability, Goodness of Fit, Scores, Patients
Prati, Gabriele – Educational and Psychological Measurement, 2012
The study aimed to develop the Homophobic Bullying Scale and to investigate its psychometric properties. The items of the Homophobic Bullying Scale were created to measure high school students' bullying behaviors motivated by homophobia, including verbal bullying, relational bullying, physical bullying, property bullying, sexual harassment, and…
Descriptors: Factor Analysis, Validity, Measures (Individuals), Bullying
Kam, Chester Chun Seng; Zhou, Mingming – Educational and Psychological Measurement, 2015
Previous research has found the effects of acquiescence to be generally consistent across item "aggregates" within a single survey (i.e., essential tau-equivalence), but it is unknown whether this phenomenon is consistent at the "individual item" level. This article evaluated the often assumed but inadequately tested…
Descriptors: Test Items, Surveys, Criteria, Correlation
Curseu, Petru Lucian; Schruijer, Sandra G. L. – Educational and Psychological Measurement, 2012
This study investigates the relationship between the five decision-making styles evaluated by the General Decision-Making Style Inventory, indecisiveness, and rationality in decision making. Using a sample of 102 middle-level managers, the results show that the rational style positively predicts rationality in decision making and negatively…
Descriptors: Decision Making, Measures (Individuals), Predictive Validity, Middle Management
Zhang, Xijuan; Savalei, Victoria – Educational and Psychological Measurement, 2016
Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format,…
Descriptors: Factor Structure, Psychological Testing, Alternative Assessment, Test Items
Plieninger, Hansjörg; Meiser, Thorsten – Educational and Psychological Measurement, 2014
Response styles, the tendency to respond to Likert-type items irrespective of content, are a widely known threat to the reliability and validity of self-report measures. However, it is still debated how to measure and control for response styles such as extreme responding. Recently, multiprocess item response theory models have been proposed that…
Descriptors: Validity, Item Response Theory, Rating Scales, Models
Tuccitto, Daniel E.; Giacobbi, Peter R., Jr.; Leite, Walter L. – Educational and Psychological Measurement, 2010
This study tested five confirmatory factor analytic (CFA) models of the Positive Affect Negative Affect Schedule (PANAS) to provide validity evidence based on its internal structure. A sample of 223 club sport athletes indicated their emotions during the past week. Results revealed that an orthogonal two-factor CFA model, specifying error…
Descriptors: Factor Analysis, Models, Affective Measures, Validity
Martin, Andrew J.; Malmberg, Lars-Erik; Liem, Gregory Arief D. – Educational and Psychological Measurement, 2010
Statistical biases associated with single-level analyses underscore the importance of partitioning variance/covariance matrices into individual and group levels. From a multilevel perspective based on data from 21,579 students in 58 high schools, the present study assesses the multilevel factor structure of motivation and engagement with a…
Descriptors: High School Students, Student Motivation, Learner Engagement, Measures (Individuals)