Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 2 |
Since 2006 (last 20 years) | 7 |
Descriptor
Factor Analysis | 63 |
Test Reliability | 63 |
Test Validity | 39 |
Factor Structure | 20 |
Test Construction | 14 |
Higher Education | 13 |
Rating Scales | 13 |
Psychometrics | 11 |
Correlation | 9 |
Item Analysis | 9 |
Measures (Individuals) | 7 |
Source
Educational and Psychological Measurement | 63 |
Publication Type
Journal Articles | 48 |
Reports - Research | 44 |
Reports - Evaluative | 3 |
Numerical/Quantitative Data | 1 |
Reports - Descriptive | 1 |
Tests/Questionnaires | 1 |
Education Level
Higher Education | 3 |
Postsecondary Education | 2 |
Elementary Education | 1 |
High Schools | 1 |
Middle Schools | 1 |
Location
Canada | 2 |
Australia | 1 |
Belgium | 1 |
Colombia | 1 |
Finland | 1 |
France | 1 |
Germany | 1 |
Jordan | 1 |
Netherlands | 1 |
Norway | 1 |
Oregon | 1 |
Deng, Lifang; Marcoulides, George A.; Yuan, Ke-Hai – Educational and Psychological Measurement, 2015
A certain degree of diversity among team members is beneficial to the growth of an organization. Multiple measures have been proposed to quantify diversity, although little is known about their psychometric properties. This article proposes several methods to evaluate the unidimensionality and reliability of three measures of diversity. To approximate the…
Descriptors: Likert Scales, Psychometrics, Cultural Differences, Measures (Individuals)
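The article's own procedures are not reproduced here; as a rough illustration of how unidimensionality and internal consistency of a scale are commonly screened, the sketch below (assuming a respondents-by-items NumPy array X, with the data and function names being this editor's assumptions) computes coefficient alpha and the first-to-second eigenvalue ratio of the inter-item correlation matrix.

```python
# Illustrative screen for unidimensionality and internal consistency.
# X is assumed to be a respondents-by-items score matrix; this is a generic
# sketch, not the article's specific methods for the diversity measures.
import numpy as np

def cronbach_alpha(X):
    """Coefficient alpha from a respondents-by-items score matrix."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def eigenvalue_ratio(X):
    """First-to-second eigenvalue ratio of the inter-item correlation matrix;
    a large ratio is a crude signal of unidimensionality."""
    R = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
    return eigvals[0] / eigvals[1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                  # one common factor
    X = 0.7 * latent + 0.5 * rng.normal(size=(200, 6))  # six congeneric items
    print(f"alpha = {cronbach_alpha(X):.2f}, eigenvalue ratio = {eigenvalue_ratio(X):.1f}")
```
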
Stanley, Leanne M.; Edwards, Michael C. – Educational and Psychological Measurement, 2016
The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…
Descriptors: Test Reliability, Goodness of Fit, Scores, Patients
Zhang, Xijuan; Savalei, Victoria – Educational and Psychological Measurement, 2016
Many psychological scales written in the Likert format include reverse-worded (RW) items to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format,…
Descriptors: Factor Structure, Psychological Testing, Alternative Assessment, Test Items
Ordonez, Xavier G.; Ponsoda, Vicente; Abad, Francisco J.; Romero, Sonia J. – Educational and Psychological Measurement, 2009
This article proposes a new test (called the EQEBI) for the measurement of epistemological beliefs, integrating and extending the Epistemological Questionnaire (EQ) and the Epistemic Beliefs Inventory (EBI). In Study 1, the two tests were translated and applied to a Spanish-speaking sample. A detailed dimensionality exploration, by means of the…
Descriptors: Epistemology, Beliefs, Tests, Spanish Speaking

Ziomek, Robert L.; And Others – Educational and Psychological Measurement, 1976
The Ross Educational Philosophical Inventory was examined psychometrically using expert judges and a factor analysis of a sample of responses to the inventory. Results were generally negative, although moderately positive findings emerged for two of the inventory's four categories. (JKS)
Descriptors: Educational Philosophy, Factor Analysis, Test Reliability, Test Validity

Gorsuch, Richard L. – Educational and Psychological Measurement, 1980
Kaiser and Michael reported a formula for factor scores giving an internal consistency reliability and its square root, the domain validity. Using this formula is inappropriate if variables are included that have trivial rather than salient weights on the factor for which the score is being computed. (Author/RL)
Descriptors: Factor Analysis, Factor Structure, Scoring Formulas, Test Reliability

McDonald, Roderick P. – Educational and Psychological Measurement, 1978
It is shown that if a behavior domain can be described by the common factor model with a finite number of factors, the squared correlation between the sum of a selection of items and the domain total score is actually greater than coefficient alpha. (Author/JKS)
Descriptors: Factor Analysis, Item Analysis, Mathematical Models, Measurement
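For readers unfamiliar with the quantities involved, the block below restates coefficient alpha for a k-item sum and the inequality the abstract describes, in generic notation rather than McDonald's own.

```latex
% Coefficient alpha for the item sum Y = X_1 + ... + X_k:
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{X_i}}{\sigma^{2}_{Y}}\right)
% The abstract's claim, for a behavior domain satisfying a common factor model
% with finitely many factors and domain total score T:
\rho^{2}_{YT} \;\ge\; \alpha ,
% with strict inequality under the conditions discussed in the article.
```
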

Mays, Robert – Educational and Psychological Measurement, 1978
A FORTRAN program for clustering variables using the alpha coefficient of reliability is described. For batch operation, a rule for stopping the agglomerative procedure is available. The conversational version of the program allows the user to intervene in the process to test the final solution for sensitivity to changes. (Author/JKS)
Descriptors: Cluster Analysis, Computer Programs, Factor Analysis, Online Systems
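The original FORTRAN program is not reproduced here. The sketch below is an assumed, minimal reimplementation of the general idea only: agglomeratively merge the pair of variable clusters whose pooled items give the highest coefficient alpha. The fixed n_clusters stopping point is this editor's simplification; it stands in for the batch stopping rule and the interactive version described in the abstract.

```python
# Minimal sketch (not the original FORTRAN program): agglomerative clustering of
# variables in which, at each step, the two clusters whose pooled items yield
# the highest coefficient alpha are merged.
import numpy as np

def cronbach_alpha(X):
    k = X.shape[1]
    if k < 2:
        return 0.0
    return (k / (k - 1)) * (1.0 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

def alpha_clustering(X, n_clusters=2):
    """Greedy agglomeration of variables (columns of X) driven by coefficient alpha."""
    clusters = [[j] for j in range(X.shape[1])]          # start with singletons
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                alpha = cronbach_alpha(X[:, clusters[a] + clusters[b]])
                if best is None or alpha > best[0]:
                    best = (alpha, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]          # merge the best pair
        del clusters[b]
    return clusters

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f1, f2 = rng.normal(size=(200, 1)), rng.normal(size=(200, 1))
    X = np.hstack([0.8 * f1 + 0.4 * rng.normal(size=(200, 3)),
                   0.8 * f2 + 0.4 * rng.normal(size=(200, 3))])
    print(alpha_clustering(X, n_clusters=2))   # typically recovers {0,1,2} and {3,4,5}
```
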

Green, Samuel B.; And Others – Educational and Psychological Measurement, 1977
Confusion in the literature between the concepts of internal consistency and homogeneity has led to a misuse of coefficient alpha as an index of item homogeneity. This misuse is discussed and several indices of item homogeneity derived from the model of common factor analysis are offered as alternatives. (Author/JKS)
Descriptors: Factor Analysis, Item Analysis, Test Interpretation, Test Items
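A small worked example (not taken from the article) illustrates the misuse: suppose a 10-item scale splits into two unrelated clusters of five items each, with inter-item correlations of .80 within clusters and .00 between clusters. Averaging over the 45 item pairs and applying the standardized form of alpha gives:

```latex
% Average inter-item correlation over the 45 pairs (20 within-cluster, 25 between):
\bar{r} \;=\; \frac{20(.80) + 25(.00)}{45} \;\approx\; .36
% Standardized coefficient alpha for k = 10 items:
\alpha_{\text{std}} \;=\; \frac{k\bar{r}}{1 + (k-1)\bar{r}} \;=\; \frac{10(.36)}{1 + 9(.36)} \;\approx\; .85
```

Alpha is high even though the scale is plainly two-dimensional, which is why alpha should not be read as an index of item homogeneity.
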

Schwab, Richard L.; And Others – Educational and Psychological Measurement, 1983
The construct validity of the Role Questionnaire (Rizzo et al.) for teachers was assessed through an analysis of the complete responses of a randomly selected sample of 448 Massachusetts teachers. The results of this cross-validation provide support for the construct validity of the questionnaire. (Author/PN)
Descriptors: Adults, Factor Analysis, Questionnaires, Role Conflict

Piotrowski, Chris; Dunham, Frances Y. – Educational and Psychological Measurement, 1984
Research on the semantic differential technique has provided evidence for variance in Osgood's formulation of dimensions of connotative meaning. Retest data based on Piotrowski's original sample are reported. Results indicate support for the stability and consistency of the Evaluation dimension. Moderate consistency was found in scales comprising the…
Descriptors: Elementary Education, Factor Analysis, Factor Structure, Semantic Differential

Woehlke, Paula; Ohara, Takeshi – Educational and Psychological Measurement, 1980
To assess the stability of the factor structure of the Instructional Improvement Questionnaire (IIQ), factor analyses were run for 1973, 1974, and 1975 results, partialling out three variables: expected grade, percent taking the course as an elective, and student's year. The factors were stable over the three years. (Author/BW)
Descriptors: Factor Analysis, Factor Structure, Questionnaires, Student Evaluation of Teacher Performance
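The study's exact analysis is not reproduced here; a minimal sketch of the general "partial out covariates, then factor the residuals" step, assuming an items matrix Y and a covariates matrix Z (names and simulated data are this editor's assumptions), might look like this:

```python
# Sketch of partialling covariates out of item responses before factor analysis.
# Y: respondents-by-items ratings; Z: respondents-by-covariates matrix
# (e.g., expected grade, elective status, year of study). Illustrative only.
import numpy as np

def residualize(Y, Z):
    """Return Y with the linear effect of Z (plus an intercept) removed via OLS."""
    Z1 = np.column_stack([np.ones(len(Z)), Z])        # add intercept column
    beta, *_ = np.linalg.lstsq(Z1, Y, rcond=None)     # OLS coefficients
    return Y - Z1 @ beta                              # residual item scores

def partial_correlation_matrix(Y, Z):
    """Inter-item correlations after partialling out Z; input to a factor analysis."""
    return np.corrcoef(residualize(Y, Z), rowvar=False)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    Z = rng.normal(size=(300, 3))                     # three covariates
    f = rng.normal(size=(300, 1))                     # one rating factor
    Y = 0.6 * f + 0.3 * (Z @ rng.normal(size=(3, 8))) + 0.5 * rng.normal(size=(300, 8))
    R = partial_correlation_matrix(Y, Z)
    print(np.round(R[:3, :3], 2))                     # corner of the partial R matrix
```
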

Schutz, Howard G.; Rucker, Margaret H. – Educational and Psychological Measurement, 1975
Data from 2-, 3-, 6-, and 7-point rating scales were analyzed to determine whether scale length affected response patterns. Results indicate that data configurations are relatively invariant with changes in number of scale points. (Author)
Descriptors: Data Collection, Factor Analysis, Questionnaires, Rating Scales

Werts, C. E.; And Others – Educational and Psychological Measurement, 1978
A procedure for estimating the reliability of a factorially complex composite is considered, and an application to Scholastic Aptitude Test data is provided. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Mathematical Models, Matrices
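As background, one standard expression for the reliability of a composite under a common factor model with uncorrelated unique parts (not necessarily the specific procedure developed in the article) is:

```latex
% Composite Y = X_1 + ... + X_k under the factor model X = \Lambda f + e,
% with factor covariance matrix \Phi and unique variances \theta_i:
\rho_{YY'} \;=\; \frac{\mathbf{1}^{\top}\Lambda\Phi\Lambda^{\top}\mathbf{1}}
                      {\mathbf{1}^{\top}\Lambda\Phi\Lambda^{\top}\mathbf{1} \;+\; \sum_{i=1}^{k}\theta_i}
```
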

Werts, Charles E.; Linn, Robert L. – Educational and Psychological Measurement, 1972
Descriptors: Analysis of Variance, Correlation, Factor Analysis, Mathematical Models