Publication Date
| Date Range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 5 |
| Since 2017 (last 10 years) | 7 |
| Since 2007 (last 20 years) | 9 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Computer Software | 13 |
| Factor Analysis | 13 |
| Item Analysis | 13 |
| Correlation | 4 |
| Rating Scales | 4 |
| Comparative Analysis | 3 |
| Construct Validity | 3 |
| Foreign Countries | 3 |
| Item Response Theory | 3 |
| Measurement | 3 |
| Teaching Methods | 3 |
Author
| Author | Results |
| --- | --- |
| Aiken, Lewis R. | 1 |
| Binyan Xu | 1 |
| Bowen Fang | 1 |
| Collins, Linda M. | 1 |
| D. H. McKinnon | 1 |
| Dengming Yao | 1 |
| Dywel, Malwina | 1 |
| Finger, Michael S. | 1 |
| Fleckenstein, Melanie | 1 |
| Fry, Elizabeth Brondos | 1 |
| Khatib, Mohammad | 1 |
Publication Type
| Publication Type | Results |
| --- | --- |
| Journal Articles | 10 |
| Reports - Research | 9 |
| Reports - Descriptive | 2 |
| Speeches/Meeting Papers | 2 |
| Tests/Questionnaires | 2 |
| Books | 1 |
| Numerical/Quantitative Data | 1 |
| Reports - Evaluative | 1 |
Education Level
| Education Level | Results |
| --- | --- |
| Higher Education | 4 |
| Postsecondary Education | 3 |
Audience
| Audience | Results |
| --- | --- |
| Researchers | 2 |
| Practitioners | 1 |
| Students | 1 |
Yimin Ning; Wenjun Zhang; Dengming Yao; Bowen Fang; Binyan Xu; Tommy Tanu Wijaya – Education and Information Technologies, 2025
The integration of AI in education highlights the significance of Teachers' AI Literacy (TAIL). Existing assessment tools, however, are hindered by incomplete indicators and a lack of practicality for large-scale application, necessitating a more systematic and credible evaluation method. This study is based on a systematic literature review and…
Descriptors: Artificial Intelligence, Rating Scales, Technological Literacy, Factor Analysis
R. Freed; D. H. McKinnon; M. T. Fitzgerald; S. Salimpour – Physical Review Physics Education Research, 2023
This paper presents the results of a confirmatory factor analysis on two self-efficacy scales designed to probe the self-efficacy of college-level introductory astronomy (Astro-101) students (n = 15,181) from 22 institutions across the United States of America and Canada. The students undertook a course based on similar curriculum materials, which…
Descriptors: Self Efficacy, Science Instruction, Astronomy, Factor Analysis
Paaßen, Benjamin; Dywel, Malwina; Fleckenstein, Melanie; Pinkwart, Niels – International Educational Data Mining Society, 2022
Item response theory (IRT) is a popular method to infer student abilities and item difficulties from observed test responses. However, IRT struggles with two challenges: how to map items to skills when multiple skills are present, and how to infer the ability of new students who were not part of the training data. Inspired by recent advances…
Descriptors: Item Response Theory, Test Items, Item Analysis, Inferences
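As background for the abstract above, the standard IRT setup it builds on can be sketched with the simplest model in the family, the Rasch (1PL) model: given calibrated item difficulties, a student's ability is the maximum-likelihood value that matches their observed score. This is a generic textbook sketch, not the paper's skill-mapping method; the data and the Newton-Raphson routine are illustrative.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, iters=50):
    """Maximum-likelihood ability estimate via Newton-Raphson,
    assuming item difficulties are already calibrated."""
    theta = 0.0
    for _ in range(iters):
        probs = [rasch_prob(theta, b) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, probs))  # observed - expected score
        info = sum(p * (1.0 - p) for p in probs)             # Fisher information
        if info == 0.0:
            break
        theta += grad / info
    return theta

# Five items of increasing difficulty; the student answers 3 of 5 correctly.
theta = estimate_ability([1, 1, 0, 1, 0], [-1.0, -0.5, 0.0, 0.5, 1.0])
```

At the MLE the expected score equals the observed score, which is why the gradient above is simply their difference; note the estimate diverges for all-correct or all-incorrect response patterns.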
Legacy, Chelsey; Zieffler, Andrew; Fry, Elizabeth Brondos; Le, Laura – Statistics Education Research Journal, 2022
The influx of data and the advances in computing have led to calls to update the introductory statistics curriculum to better meet the needs of the contemporary workforce. To this end, we developed the COMputational Practices in Undergraduate TEaching of Statistics (COMPUTES) instrument, which can be used to measure the extent to which computation…
Descriptors: Statistics Education, Introductory Courses, Undergraduate Students, Teaching Methods
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2018
This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…
Descriptors: Measurement Techniques, Factor Analysis, Item Response Theory, Likert Scales
Tajeddin, Zia; Khatib, Mohammad; Mahdavi, Mohsen – Language Testing, 2022
Critical language assessment (CLA) has been addressed in numerous studies. However, the majority of the studies have overlooked the need for a practical framework to measure the CLA dimension of teachers' language assessment literacy (LAL). This gap prompted us to develop and validate a critical language assessment literacy (CLAL) scale to further…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Language Tests
Shroff, Ronnie Homi; Ting, Fridolin Sze Thou; Lam, Wai Hung – Australasian Journal of Educational Technology, 2019
This article reports on the design, development, and validation of a new instrument, the Technology-Enabled Active Learning Inventory (TEAL), to measure students' perceptions of active learning in a technology-enabled learning context. By laying the theoretical foundation, a conceptual framework for technology-enabled active learning was…
Descriptors: Student Attitudes, Active Learning, Validity, Measures (Individuals)
Yurdugul, Halil – Applied Psychological Measurement, 2009
This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
Descriptors: Intervals, Monte Carlo Methods, Computer Software, Factor Analysis
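SIMREL's internals are not documented in the abstract above; as a hedged illustration of what "simulation of alpha coefficients and the estimation of its confidence intervals" can involve, the sketch below computes Cronbach's alpha and a percentile-bootstrap confidence interval. Function names, the data-generating model, and the bootstrap settings are all illustrative assumptions, not SIMREL's actual procedure.

```python
import random
import statistics

def cronbach_alpha(rows):
    """Cronbach's alpha for a person-by-item score matrix."""
    k = len(rows[0])
    item_vars = [statistics.pvariance([r[i] for r in rows]) for i in range(k)]
    total_var = statistics.pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1.0 - sum(item_vars) / total_var)

def bootstrap_alpha_ci(rows, reps=1000, level=0.95, seed=0):
    """Percentile-bootstrap confidence interval for alpha (resampling persons)."""
    rng = random.Random(seed)
    n = len(rows)
    alphas = sorted(
        cronbach_alpha([rows[rng.randrange(n)] for _ in range(n)])
        for _ in range(reps)
    )
    lo = alphas[int(((1 - level) / 2) * reps)]
    hi = alphas[int((1 - (1 - level) / 2) * reps) - 1]
    return lo, hi

# Simulated data: 30 persons, 5 items loading on one latent trait plus noise.
rng = random.Random(1)
data = []
for _ in range(30):
    t = rng.gauss(0, 1)
    data.append([t + rng.gauss(0, 0.8) for _ in range(5)])

a = cronbach_alpha(data)
ci = bootstrap_alpha_ci(data)
```

Resampling whole persons (rows) rather than individual scores preserves the inter-item covariance structure that alpha depends on.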
Nering, Michael L., Ed.; Ostini, Remo, Ed. – Routledge, Taylor & Francis Group, 2010
This comprehensive "Handbook" focuses on the most used polytomous item response theory (IRT) models. These models help us understand the interaction between examinees and test questions where the questions have various response categories. The book reviews all of the major models and includes discussions about how and where the models…
Descriptors: Guides, Item Response Theory, Test Items, Correlation
Finger, Michael S. – International Journal of Testing, 2004
MicroFACT 2.0, an item factor analysis computer program, was reviewed. Program features were described in detail, as well as the program output. The performance of MicroFACT was evaluated on a Windows 2000, Pentium III platform. MicroFACT was found to be fairly easy to use, and one problem encountered was reported. Program requirements and…
Descriptors: Computer Software, Factor Analysis, Guidelines, Purchasing
Collins, Linda M.; And Others – Multivariate Behavioral Research, 1986
The present study compares the performance of phi coefficients and tetrachorics along two dimensions of factor recovery in binary data. These dimensions are (1) accuracy of nontrivial factor identifications; and (2) factor structure recovery given a priori knowledge of the correct number of factors to rotate. (Author/LMO)
Descriptors: Computer Software, Factor Analysis, Factor Structure, Item Analysis
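Of the two statistics the study above compares, the phi coefficient is the simple one: it is the Pearson correlation of two binary variables, computable directly from a 2x2 contingency table. The sketch below shows only phi; tetrachoric correlation is omitted because it requires estimating a bivariate-normal model numerically. The example data are invented for illustration.

```python
import math

def phi_coefficient(x, y):
    """Phi coefficient for two binary (0/1) variables:
    phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d)),
    where a..d are the cells of the 2x2 contingency table."""
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    d = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 0)
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

x = [1, 1, 1, 0, 0, 1, 0, 0]
y = [1, 1, 0, 0, 1, 1, 0, 0]
r = phi_coefficient(x, y)  # moderate positive association
```

The study's point of comparison is that phi is attenuated by differing item difficulties, whereas the tetrachoric estimates the correlation of the latent continuous variables assumed to underlie the dichotomies.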
Aiken, Lewis R. – Educational and Psychological Measurement, 1985
Three numerical coefficients for analyzing the validity and reliability of ratings are described. Each coefficient is computed as the ratio of an obtained to a maximum sum of differences in ratings. The coefficients are also applicable to the item analysis, agreement analysis, and cluster or factor analysis of rating-scale data. (Author/BW)
Descriptors: Computer Software, Data Analysis, Factor Analysis, Item Analysis
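The abstract above describes each coefficient only as "the ratio of an obtained to a maximum sum of differences in ratings." The sketch below is an illustrative agreement index built in that spirit, not Aiken's published formulas: it normalizes the obtained sum of absolute pairwise rating differences by a simple upper bound, so 1.0 means perfect agreement. The function name, bound, and data are all invented for illustration.

```python
from itertools import combinations

def difference_ratio_agreement(ratings, scale_min=1, scale_max=5):
    """Illustrative agreement index: one minus the ratio of the obtained
    sum of absolute pairwise rating differences to a loose maximum
    (every pair at opposite scale extremes). 1.0 = perfect agreement."""
    pairs = list(combinations(ratings, 2))
    obtained = sum(abs(a - b) for a, b in pairs)
    max_diff = (scale_max - scale_min) * len(pairs)
    return 1.0 - obtained / max_diff

high = difference_ratio_agreement([4, 4, 5, 4])  # raters nearly agree
low = difference_ratio_agreement([1, 5, 1, 5])   # raters split to extremes
```

The difference-ratio structure is what makes such coefficients reusable across item analysis, agreement analysis, and rating-scale data, as the abstract notes.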
Muraki, Eiji – 1984
The TESTFACT computer program and full-information factor analysis of test items were used in a computer simulation conducted to correct for the guessing effect. Full-information factor analysis also corrects for omitted items. The present version of TESTFACT handles up to five factors and 150 items. A preliminary smoothing of the tetrachoric…
Descriptors: Comparative Analysis, Computer Simulation, Computer Software, Correlation