Showing all 15 results
Peer reviewed
Xiao, Leifeng; Hau, Kit-Tai – Applied Measurement in Education, 2023
We compared coefficient alpha with five alternatives (omega total, omega RT, omega h, GLB, and coefficient H) in two simulation studies. Results showed that for unidimensional scales, (a) all indices except omega h performed similarly well under most conditions; (b) alpha remained a sound choice; (c) GLB and coefficient H overestimated reliability with small…
Descriptors: Test Theory, Test Reliability, Factor Analysis, Test Length
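For readers who want to see the quantities being compared, below is a minimal Python sketch of coefficient alpha computed from a persons-by-items score matrix, together with omega total computed from a one-factor solution. The data and all names are illustrative; in practice omega's loadings and uniquenesses would come from a fitted factor model.

```python
import numpy as np

def cronbach_alpha(X):
    """Coefficient alpha for an n_persons x k_items score matrix X:
    alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def omega_total(loadings, uniquenesses):
    """Omega total from a one-factor solution:
    (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    num = loadings.sum() ** 2
    return num / (num + uniquenesses.sum())

# Simulated unidimensional scale: one factor, eight items
rng = np.random.default_rng(0)
factor = rng.normal(size=(500, 1))
lam = rng.uniform(0.5, 0.9, size=8)
X = factor @ lam[None, :] + rng.normal(scale=0.6, size=(500, 8))
print(round(cronbach_alpha(X), 3))
```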
Peer reviewed
Anderson, Daniel; Kahn, Joshua D.; Tindal, Gerald – Applied Measurement in Education, 2017
Unidimensionality and local independence are two common assumptions of item response theory. The former implies that all items measure a common latent trait, while the latter implies that responses are independent, conditional on respondents' location on the latent trait. Yet, few tests are truly unidimensional. Unmodeled dimensions may result in…
Descriptors: Robustness (Statistics), Item Response Theory, Mathematics Tests, Grade 6
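As a concrete illustration of the local-independence assumption, here is a rough Python sketch of one common diagnostic, Yen's Q3: the correlations among item residuals after the latent trait is accounted for. The Rasch parameters are simulated and treated as known, which real applications cannot do; Q3 values far from about -1/(k-1) flag dependent item pairs.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 10
theta = rng.normal(size=n)           # person abilities (known here)
b = np.linspace(-1.5, 1.5, k)        # item difficulties (known here)

# Rasch probabilities and simulated binary responses
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
y = (rng.uniform(size=(n, k)) < p).astype(float)

# Yen's Q3: correlations among residuals y - p; under local
# independence these hover near -1/(k-1), not exactly zero
resid = y - p
q3 = np.corrcoef(resid, rowvar=False)
np.fill_diagonal(q3, np.nan)
print(round(np.nanmax(np.abs(q3)), 3))
```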
Peer reviewed
Kahraman, Nilufer; Brown, Crystal B. – Applied Measurement in Education, 2015
Psychometric models based on the structural equation modeling framework are commonly used in many multiple-choice test settings to assess the measurement invariance of test items across examinee subpopulations. The premise of the current article is that they may also be useful in the context of performance assessment tests to test measurement invariance…
Descriptors: Factor Analysis, Structural Equation Models, Medical Students, Performance Based Assessment
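A formal invariance test constrains parameters to equality across groups within a multigroup SEM; as a simplified stand-in, the sketch below fits a one-factor model separately in two simulated subgroups and compares the loading patterns. The factor_analyzer package and all variable names are assumptions for illustration only.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # assumed installed

def group_loadings(X):
    """One-factor maximum-likelihood loadings for one subgroup."""
    fa = FactorAnalyzer(n_factors=1, rotation=None, method="ml")
    fa.fit(X)
    l = fa.loadings_.ravel()
    return l * np.sign(l.sum())   # resolve the sign indeterminacy

def simulate_group(rng, lam, n=400):
    """n examinees from a one-factor model with loadings lam."""
    f = rng.normal(size=(n, 1))
    return f @ lam[None, :] + rng.normal(scale=0.7, size=(n, lam.size))

rng = np.random.default_rng(2)
lam = np.array([0.7, 0.6, 0.8, 0.5, 0.7, 0.6])
X_a = simulate_group(rng, lam)   # subgroup A
X_b = simulate_group(rng, lam)   # subgroup B, same loadings

# Near-zero differences are consistent with metric invariance;
# large per-item gaps flag candidate non-invariant items
print(np.round(group_loadings(X_a) - group_loadings(X_b), 2))
```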
Peer reviewed
Oliveri, Maria; McCaffrey, Daniel; Ezzo, Chelsea; Holtzman, Steven – Applied Measurement in Education, 2017
The assessment of noncognitive traits is challenging due to possible response biases, "subjectivity," and "faking." Standardized third-party evaluations, in which an external evaluator rates an applicant's strengths and weaknesses on various noncognitive traits, are a promising alternative. However, accurate score-based…
Descriptors: Factor Analysis, Decision Making, College Admission, Likert Scales
Peer reviewed
Pokropek, Artur; Borgonovi, Francesca; McCormick, Carina – Applied Measurement in Education, 2017
Large-scale international assessments rely on indicators of the resources that students report having in their homes to capture the financial capital of their families. The scaling methodology currently used to develop the Programme for International Student Assessment (PISA) background indices is designed to maximize within-country comparability…
Descriptors: Foreign Countries, Achievement Tests, Secondary School Students, International Assessment
Peer reviewed
Davis, Laurie Laughlin; Kong, Xiaojing; McBride, Yuanyuan; Morrison, Kristin M. – Applied Measurement in Education, 2017
The definition of what it means to take a test online continues to evolve with the inclusion of a broader range of item types and a wide array of devices used by students to access test content. To ensure the validity and reliability of test scores for all students, device comparability research should be conducted to evaluate the impact of…
Descriptors: Educational Technology, Technology Uses in Education, High School Students, Tests
Peer reviewed
Keller, Lisa A.; Keller, Robert R. – Applied Measurement in Education, 2015
Equating test forms is an essential activity in standardized testing, made all the more important by the accountability systems created under the Adequate Yearly Progress mandate. It is through equating that scores from different test forms become comparable, which allows changes in student performance to be tracked from…
Descriptors: Item Response Theory, Rating Scales, Standardized Tests, Scoring Rubrics
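The article concerns IRT-based equating with rated items; for orientation, here is a minimal sketch of the simplest classical alternative, linear (mean-sigma) equating under a random-groups design. All scores are simulated and the function name is illustrative.

```python
import numpy as np

def linear_equate(x_scores, y_scores):
    """Return a map from form-X scores to the form-Y scale:
    y* = sd_Y / sd_X * (x - mean_X) + mean_Y."""
    mx, sx = x_scores.mean(), x_scores.std(ddof=1)
    my, sy = y_scores.mean(), y_scores.std(ddof=1)
    return lambda x: sy / sx * (x - mx) + my

rng = np.random.default_rng(3)
form_x = rng.normal(50, 10, size=2000)   # scores on form X
form_y = rng.normal(53, 9, size=2000)    # scores on form Y
to_y = linear_equate(form_x, form_y)
print(round(to_y(60.0), 1))   # a form-X score of 60 on the Y scale
```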
Peer reviewed
Randall, Jennifer; Engelhard, George, Jr. – Applied Measurement in Education, 2010
The psychometric properties and multigroup measurement invariance of scores across subgroups, items, and persons on the "Reading for Meaning" items from the Georgia Criterion Referenced Competency Test (CRCT) were assessed in a sample of 778 seventh-grade students. Specifically, we sought to determine the extent to which score-based…
Descriptors: Testing Accommodations, Test Items, Learning Disabilities, Factor Analysis
Peer reviewed
Cook, Linda; Eignor, Daniel; Sawaki, Yasuyo; Steinberg, Jonathan; Cline, Frederick – Applied Measurement in Education, 2010
This study compared the underlying factors measured by a state standards-based grade 4 English-Language Arts (ELA) assessment given to several groups of students. The focus of the research was to gather evidence regarding whether or not the tests measured the same construct or constructs for students without disabilities who took the test under…
Descriptors: Language Arts, Educational Assessment, Grade 4, State Standards
Peer reviewed
Willse, John T.; Goodman, Joshua T.; Allen, Nancy; Klaric, John – Applied Measurement in Education, 2008
The current research demonstrates the effectiveness of using structural equation modeling (SEM) for the investigation of subgroup differences with sparse data designs where not every student takes every item. Simulations were conducted that reflected missing data structures like those encountered in large survey assessment programs (e.g., National…
Descriptors: Structural Equation Models, Simulation, Item Response Theory, Factor Analysis
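The sparse designs mentioned here are "matrix sampling" layouts in which each student answers only one booklet of items. The short Python sketch below simulates that missing-by-design structure; the counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n_students, n_items, n_booklets = 900, 30, 3

# Split the item pool into booklets; each student sees one booklet,
# so every item is answered by roughly 1/3 of students
booklet_items = np.array_split(np.arange(n_items), n_booklets)
assignment = rng.integers(n_booklets, size=n_students)

responses = np.full((n_students, n_items), np.nan)
for s in range(n_students):
    items = booklet_items[assignment[s]]
    responses[s, items] = rng.integers(0, 2, size=items.size)

print(round(np.isnan(responses).mean(), 2))   # ~0.67 missing by design
```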
Peer reviewed
Finch, Holmes; Stage, Alan Kirk; Monahan, Patrick – Applied Measurement in Education, 2008
A primary assumption underlying several common methods for modeling item response data is unidimensionality, that is, that test items tap only one latent trait. This assumption can be assessed in several ways, including nonlinear factor analysis and DETECT, a method based on item conditional covariances. When multidimensionality is identified,…
Descriptors: Test Items, Factor Analysis, Item Response Theory, Comparative Analysis
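To make the conditional-covariance idea behind DETECT concrete, the sketch below computes, for an item pair, the average within-stratum covariance after conditioning on the rest score. With two uncorrelated simulated traits, same-trait pairs show positive conditional covariance while cross-trait pairs do not; this is only the core quantity, not the full DETECT index.

```python
import numpy as np

def conditional_cov(y, i, j):
    """Average covariance of items i and j within groups of
    examinees matched on the rest score (total minus i and j)."""
    rest = y.sum(axis=1) - y[:, i] - y[:, j]
    covs, weights = [], []
    for s in np.unique(rest):
        grp = rest == s
        if grp.sum() > 1:
            covs.append(np.cov(y[grp, i], y[grp, j])[0, 1])
            weights.append(grp.sum())
    return np.average(covs, weights=weights)

# Two uncorrelated traits: items 0-2 measure trait 1, items 3-5 trait 2
rng = np.random.default_rng(5)
t = rng.normal(size=(2000, 2))
y = np.zeros((2000, 6))
for i in range(6):
    trait = t[:, 0] if i < 3 else t[:, 1]
    y[:, i] = (rng.uniform(size=2000) < 1 / (1 + np.exp(-trait)))

print(round(conditional_cov(y, 0, 1), 3))   # same trait: positive
print(round(conditional_cov(y, 0, 4), 3))   # cross trait: ~0 or negative
```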
Peer reviewed
Finch, Holmes; Monahan, Patrick – Applied Measurement in Education, 2008
This article introduces a bootstrap generalization of the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. The methodology uses marginal maximum likelihood nonlinear factor analysis to compute a test statistic from a parametric bootstrap within the MPA…
Descriptors: Monte Carlo Methods, Factor Analysis, Generalization, Methods
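For context, here is a compact sketch of the resampling logic parallel analysis rests on: retain factors whose observed eigenvalues exceed a percentile of eigenvalues from simulated comparison data. This simplified version resamples independent normal data, whereas the MPA method described above generates the comparison data from a fitted marginal maximum likelihood nonlinear factor model.

```python
import numpy as np

def parallel_analysis(X, n_sims=200, seed=0):
    """Count eigenvalues of the observed correlation matrix that
    exceed the 95th percentile of eigenvalues from same-sized
    independent normal data (a simplified parallel analysis)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    sim = np.empty((n_sims, k))
    for b in range(n_sims):
        Z = rng.normal(size=(n, k))
        sim[b] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
    crit = np.percentile(sim, 95, axis=0)
    return int((obs > crit).sum())   # suggested number of factors

rng = np.random.default_rng(6)
f = rng.normal(size=(400, 1))
lam = rng.uniform(0.5, 0.8, size=(1, 8))
X = f @ lam + rng.normal(scale=0.7, size=(400, 8))
print(parallel_analysis(X))   # expect 1 for this one-factor simulation
```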
Peer reviewed
Anastasi, Anne – Applied Measurement in Education, 1988
The importance of defining the variables entered into correlation tables for factor-analytic research on intelligence is discussed. Research on job analysis, task decomposition, performance-related intelligence changes, and cross-cultural differences could be incorporated into the definition of variables. (TJH)
Descriptors: Cross Cultural Studies, Educational Research, Factor Analysis, Intelligence
Peer reviewed
Phillips, S. E.; Mehrens, William A. – Applied Measurement in Education, 1988
The impact of different elementary school curricula (grades three and six) on standardized achievement test scores at the item and objective levels was studied, along with cross-curricular differences in item factor loadings. Differences among textbooks within a district were specific, generally small, and not significant. (TJH)
Descriptors: Academic Achievement, Achievement Tests, Curriculum Evaluation, Elementary Education
Peer reviewed
Byrne, Barbara M.; Schneider, Barry H. – Applied Measurement in Education, 1988
For 241 normal and 132 gifted fifth graders and 113 normal and 117 gifted seventh graders in Ottawa (Ontario), exploratory and confirmatory factor analyses investigated the factorial validity of the Perceived Competence Scale for Children (PCSC). Overall, the PCSC demonstrated sound psychometric properties. (SLD)
Descriptors: Ability, Academically Gifted, Age Differences, Competence