Xiao, Leifeng; Hau, Kit-Tai – Applied Measurement in Education, 2023
We compared coefficient alpha with five alternatives (omega total, omega RT, omega h, GLB, and coefficient H) in two simulation studies. Results showed that for unidimensional scales, (a) all indices except omega h performed similarly well for most conditions; (b) alpha is still good; (c) GLB and coefficient H overestimated reliability with small…
Descriptors: Test Theory, Test Reliability, Factor Analysis, Test Length
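For reference, coefficient alpha can be computed directly from an item-score matrix. A minimal sketch (the function name and data layout are illustrative, not taken from the study):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

The omega coefficients compared in the study instead require a fitted factor model, which is why they behave differently once the unidimensionality assumption is relaxed.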
Anderson, Daniel; Kahn, Joshua D.; Tindal, Gerald – Applied Measurement in Education, 2017
Unidimensionality and local independence are two common assumptions of item response theory. The former implies that all items measure a common latent trait, while the latter implies that responses are independent, conditional on respondents' location on the latent trait. Yet, few tests are truly unidimensional. Unmodeled dimensions may result in…
Descriptors: Robustness (Statistics), Item Response Theory, Mathematics Tests, Grade 6
Kahraman, Nilufer; Brown, Crystal B. – Applied Measurement in Education, 2015
Psychometric models based on structural equation modeling framework are commonly used in many multiple-choice test settings to assess measurement invariance of test items across examinee subpopulations. The premise of the current article is that they may also be useful in the context of performance assessment tests to test measurement invariance…
Descriptors: Factor Analysis, Structural Equation Models, Medical Students, Performance Based Assessment
Oliveri, Maria; McCaffrey, Daniel; Ezzo, Chelsea; Holtzman, Steven – Applied Measurement in Education, 2017
The assessment of noncognitive traits is challenging due to possible response biases, "subjectivity" and "faking." Standardized third-party evaluations where an external evaluator rates an applicant on their strengths and weaknesses on various noncognitive traits are a promising alternative. However, accurate score-based…
Descriptors: Factor Analysis, Decision Making, College Admission, Likert Scales
Pokropek, Artur; Borgonovi, Francesca; McCormick, Carina – Applied Measurement in Education, 2017
Large-scale international assessments rely on indicators of the resources that students report having in their homes to capture the financial capital of their families. The scaling methodology currently used to develop the Programme for International Student Assessment (PISA) background indices is designed to maximize within-country comparability…
Descriptors: Foreign Countries, Achievement Tests, Secondary School Students, International Assessment
Davis, Laurie Laughlin; Kong, Xiaojing; McBride, Yuanyuan; Morrison, Kristin M. – Applied Measurement in Education, 2017
The definition of what it means to take a test online continues to evolve with the inclusion of a broader range of item types and a wide array of devices used by students to access test content. To assure the validity and reliability of test scores for all students, device comparability research should be conducted to evaluate the impact of…
Descriptors: Educational Technology, Technology Uses in Education, High School Students, Tests
Keller, Lisa A.; Keller, Robert R. – Applied Measurement in Education, 2015
Equating test forms is an essential activity in standardized testing, and its importance has increased under the accountability systems mandated by Adequate Yearly Progress. It is through equating that scores from different test forms become comparable, which allows for the tracking of changes in the performance of students from…
Descriptors: Item Response Theory, Rating Scales, Standardized Tests, Scoring Rubrics
Kahraman, Nilufer; De Champlain, Andre; Raymond, Mark – Applied Measurement in Education, 2012
Item-level information, such as difficulty and discrimination, is invaluable to test assembly, equating, and scoring practices. Estimating these parameters within the context of large-scale performance assessments is often hindered by the use of unbalanced designs for assigning examinees to tasks and raters because such designs result in very…
Descriptors: Performance Based Assessment, Medicine, Factor Analysis, Test Items
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats--the figural response (FR) and constructed response (CR) formats used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Randall, Jennifer; Engelhard, George, Jr. – Applied Measurement in Education, 2010
The psychometric properties and multigroup measurement invariance of scores across subgroups, items, and persons on the "Reading for Meaning" items from the Georgia Criterion Referenced Competency Test (CRCT) were assessed in a sample of 778 seventh-grade students. Specifically, we sought to determine the extent to which score-based…
Descriptors: Testing Accommodations, Test Items, Learning Disabilities, Factor Analysis
Cook, Linda; Eignor, Daniel; Sawaki, Yasuyo; Steinberg, Jonathan; Cline, Frederick – Applied Measurement in Education, 2010
This study compared the underlying factors measured by a state standards-based grade 4 English-Language Arts (ELA) assessment given to several groups of students. The focus of the research was to gather evidence regarding whether or not the tests measured the same construct or constructs for students without disabilities who took the test under…
Descriptors: Language Arts, Educational Assessment, Grade 4, State Standards
Willse, John T.; Goodman, Joshua T.; Allen, Nancy; Klaric, John – Applied Measurement in Education, 2008
The current research demonstrates the effectiveness of using structural equation modeling (SEM) for the investigation of subgroup differences with sparse data designs where not every student takes every item. Simulations were conducted that reflected missing data structures like those encountered in large survey assessment programs (e.g., National…
Descriptors: Structural Equation Models, Simulation, Item Response Theory, Factor Analysis
Finch, Holmes; Stage, Alan Kirk; Monahan, Patrick – Applied Measurement in Education, 2008
A primary assumption underlying several of the common methods for modeling item response data is unidimensionality, that is, that test items tap into only one latent trait. This assumption can be assessed in several ways, including nonlinear factor analysis and DETECT, a method based on the item conditional covariances. When multidimensionality is identified,…
Descriptors: Test Items, Factor Analysis, Item Response Theory, Comparative Analysis
Pomplun, Mark – Applied Measurement in Education, 2007
This study investigated the usefulness of the bifactor model in the investigation of score equivalence from computerized and paper-and-pencil formats of the same reading tests. Concerns about the equivalence of the paper-and-pencil and computerized formats were warranted because of the use of reading passages, computer unfamiliarity of primary…
Descriptors: Models, Reading Tests, Equated Scores, Computer Assisted Testing
Finch, Holmes; Monahan, Patrick – Applied Measurement in Education, 2008
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…
Descriptors: Monte Carlo Methods, Factor Analysis, Generalization, Methods
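The dimensionality-assessment idea behind parallel analysis can be illustrated with Horn's classical variant: retain a factor only if its observed eigenvalue exceeds what random data of the same shape would produce. This is a simplified sketch of that baseline method, not the authors' bootstrap MPA procedure (which resamples under a fitted nonlinear factor model rather than generating independent normal data):

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 200, seed: int = 0) -> int:
    """Horn's parallel analysis: number of eigenvalues of the observed
    correlation matrix exceeding the 95th percentile of eigenvalues
    from random normal data of the same (n, k) shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, k))
    for i in range(n_sims):
        rand = rng.standard_normal((n, k))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False)))[::-1]
    thresh = np.percentile(sims, 95, axis=0)  # positionwise random-data cutoffs
    return int(np.sum(obs > thresh))
```

A unidimensional test should yield exactly one eigenvalue above the random-data cutoffs; additional exceedances signal the unmodeled dimensions that motivate methods like MPA.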