Hung-Yu Huang – Educational and Psychological Measurement, 2025
The use of discrete categorical formats to assess psychological traits has a long-standing tradition that is deeply embedded in item response theory models. The increasing prevalence and endorsement of computer- or web-based testing has led to greater focus on continuous response formats, which offer numerous advantages in both respondent…
Descriptors: Response Style (Tests), Psychological Characteristics, Item Response Theory, Test Reliability
Han, Yuting; Zhang, Jihong; Jiang, Zhehan; Shi, Dexin – Educational and Psychological Measurement, 2023
In the literature of modern psychometric modeling, mostly related to item response theory (IRT), model fit is evaluated through well-known indices, such as X², M2, and the root mean square error of approximation (RMSEA) for absolute assessments, as well as the Akaike information criterion (AIC), consistent AIC (CAIC), and Bayesian…
Descriptors: Goodness of Fit, Psychometrics, Error of Measurement, Item Response Theory
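The information criteria named in this abstract reduce to simple formulas in the model's log-likelihood, number of parameters, and sample size. A minimal sketch for reference (function names and the toy numbers are illustrative, not from the cited study):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2*log-likelihood."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k*ln(n) - 2*log-likelihood."""
    return k * math.log(n) - 2 * log_lik

# Toy comparison: a model with log-likelihood -520.0, 6 parameters, n = 300
print(aic(-520.0, 6))       # 1052.0
print(bic(-520.0, 6, 300))  # ~1074.22
```

Lower values indicate a better trade-off between fit and parsimony; BIC penalizes extra parameters more heavily as the sample grows.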
Weicong Lyu; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Data harmonization is an emerging approach to strategically combining data from multiple independent studies, enabling researchers to address new research questions that cannot be answered by any single contributing study. A fundamental psychometric challenge for data harmonization is to create commensurate measures for the constructs of interest across…
Descriptors: Data Analysis, Test Items, Psychometrics, Item Response Theory
Stefanie A. Wind; Yangmeng Xu – Educational Assessment, 2024
We explored three approaches to resolving or re-scoring constructed-response items in mixed-format assessments: rater agreement, person fit, and targeted double scoring (TDS). We used a simulation study to consider how the three approaches impact the psychometric properties of student achievement estimates, with an emphasis on person fit. We found…
Descriptors: Interrater Reliability, Error of Measurement, Evaluation Methods, Examiners
Turner, Kyle T.; Engelhard, George, Jr. – Measurement: Interdisciplinary Research and Perspectives, 2023
The purpose of this study is to illustrate the use of functional data analysis (FDA) as a general methodology for analyzing person response functions (PRFs). Applications of FDA to psychometrics have included the estimation of item response functions and latent distributions, as well as differential item functioning. Although FDA has been…
Descriptors: Data Analysis, Item Response Theory, Psychometrics, Statistical Distributions
Xue Zhang; Chun Wang – Grantee Submission, 2022
Item-level fit analysis not only serves as a complementary check to global fit analysis; it is also essential in scale development, because the fit results guide item revision and/or deletion (Liu & Maydeu-Olivares, 2014). During data collection, missing responses are likely to occur for various reasons. Chi-square-based item fit…
Descriptors: Goodness of Fit, Item Response Theory, Scores, Test Length
Chen, Chia-Wen; Andersson, Björn; Zhu, Jinxin – Journal of Educational Measurement, 2023
The certainty of response index (CRI) measures respondents' confidence level when answering an item. In conjunction with the answers to the items, previous studies have used descriptive statistics and arbitrary thresholds to identify student knowledge profiles with the CRIs. Whereas this approach overlooked the measurement error of the observed…
Descriptors: Item Response Theory, Factor Analysis, Psychometrics, Test Items
Balbuena, Sherwin E.; Maligalig, Dalisay S.; Quimbo, Maria Ana T. – Online Submission, 2021
The University Student Depression Inventory (USDI; Khawaja and Bryden 2006) is a 30-item scale used to measure depressive symptoms among university students. Its psychometric properties have been widely investigated under classical test theory (CTT). This study explored the application of the polytomous Rasch partial credit model…
Descriptors: Item Response Theory, Likert Scales, College Students, Depression (Psychology)
Lee, Won-Chan; Kim, Stella Y.; Choi, Jiwon; Kang, Yujin – Journal of Educational Measurement, 2020
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and…
Descriptors: Raw Scores, Item Response Theory, Test Format, Multiple Choice Tests
Kopp, Jason P.; Jones, Andrew T. – Applied Measurement in Education, 2020
Traditional psychometric guidelines suggest that at least several hundred respondents are needed to obtain accurate parameter estimates under the Rasch model. However, recent research indicates that Rasch equating yields accurate parameter estimates with sample sizes as small as 25. Item parameter drift under the Rasch model has been…
Descriptors: Item Response Theory, Psychometrics, Sample Size, Sampling
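For context, the dichotomous Rasch model referenced in this abstract gives the probability of a correct response as a logistic function of the gap between person ability and item difficulty. A minimal sketch (parameter values are illustrative only):

```python
import math

def rasch_prob(theta, b):
    """P(correct) under the dichotomous Rasch model:
    1 / (1 + exp(-(theta - b))), where theta is person ability
    and b is item difficulty, both on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_prob(0.0, 0.0))  # 0.5 -- ability equals item difficulty
print(rasch_prob(1.0, 0.0))  # ~0.731 -- one logit above difficulty
```

Because only the difference theta - b enters the model, all items share a common slope, which is what makes small-sample equating of the kind studied here feasible.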
Factor Structure and Psychometric Properties of the Digital Stress Scale in a Chinese College Sample
Chunlei Gao; Mingqing Jian; Ailin Yuan – SAGE Open, 2024
The Digital Stress Scale (DSS) is used to measure digital stress, which is the perceived stress and anxiety associated with social media use. In this study, the Chinese version of the DSS was validated using a sample of 721 Chinese college students, 321 males and 400 females (KMO = 0.923; Bartlett = 5,058.492, p < 0.001). Confirmatory factor…
Descriptors: Factor Structure, Factor Analysis, Psychometrics, Anxiety
Sahin, Melek Gulsah – International Journal of Assessment Tools in Education, 2020
Computer Adaptive Multistage Testing (ca-MST), which takes advantage of computer technology and adaptive test forms, is widely used and is now a popular topic in assessment and evaluation. This study aims to analyze the effect of different panel designs, module lengths, and different sequences of a parameter value across stages and change in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Response Theory
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
This study presents new models for item response functions (IRFs) in the framework of the D-scoring method (DSM), which is gaining attention in the field of educational and psychological measurement and large-scale assessments. In previous work on DSM, the IRFs of binary items were estimated using a logistic regression model (LRM). However, the LRM…
Descriptors: Item Response Theory, Scoring, True Scores, Scaling
Oranje, Andreas; Kolstad, Andrew – Journal of Educational and Behavioral Statistics, 2019
The design and psychometric methodology of the National Assessment of Educational Progress (NAEP) is constantly evolving to meet the changing interests and demands stemming from a rapidly shifting educational landscape. NAEP has been built on strong research foundations that include conducting extensive evaluations and comparisons before new…
Descriptors: National Competency Tests, Psychometrics, Statistical Analysis, Computation