Weicong Lyu; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Data harmonization is an emerging approach to strategically combining data from multiple independent studies, enabling researchers to address new research questions that cannot be answered by any single contributing study. A fundamental psychometric challenge for data harmonization is to create commensurate measures for the constructs of interest across…
Descriptors: Data Analysis, Test Items, Psychometrics, Item Response Theory
Cooperman, Allison W.; Weiss, David J.; Wang, Chun – Educational and Psychological Measurement, 2022
Adaptive measurement of change (AMC) is a psychometric method for measuring intra-individual change on one or more latent traits across testing occasions. Three hypothesis tests--a Z test, likelihood ratio test, and score ratio index--have demonstrated desirable statistical properties in this context, including low false positive rates and high…
Descriptors: Error of Measurement, Psychometrics, Hypothesis Testing, Simulation
Karina Mostert; Clarisse van Rensburg; Reitumetse Machaba – Journal of Applied Research in Higher Education, 2024
Purpose: This study examined the psychometric properties of intention to drop out and study satisfaction measures for first-year South African students. The factorial validity, item bias, measurement invariance and reliability were tested. Design/methodology/approach: A cross-sectional design was used. For the study on intention to drop out, 1,820…
Descriptors: Intention, Potential Dropouts, Student Satisfaction, Test Items
Zhong Jian Chee; Anke M. Scheeren; Marieke de Vries – Autism: The International Journal of Research and Practice, 2024
Despite several psychometric advantages over the 50-item Autism Spectrum Quotient, an instrument used to measure autistic traits, the abridged AQ-28 and its cross-cultural validity have not been examined as extensively. Therefore, this study aimed to examine the factor structure and measurement invariance of the AQ-28 in 818 Dutch (M[subscript…
Descriptors: Autism Spectrum Disorders, Questionnaires, Factor Structure, Factor Analysis
Chen, Chia-Wen; Andersson, Björn; Zhu, Jinxin – Journal of Educational Measurement, 2023
The certainty of response index (CRI) measures respondents' confidence level when answering an item. In conjunction with the answers to the items, previous studies have used descriptive statistics and arbitrary thresholds to identify student knowledge profiles with the CRIs. Whereas this approach overlooked the measurement error of the observed…
Descriptors: Item Response Theory, Factor Analysis, Psychometrics, Test Items
Kopp, Jason P.; Jones, Andrew T. – Applied Measurement in Education, 2020
Traditional psychometric guidelines suggest that at least several hundred respondents are needed to obtain accurate parameter estimates under the Rasch model. However, recent research indicates that Rasch equating results in accurate parameter estimates with sample sizes as small as 25. Item parameter drift under the Rasch model has been…
Descriptors: Item Response Theory, Psychometrics, Sample Size, Sampling
Sahin, Melek Gulsah – International Journal of Assessment Tools in Education, 2020
Computer Adaptive Multistage Testing (ca-MST), which takes advantage of computer technology and adaptive test forms, is widely used and has become a popular topic in assessment and evaluation. This study aims to analyze the effect of different panel designs, module lengths, and different sequences of the a parameter value across stages and change in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Response Theory
Maïano, Christophe; Thibault, Isabelle; Dreiskämper, Dennis; Henning, Lena; Tietjens, Maike; Aimé, Annie – Measurement in Physical Education and Exercise Science, 2023
The present study sought to examine the psychometric properties of the French and German versions of the Physical Self-Concept Questionnaire for Elementary School Children-Revised (PSCQ-C-R). A sample of 519 children participated in this study. Of those, 197 were French-Canadian and 322 were German. Results support the factor validity and…
Descriptors: Elementary School Students, Self Concept, Human Body, Questionnaires
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
This study presents new models for item response functions (IRFs) in the framework of the D-scoring method (DSM) that is gaining attention in the field of educational and psychological measurement and largescale assessments. In a previous work on DSM, the IRFs of binary items were estimated using a logistic regression model (LRM). However, the LRM…
Descriptors: Item Response Theory, Scoring, True Scores, Scaling
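The logistic item response function mentioned in the abstract above can be illustrated with a minimal sketch. This is a plain two-parameter logistic (2PL) model with hypothetical parameter values, not Dimitrov's D-scoring variant:

```python
import math

def logistic_irf(theta: float, a: float = 1.0, b: float = 0.0) -> float:
    """Two-parameter logistic item response function: the probability
    of a correct response to a binary item, given examinee ability
    theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability equals the item's difficulty
# answers correctly with probability 0.5.
print(logistic_irf(theta=0.0, a=1.2, b=0.0))  # 0.5
```

Estimating a and b from response data (the step the abstract contrasts with the D-scoring approach) would typically be done with marginal maximum likelihood in an IRT package; the function above only evaluates the curve for given parameters.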
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han, K. T. (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
Oranje, Andreas; Kolstad, Andrew – Journal of Educational and Behavioral Statistics, 2019
The design and psychometric methodology of the National Assessment of Educational Progress (NAEP) is constantly evolving to meet the changing interests and demands stemming from a rapidly shifting educational landscape. NAEP has been built on strong research foundations that include conducting extensive evaluations and comparisons before new…
Descriptors: National Competency Tests, Psychometrics, Statistical Analysis, Computation
Magnus, Brooke E.; Thissen, David – Journal of Educational and Behavioral Statistics, 2017
Questionnaires that include items eliciting count responses are becoming increasingly common in psychology. This study proposes methodological techniques to overcome some of the challenges associated with analyzing multivariate item response data that exhibit zero inflation, maximum inflation, and heaping at preferred digits. The modeling…
Descriptors: Item Response Theory, Models, Multivariate Analysis, Questionnaires
Schoen, Robert C.; Yang, Xiaotong; Paek, Insu – Grantee Submission, 2018
This report provides evidence of the substantive and structural validity of the Knowledge for Teaching Elementary Fractions Test. Field-test data were gathered with a sample of 241 elementary educators, including teachers, administrators, and instructional support personnel, in spring 2017, as part of a larger study involving a multisite…
Descriptors: Psychometrics, Pedagogical Content Knowledge, Mathematics Tests, Mathematics Instruction
Noam, Gil G.; Allen, Patricia J.; Sonnert, Gerhard; Sadler, Philip M. – International Journal of Science Education, Part B: Communication and Public Engagement, 2020
There has been a growing need felt by practitioners, researchers, and evaluators to obtain a common measure of science engagement that can be used in different out-of-school time (OST) science learning settings. We report on the development and validation of a novel 10-item self-report instrument designed to measure, communicate, and ultimately…
Descriptors: Leisure Time, Elementary School Students, Middle School Students, After School Programs