Publication Date
  In 2025: 1
  Since 2024: 1
  Since 2021 (last 5 years): 4
  Since 2016 (last 10 years): 9
  Since 2006 (last 20 years): 24

Descriptor
  Psychometrics: 28
  Scores: 28
  Simulation: 28
  Item Response Theory: 13
  Test Items: 11
  Evaluation Methods: 8
  Models: 7
  Comparative Analysis: 6
  Error of Measurement: 6
  Goodness of Fit: 6
  Test Bias: 5
Publication Type
  Journal Articles: 20
  Reports - Research: 15
  Reports - Evaluative: 8
  Dissertations/Theses -…: 4
  Speeches/Meeting Papers: 3
  Reports - Descriptive: 1
Education Level
  Higher Education: 2
  Postsecondary Education: 2
  Two Year Colleges: 1

Audience
  Researchers: 2

Location
  Saudi Arabia: 1
Assessments and Surveys
  Cognitive Abilities Test: 1
  Cognitive Assessment System: 1
  Graduate Record Examinations: 1
  Minnesota Multiphasic…: 1
  Wechsler Individual…: 1
  Wechsler Intelligence Scale…: 1
Chia-Lin Tsai; Stefanie Wind; Samantha Estrada – Measurement: Interdisciplinary Research and Perspectives, 2025
Researchers who work with ordinal rating scales sometimes encounter situations where the scale categories do not function in the intended or expected way. For example, participants' use of scale categories may result in an empirical difficulty ordering for the categories that does not match what was intended. Likewise, the level of distinction…
Descriptors: Rating Scales, Item Response Theory, Psychometrics, Self Efficacy
Xue Zhang; Chun Wang – Grantee Submission, 2022
Item-level fit analysis not only serves as a complementary check to global fit analysis but is also essential in scale development, because the fit results guide item revision and/or deletion (Liu & Maydeu-Olivares, 2014). During data collection, missing responses are likely to occur for various reasons. Chi-square-based item fit…
Descriptors: Goodness of Fit, Item Response Theory, Scores, Test Length
Curran, Patrick J.; Georgeson, A. R.; Bauer, Daniel J.; Hussong, Andrea M. – International Journal of Behavioral Development, 2021
Conducting valid and reliable empirical research in the prevention sciences is an inherently difficult and challenging task. Chief among these challenges is the need to obtain numerical scores of underlying theoretical constructs for use in subsequent analysis. This challenge is further exacerbated by the increasingly common need to consider multiple…
Descriptors: Psychometrics, Scoring, Prevention, Scores
Kopp, Jason P.; Jones, Andrew T. – Applied Measurement in Education, 2020
Traditional psychometric guidelines suggest that at least several hundred respondents are needed to obtain accurate parameter estimates under the Rasch model. However, recent research indicates that Rasch equating results in accurate parameter estimates with sample sizes as small as 25. Item parameter drift under the Rasch model has been…
Descriptors: Item Response Theory, Psychometrics, Sample Size, Sampling
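The small-sample Rasch setting the abstract describes can be sketched in a few lines. The sketch below is illustrative only: the sample size of 25 echoes the abstract, but the ability distribution, difficulty spacing, and the crude proportion-correct estimator are assumptions, not the study's method (real Rasch software uses conditional or joint maximum likelihood).

```python
import math
import random

random.seed(42)
N_PERSONS, N_ITEMS = 25, 10

# Person abilities and evenly spaced item difficulties (invented values).
thetas = [random.gauss(0.0, 1.0) for _ in range(N_PERSONS)]
difficulties = [-1.5 + 3.0 * j / (N_ITEMS - 1) for j in range(N_ITEMS)]

def p_correct(theta, b):
    """Rasch probability of a correct response: logistic in (theta - b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Simulate a 25 x 10 dichotomous response matrix.
responses = [[1 if random.random() < p_correct(t, b) else 0 for b in difficulties]
             for t in thetas]

# Crude difficulty estimates: negative logit of each item's proportion correct,
# with a continuity correction so extreme items stay finite, centered at zero
# (the usual Rasch identification constraint).
props = [sum(col) / N_PERSONS for col in zip(*responses)]
props = [min(max(p, 0.5 / N_PERSONS), 1.0 - 0.5 / N_PERSONS) for p in props]
est = [math.log((1.0 - p) / p) for p in props]
mean_est = sum(est) / len(est)
est = [e - mean_est for e in est]
```

Repeating such a simulation many times and comparing the recovered difficulties against the generating values is the basic logic behind the sample-size and drift claims in the study above.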
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using the simulation study protocol proposed by Han (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
Guo, Hongwen; Zu, Jiyun; Kyllonen, Patrick – ETS Research Report Series, 2018
For a multiple-choice test under development or redesign, it is important to choose the optimal number of options per item so that the test possesses the desired psychometric properties. On the basis of available data for a multiple-choice assessment with 8 options, we evaluated the effects of changing the number of options on test properties…
Descriptors: Multiple Choice Tests, Test Items, Simulation, Test Construction
Leventhal, Brian – ProQuest LLC, 2017
More robust and rigorous psychometric models, such as multidimensional Item Response Theory models, have been advocated for survey applications. However, item responses may be influenced by construct-irrelevant variance factors such as preferences for extreme response options. Through empirical and simulation methods, this study evaluates the use…
Descriptors: Psychometrics, Item Response Theory, Simulation, Models
Steinberg, Jonathan; Andrews-Todd, Jessica; Forsyth, Carolyn; Chamberlain, John; Horwitz, Paul; Koon, Al; Rupp, Andre; McCulla, Laura – ETS Research Report Series, 2020
This study discusses the development of a basic electronics knowledge (BEK) assessment as a pretest activity for undergraduate students in engineering and related fields. The 28 BEK items represent 12 key concepts, including properties of serial circuits, knowledge of electrical laws (e.g., Kirchhoff's and Ohm's laws), and properties of digital…
Descriptors: Knowledge Level, Skill Development, Psychometrics, Student Evaluation
Stanley, Leanne M.; Edwards, Michael C. – Educational and Psychological Measurement, 2016
The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…
Descriptors: Test Reliability, Goodness of Fit, Scores, Patients
Shin, Hyo Jeong – ProQuest LLC, 2015
This dissertation comprises three papers that propose and apply psychometric models to deal with complexities and challenges in large-scale assessments, focusing on modeling rater effects and complex learning progressions. In particular, the three papers investigate extensions and applications of multilevel and multidimensional item response…
Descriptors: Item Response Theory, Psychometrics, Models, Measurement
Hou, Likun; de la Torre, Jimmy; Nandakumar, Ratna – Journal of Educational Measurement, 2014
Analyzing examinees' responses using cognitive diagnostic models (CDMs) has the advantage of providing diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this article, the Wald test is proposed to examine DIF in the context of CDMs. This study…
Descriptors: Test Bias, Models, Simulation, Error Patterns
Tijmstra, Jesper; Hessen, David J.; van der Heijden, Peter G. M.; Sijtsma, Klaas – Psychometrika, 2013
Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores,…
Descriptors: Item Response Theory, Statistical Inference, Probability, Psychometrics
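The manifest-monotonicity idea in the abstract above has a simple observable form: for each item, the proportion of positive responses should be non-decreasing across groups defined by the rest score (the total on the remaining items). The sketch below is a minimal illustration under that definition; the data and function names are invented, not the paper's procedure.

```python
def rest_score_curve(data, item):
    """Map each rest score (sum of the other items) to the proportion
    of positive responses on `item` within that rest-score group."""
    groups = {}
    for row in data:
        rest = sum(row) - row[item]
        n, k = groups.get(rest, (0, 0))
        groups[rest] = (n + 1, k + row[item])
    return {r: k / n for r, (n, k) in sorted(groups.items())}

def is_manifest_monotone(curve, tol=0.0):
    """True if the proportions never drop by more than `tol`
    as the rest score increases."""
    vals = list(curve.values())
    return all(b >= a - tol for a, b in zip(vals, vals[1:]))

# Tiny made-up dataset: six persons, four dichotomous items.
data = [
    [0, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [1, 1, 1, 1],
]
curve = rest_score_curve(data, 0)   # item 0's proportions by rest score
```

Real monotonicity checks (e.g., in Mokken scale analysis) add sampling-error corrections on top of this raw comparison; `tol` stands in for that here.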
Phillips, Shane Michael – ProQuest LLC, 2012
Propensity score matching is a relatively new technique used in observational studies to approximate data that have been randomly assigned to treatment. This technique assimilates the values of several covariates into a single propensity score that is used as a matching variable to create similar groups. This dissertation comprises two separate…
Descriptors: Statistical Analysis, Educational Research, Simulation, Observation
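The matching step described in the abstract above can be sketched as greedy 1:1 nearest-neighbor matching on propensity scores with a caliper. Everything below is a hypothetical illustration: the scores are invented (in practice they would be estimated from covariates, e.g., by logistic regression), and this greedy rule is only one of several matching algorithms.

```python
def greedy_match(treated, control, caliper=0.1):
    """Pair each treated unit's propensity score with the closest
    unused control score, skipping pairs farther apart than `caliper`.
    Returns (treated_index, control_index) pairs."""
    pairs = []
    available = list(enumerate(control))
    for ti, t in enumerate(treated):
        if not available:
            break
        ci, c = min(available, key=lambda ic: abs(ic[1] - t))
        if abs(c - t) <= caliper:
            pairs.append((ti, ci))
            available.remove((ci, c))  # each control is used at most once
    return pairs

# Invented propensity scores for three treated and four control units.
treated = [0.62, 0.35, 0.80]
control = [0.33, 0.61, 0.90, 0.58]
pairs = greedy_match(treated, control)
```

Because matching is greedy, the result can depend on the order of the treated units; optimal matching avoids that at extra computational cost.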
Gibson, David; Clarke-Midura, Jody – International Association for Development of the Information Society, 2013
The rise of digital game and simulation-based learning applications has led to new approaches in educational measurement that take account of patterns in time, high resolution paths of action, and clusters of virtual performance artifacts. The new approaches, which depart from traditional statistical analyses, include data mining, machine…
Descriptors: Psychometrics, Educational Games, Educational Research, Data Collection
Molenaar, Dylan; Dolan, Conor V.; de Boeck, Paul – Psychometrika, 2012
The Graded Response Model (GRM; Samejima, "Estimation of ability using a response pattern of graded scores," Psychometric Monograph No. 17, Richmond, VA: The Psychometric Society, 1969) can be derived by assuming a linear regression of a continuous variable, Z, on the trait, θ, to underlie the ordinal item scores (Takane & de Leeuw in…
Descriptors: Simulation, Regression (Statistics), Psychometrics, Item Response Theory
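In the GRM referenced above, the probability of each ordinal category is the difference between adjacent cumulative ("boundary") logistic curves. The function below is a minimal sketch of that standard construction; the discrimination and threshold values are illustrative assumptions.

```python
import math

def grm_category_probs(theta, a, thresholds):
    """P(X = k | theta) for k = 0..len(thresholds) under Samejima's GRM.

    `a` is the item discrimination; `thresholds` are ordered category
    boundaries. Category probabilities are differences of adjacent
    cumulative curves P(X >= k)."""
    cum = ([1.0]
           + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
           + [0.0])
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Four-category item with discrimination 1.5 and symmetric thresholds.
probs = grm_category_probs(0.0, 1.5, [-1.0, 0.0, 1.0])
```

With symmetric thresholds and theta at their center, the outer and inner category probabilities pair up symmetrically, which is a quick sanity check on the implementation.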