Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 11
Since 2006 (last 20 years): 19
Descriptor
Bayesian Statistics: 20
Simulation: 20
Item Response Theory: 10
Sample Size: 6
Test Items: 6
Goodness of Fit: 5
Models: 5
Monte Carlo Methods: 5
Accuracy: 4
Data Analysis: 4
Evaluation Methods: 4
Source
Educational and Psychological Measurement: 20
Author
Fujimoto, Ken A.: 2
Huang, Hung-Yu: 2
Beauducel, André: 1
Beretvas, S. Natasha: 1
Chan, Darius K.-S.: 1
Cheung, Shu Fai: 1
Dardick, William R.: 1
Fang, Guoliang: 1
Glasnapp, Douglas R.: 1
Hayashi, Kentaro: 1
He, Wei: 1
Publication Type
Journal Articles: 20
Reports - Research: 19
Reports - Evaluative: 1
Education Level
Early Childhood Education: 1
Junior High Schools: 1
Middle Schools: 1
Preschool Education: 1
Secondary Education: 1
Location
Taiwan: 1
Assessments and Surveys
Graduate Record Examinations: 1
Wechsler Adult Intelligence…: 1
James Ohisei Uanhoro – Educational and Psychological Measurement, 2024
Accounting for model misspecification in Bayesian structural equation models is an active area of research. We present a uniquely Bayesian approach to misspecification that models the degree of misspecification as a parameter, one akin to the correlation root mean squared residual. The misspecification parameter can be interpreted on its…
Descriptors: Bayesian Statistics, Structural Equation Models, Simulation, Statistical Inference
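For context on the comparison drawn above: the correlation root mean squared residual summarizes misfit as the root mean square of the residual correlations. One common definition (a general formula, not taken from this article), for p observed variables with observed correlations r_ij and model-implied correlations rho-hat_ij, is

$$\mathrm{CRMR} = \sqrt{\frac{2}{p(p-1)} \sum_{i<j} \left( r_{ij} - \hat{\rho}_{ij} \right)^2 }.$$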
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2022
In the context of Bayesian factor analysis, it is possible to compute plausible values, which might be used as covariates or predictors or to provide individual scores for the Bayesian latent variables. Previous simulation studies ascertained the validity of mean plausible values by the mean squared difference of the mean plausible values and the…
Descriptors: Bayesian Statistics, Factor Analysis, Prediction, Simulation
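A minimal sketch, under assumed array shapes and names of my own choosing (not the authors' code), of the two quantities the abstract mentions: mean plausible values and their mean squared difference from a set of reference scores.

```python
import numpy as np

def mean_plausible_values(plausible_values):
    """Average posterior draws of the latent variable per person.
    `plausible_values` is assumed to have shape (n_draws, n_persons)."""
    return np.asarray(plausible_values).mean(axis=0)

def mean_squared_difference(mean_pv, reference_scores):
    """Mean squared difference between mean plausible values and a set of
    reference scores (e.g., factor scores), the criterion named above."""
    mean_pv = np.asarray(mean_pv)
    reference_scores = np.asarray(reference_scores)
    return float(np.mean((mean_pv - reference_scores) ** 2))

# illustrative use with simulated draws for 5 persons
rng = np.random.default_rng(0)
true_scores = rng.normal(size=5)
draws = true_scores + rng.normal(scale=0.3, size=(200, 5))
print(mean_squared_difference(mean_plausible_values(draws), true_scores))
```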
Fujimoto, Ken A.; Neugebauer, Sabina R. – Educational and Psychological Measurement, 2020
Although item response theory (IRT) models such as the bifactor, two-tier, and between-item-dimensionality IRT models have been devised to confirm complex dimensional structures in educational and psychological data, they can be challenging to use in practice. The reason is that these models are multidimensional IRT (MIRT) models and thus are…
Descriptors: Bayesian Statistics, Item Response Theory, Sample Size, Factor Structure
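As a reference point for the bifactor structure named above (the general form, not this article's specific parameterization), a bifactor two-parameter logistic item response function can be written as

$$P(y_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\!\left[ -\left( a_j^{g}\,\theta_i^{g} + a_j^{s}\,\theta_i^{s(j)} + d_j \right) \right]},$$

where theta_i^g is the general trait, theta_i^{s(j)} is the specific trait associated with item j, a_j^g and a_j^s are the corresponding discriminations, and d_j is the item intercept.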
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
Forced-choice (FC) item formats used for noncognitive tests typically present a set of response options that measure different traits and instruct respondents to judge among these options according to their preference, in order to control the response biases commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
Fujimoto, Ken A. – Educational and Psychological Measurement, 2019
Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that controls for method effects that stem from two method sources, one of which functions differently across the…
Descriptors: Bayesian Statistics, Item Response Theory, Psychometrics, Models
Liang, Xinya; Kamata, Akihito; Li, Ji – Educational and Psychological Measurement, 2020
One important issue in Bayesian estimation is the determination of an effective informative prior. In hierarchical Bayes models, the uncertainty of hyperparameters in a prior can be further modeled via their own priors, namely, hyper priors. This study introduces a framework to construct hyper priors for both the mean and the variance…
Descriptors: Bayesian Statistics, Randomized Controlled Trials, Effect Size, Sampling
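A minimal sketch of the hyper-prior idea, not the authors' framework: the mean and spread of a normal prior on group effects are themselves given priors. The toy data, model, and variable names are illustrative assumptions, written with PyMC.

```python
import numpy as np
import pymc as pm

# toy data: 20 observations in each of J groups (illustrative only)
rng = np.random.default_rng(1)
J = 8
group = np.repeat(np.arange(J), 20)
y = rng.normal(loc=0.3, scale=1.0, size=group.size)

with pm.Model():
    # hyper priors: the mean and standard deviation of the prior on the
    # group effects are themselves modeled with their own priors
    mu0 = pm.Normal("mu0", mu=0.0, sigma=1.0)
    tau0 = pm.HalfNormal("tau0", sigma=1.0)

    # prior on group effects, governed by the hyperparameters above
    theta = pm.Normal("theta", mu=mu0, sigma=tau0, shape=J)

    # likelihood
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("y_obs", mu=theta[group], sigma=sigma, observed=y)

    idata = pm.sample(draws=1000, tune=1000, chains=2, random_seed=1)
```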
Himelfarb, Igor; Marcoulides, Katerina M.; Fang, Guoliang; Shotts, Bruce L. – Educational and Psychological Measurement, 2020
The chiropractic clinical competency examination uses groups of items that are integrated by a common case vignette. The nature of the vignette items violates the assumption of local independence for items nested within a vignette. This study examines via simulation a new algorithmic approach for addressing the local independence violation problem…
Descriptors: Allied Health Occupations Education, Allied Health Personnel, Competence, Tests
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
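The BRB IRT model itself is specific to this article, but the testlet framework it builds on adds a person-by-block random effect to the item response function; as a general reference point,

$$P(y_{ijb} = 1 \mid \theta_i) = \frac{1}{1 + \exp\!\left[ -a_j\left( \theta_i - b_j + \gamma_{ib} \right) \right]},$$

where gamma_ib ~ N(0, sigma_b^2) captures the local dependence among items sharing block b.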
Hoofs, Huub; van de Schoot, Rens; Jansen, Nicole W. H.; Kant, IJmert – Educational and Psychological Measurement, 2018
Bayesian confirmatory factor analysis (CFA) offers an alternative to frequentist CFA based on, for example, maximum likelihood estimation for the assessment of reliability and validity of educational and psychological measures. For increasing sample sizes, however, the applicability of current fit statistics evaluating model fit within Bayesian…
Descriptors: Goodness of Fit, Bayesian Statistics, Factor Analysis, Sample Size
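One fit statistic commonly reported in Bayesian CFA is the posterior predictive p-value (PPP). A minimal sketch of how it is computed from posterior draws, with made-up discrepancy values standing in for a fitted model's chi-square discrepancies:

```python
import numpy as np

def posterior_predictive_p(discrepancy_obs, discrepancy_rep):
    """Posterior predictive p-value: the proportion of posterior draws for
    which the discrepancy of replicated data meets or exceeds that of the
    observed data; values near .5 suggest adequate fit."""
    discrepancy_obs = np.asarray(discrepancy_obs)
    discrepancy_rep = np.asarray(discrepancy_rep)
    return float(np.mean(discrepancy_rep >= discrepancy_obs))

# illustrative use with made-up chi-square-type discrepancies per draw
rng = np.random.default_rng(0)
d_obs = rng.chisquare(df=24, size=2000)  # discrepancy of observed data per draw
d_rep = rng.chisquare(df=26, size=2000)  # discrepancy of replicated data per draw
print(posterior_predictive_p(d_obs, d_rep))
```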
Dardick, William R.; Mislevy, Robert J. – Educational and Psychological Measurement, 2016
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Descriptors: Bayesian Statistics, Probability, Data Analysis, Item Response Theory
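Not the authors' mixture model, only the Bayes-rule step behind the posterior class probabilities it rests on: a sketch for a two-class mixture in which one class follows a Rasch model and the unscalable class responds at random, with all parameter values fixed as illustrative assumptions.

```python
import numpy as np

def rasch_loglik(y, theta, b):
    """Log-likelihood of a 0/1 response pattern y under a Rasch model with
    person ability theta and item difficulties b."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def scalable_class_posterior(y, theta, b, pi_scalable=0.9):
    """Posterior probability that the pattern belongs to the scalable (Rasch)
    class rather than an unscalable class that responds at random."""
    log_num = np.log(pi_scalable) + rasch_loglik(y, theta, b)
    log_other = np.log(1.0 - pi_scalable) + len(y) * np.log(0.5)
    return float(np.exp(log_num - np.logaddexp(log_num, log_other)))

# illustrative: an aberrant pattern that misses easy items and passes hard ones
b = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1])
print(scalable_class_posterior(y, theta=0.0, b=b))
```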
Park, Jungkyu; Yu, Hsiu-Ting – Educational and Psychological Measurement, 2016
The multilevel latent class model (MLCM) is a multilevel extension of a latent class model (LCM) that is used to analyze nested data structures. The nonparametric version of an MLCM assumes a discrete latent variable at a higher level of the nesting structure to account for the dependency among observations nested within a higher-level unit. In…
Descriptors: Hierarchical Linear Modeling, Nonparametric Statistics, Data Analysis, Simulation
Liu, Min; Lin, Tsung-I – Educational and Psychological Measurement, 2014
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
Descriptors: Regression (Statistics), Evaluation Methods, Indexes, Models
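The descriptors above point to indexes used for deciding the number of classes; information criteria such as the BIC are standard examples (a general formula, not a claim about which indexes this article examines). For a model with maximized likelihood L-hat, k free parameters, and n observations,

$$\mathrm{BIC} = -2 \ln \hat{L} + k \ln n,$$

with smaller values preferred when comparing candidate numbers of classes.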
Lee, HwaYoung; Beretvas, S. Natasha – Educational and Psychological Measurement, 2014
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
Descriptors: Item Analysis, Factor Structure, Bayesian Statistics, Goodness of Fit
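For reference, the Mantel-Haenszel procedure named above estimates a common odds ratio across score strata k:

$$\hat{\alpha}_{\mathrm{MH}} = \frac{\sum_k A_k D_k / N_k}{\sum_k B_k C_k / N_k},$$

where A_k and B_k are the numbers of reference-group examinees answering the studied item correctly and incorrectly, C_k and D_k are the focal-group counterparts, and N_k is the stratum total. It is often reported on the ETS delta scale as MH D-DIF = -2.35 ln(alpha-hat_MH).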
Zhu, Xiaowen; Stone, Clement A. – Educational and Psychological Measurement, 2012
This study examined the relative effectiveness of Bayesian model comparison methods in selecting an appropriate graded response (GR) model for performance assessment applications. Three popular methods were considered: deviance information criterion (DIC), conditional predictive ordinate (CPO), and posterior predictive model checking (PPMC). Using…
Descriptors: Bayesian Statistics, Item Response Theory, Comparative Analysis, Models
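For reference, the deviance information criterion mentioned above is defined as

$$\mathrm{DIC} = \bar{D} + p_D, \qquad p_D = \bar{D} - D(\bar{\theta}),$$

where D-bar is the posterior mean of the deviance, D(theta-bar) is the deviance at the posterior mean of the parameters, and p_D is the effective number of parameters; smaller DIC values indicate the preferred model.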
He, Wei; Wolfe, Edward W. – Educational and Psychological Measurement, 2012
In administration of individually administered intelligence tests, items are commonly presented in a sequence of increasing difficulty, and test administration is terminated after a predetermined number of incorrect answers. This practice produces stochastically censored data, a form of nonignorable missing data. By manipulating four factors…
Descriptors: Individual Testing, Intelligence Tests, Test Items, Test Length
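A minimal sketch of how such a discontinue rule produces nonignorable missingness, assuming a Rasch-type response model and a stop rule based on consecutive errors (one common form of discontinue rule, not necessarily the rule manipulated in the study):

```python
import numpy as np

def administer_with_discontinue(theta, difficulties, stop_after=3, rng=None):
    """Simulate an individually administered test: items are given in order of
    increasing difficulty, and testing stops after `stop_after` consecutive
    incorrect answers; items never administered stay missing (np.nan)."""
    rng = rng if rng is not None else np.random.default_rng()
    difficulties = np.sort(np.asarray(difficulties, dtype=float))
    responses = np.full(difficulties.size, np.nan)
    consecutive_errors = 0
    for j, b in enumerate(difficulties):
        p_correct = 1.0 / (1.0 + np.exp(-(theta - b)))  # Rasch-type item model
        correct = rng.random() < p_correct
        responses[j] = float(correct)
        consecutive_errors = 0 if correct else consecutive_errors + 1
        if consecutive_errors >= stop_after:
            break
    return responses

print(administer_with_discontinue(theta=0.0,
                                  difficulties=np.linspace(-2, 3, 12),
                                  rng=np.random.default_rng(2)))
```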