Showing 1 to 15 of 44 results
Peer reviewed
Ken A. Fujimoto; Carl F. Falk – Educational and Psychological Measurement, 2024
Item response theory (IRT) models are often compared with respect to predictive performance to determine the dimensionality of rating scale data. However, such model comparisons could be biased toward nested-dimensionality IRT models (e.g., the bifactor model) when comparing those models with non-nested-dimensionality IRT models (e.g., a…
Descriptors: Item Response Theory, Rating Scales, Predictive Measurement, Bayesian Statistics
Peer reviewed
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2022
In the context of Bayesian factor analysis, it is possible to compute plausible values, which might be used as covariates or predictors or to provide individual scores for the Bayesian latent variables. Previous simulation studies ascertained the validity of mean plausible values by the mean squared difference of the mean plausible values and the…
Descriptors: Bayesian Statistics, Factor Analysis, Prediction, Simulation
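The idea behind mean plausible values can be sketched in a small simulation. This is purely illustrative, not the article's design: plausible values are modeled as draws from each respondent's posterior for the latent variable, and all numbers (posterior means, posterior SD, number of draws) are assumptions.

```python
import numpy as np

# Illustrative sketch: plausible values as posterior draws of a latent
# variable; their per-person mean approximates the posterior (factor
# score) mean. All quantities below are made-up assumptions.
rng = np.random.default_rng(1)
n, n_pv = 10_000, 20
posterior_mean = rng.normal(0.0, 0.9, n)      # assumed posterior means
posterior_sd = 0.45                           # assumed common posterior SD

# Draw n_pv plausible values per respondent and average them.
pv = posterior_mean[:, None] + rng.normal(0.0, posterior_sd, (n, n_pv))
mean_pv = pv.mean(axis=1)

# Mean squared difference between mean plausible values and the
# posterior means; in expectation this is posterior_sd**2 / n_pv.
mse = np.mean((mean_pv - posterior_mean) ** 2)
print(mse)
```

As the number of plausible values per respondent grows, the mean plausible value converges to the posterior mean, which is the kind of validity property the simulation studies cited above assess.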
Peer reviewed
Gonzalez, Oscar – Educational and Psychological Measurement, 2023
When scores are used to make decisions about respondents, it is of interest to estimate classification accuracy (CA), the probability of making a correct decision, and classification consistency (CC), the probability of making the same decision across two parallel administrations of the measure. Model-based estimates of CA and CC computed from the…
Descriptors: Classification, Accuracy, Intervals, Probability
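The two quantities can be sketched in a small simulation under a classical true-score model with a single pass/fail cutoff. This is an illustrative assumption-laden sketch, not the article's model-based estimator: the reliability and cutoff values are made up.

```python
import numpy as np

# Illustrative sketch: classification accuracy (CA) and classification
# consistency (CC) for one pass/fail cutoff under a classical
# true-score model. Reliability and cutoff are assumed, not from the
# article.
rng = np.random.default_rng(0)
n = 200_000
reliability = 0.8                 # assumed score reliability
cutoff = 0.5                      # assumed cutoff on the latent scale

true = rng.normal(0.0, 1.0, n)                    # true scores
err_sd = np.sqrt(1.0 / reliability - 1.0)         # error SD implied by reliability
x1 = true + rng.normal(0.0, err_sd, n)            # first administration
x2 = true + rng.normal(0.0, err_sd, n)            # parallel administration

ca = np.mean((x1 >= cutoff) == (true >= cutoff))  # correct decision rate
cc = np.mean((x1 >= cutoff) == (x2 >= cutoff))    # same decision across forms
print(f"CA = {ca:.3f}, CC = {cc:.3f}")
```

CA compares a fallible decision with the true classification, while CC compares two fallible decisions with each other, so CC is typically somewhat lower than CA.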
Peer reviewed
Tenko Raykov; Christine DiStefano; Lisa Calvocoressi – Educational and Psychological Measurement, 2024
This note demonstrates that the widely used Bayesian Information Criterion (BIC) need not be generally viewed as a routinely dependable index for model selection when the bifactor and second-order factor models are examined as rival means for data description and explanation. To this end, we use an empirically relevant setting with…
Descriptors: Bayesian Statistics, Models, Decision Making, Comparative Analysis
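The mechanics behind the note's concern are easy to see from the BIC formula itself: the penalty term scales with the parameter count, so a bifactor model's extra parameters can outweigh a likelihood advantage over a nested second-order model. The numbers below are illustrative assumptions, not results from the article.

```python
import math

def bic(loglik: float, n_params: int, n_obs: int) -> float:
    """Bayesian Information Criterion; lower values are preferred."""
    return n_params * math.log(n_obs) - 2.0 * loglik

# Made-up numbers: the bifactor model fits slightly better (higher
# log-likelihood) but spends more parameters than the second-order model.
bic_bifactor = bic(loglik=-5120.0, n_params=36, n_obs=500)
bic_second_order = bic(loglik=-5131.0, n_params=27, n_obs=500)
print(bic_bifactor, bic_second_order)
```

With these illustrative values the second-order model attains the lower BIC despite the worse fit, which is the kind of sensitivity to the penalty term that makes BIC-based selection between these rival models less than routine.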
Peer reviewed
James Ohisei Uanhoro – Educational and Psychological Measurement, 2024
Accounting for model misspecification in Bayesian structural equation models is an active area of research. We present a uniquely Bayesian approach to misspecification that models the degree of misspecification as a parameter--a parameter akin to the correlation root mean squared residual. The misspecification parameter can be interpreted on its…
Descriptors: Bayesian Statistics, Structural Equation Models, Simulation, Statistical Inference
Peer reviewed
Thompson, Yutian T.; Song, Hairong; Shi, Dexin; Liu, Zhengkui – Educational and Psychological Measurement, 2021
Conventional approaches for selecting a reference indicator (RI) could lead to misleading results in testing for measurement invariance (MI). Several newer quantitative methods have been available for more rigorous RI selection. However, it is still unknown how well these methods perform in terms of correctly identifying a truly invariant item to…
Descriptors: Measurement, Statistical Analysis, Selection, Comparative Analysis
Peer reviewed
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
Forced-choice (FC) item formats used in noncognitive tests typically present a set of response options that measure different traits and instruct respondents to judge among these options by preference, in order to control the response biases commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
Peer reviewed
Liang, Xinya – Educational and Psychological Measurement, 2020
Bayesian structural equation modeling (BSEM) is a flexible tool for the exploration and estimation of sparse factor loading structures; that is, most cross-loading entries are zero and only a few important cross-loadings are nonzero. The current investigation was focused on the BSEM with small-variance normal distribution priors (BSEM-N) for both…
Descriptors: Factor Structure, Bayesian Statistics, Structural Equation Models, Goodness of Fit
Peer reviewed
Lozano, José H.; Revuelta, Javier – Educational and Psychological Measurement, 2023
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and…
Descriptors: Bayesian Statistics, Learning Processes, Test Items, Item Analysis
Peer reviewed
Fujimoto, Ken A. – Educational and Psychological Measurement, 2019
Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that controls for when the method effects stem from two method sources in which one source functions differently across the…
Descriptors: Bayesian Statistics, Item Response Theory, Psychometrics, Models
Peer reviewed
List, Marit Kristine; Köller, Olaf; Nagy, Gabriel – Educational and Psychological Measurement, 2019
Tests administered in studies of student achievement often have a certain amount of not-reached items (NRIs). The propensity for NRIs may depend on the proficiency measured by the test and on additional covariates. This article proposes a semiparametric model to study such relationships. Our model extends Glas and Pimentel's item response theory…
Descriptors: Educational Assessment, Item Response Theory, Multivariate Analysis, Test Items
Peer reviewed
da Silva, Marcelo A.; Liu, Ren; Huggins-Manley, Anne C.; Bazán, Jorge L. – Educational and Psychological Measurement, 2019
Multidimensional item response theory (MIRT) models use data from individual item responses to estimate multiple latent traits of interest, making them useful in educational and psychological measurement, among other areas. When MIRT models are applied in practice, it is not uncommon to see that some items are designed to measure all latent traits…
Descriptors: Item Response Theory, Matrices, Models, Bayesian Statistics
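The item-trait design that MIRT models work from is commonly encoded as a Q-matrix: rows are items, columns are latent traits, and a 1 marks a trait the item is designed to measure. The matrix below is a hypothetical example, not one from the article.

```python
import numpy as np

# Hypothetical Q-matrix for a 5-item test measuring 2 latent traits.
# A 1 means the item is designed to measure that trait; item 5 is a
# "complex" item loading on both traits.
Q = np.array([
    [1, 0],
    [1, 0],
    [0, 1],
    [0, 1],
    [1, 1],   # item measuring all latent traits
])
print(Q.sum(axis=0))   # number of items measuring each trait
```

Items like the last row, designed to measure all latent traits, are the case the abstract refers to as not uncommon in applied MIRT work.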
Kara, Yusuf; Kamata, Akihito; Potgieter, Cornelis; Nese, Joseph F. T. – Educational and Psychological Measurement, 2020
Oral reading fluency (ORF), used by teachers and school districts across the country to screen and progress monitor at-risk readers, has been documented as a good indicator of reading comprehension and overall reading competence. In traditional ORF administration, students are given one minute to read a grade-level passage, after which the…
Descriptors: Oral Reading, Reading Fluency, Reading Rate, Accuracy
Peer reviewed
Marmolejo-Ramos, Fernando; Cousineau, Denis – Educational and Psychological Measurement, 2017
The number of articles showing dissatisfaction with the null hypothesis statistical testing (NHST) framework has been progressively increasing over the years. Alternatives to NHST have been proposed and the Bayesian approach seems to have achieved the highest amount of visibility. In this last part of the special issue, a few alternative…
Descriptors: Hypothesis Testing, Bayesian Statistics, Evaluation Methods, Statistical Inference
Peer reviewed
Yang, Yanyun; Xia, Yan – Educational and Psychological Measurement, 2019
When item scores are ordered categorical, categorical omega can be computed based on the parameter estimates from a factor analysis model using frequentist estimators such as diagonally weighted least squares. When the sample size is relatively small and thresholds are different across items, using diagonally weighted least squares can yield a…
Descriptors: Scores, Sample Size, Bayesian Statistics, Item Analysis
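As a simplified point of reference, plain coefficient omega can be computed from the standardized loadings of a one-factor model; categorical omega additionally uses the item thresholds from a polychoric-based analysis, which is omitted here. The loadings below are made up for illustration.

```python
import numpy as np

# Simplified sketch: coefficient omega from standardized loadings of a
# one-factor model (categorical omega would also use item thresholds).
# Loadings are hypothetical, not taken from the article.
loadings = np.array([0.7, 0.6, 0.8, 0.5])   # assumed standardized loadings
uniqueness = 1.0 - loadings**2              # residual variances

# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of residuals)
omega = loadings.sum()**2 / (loadings.sum()**2 + uniqueness.sum())
print(round(omega, 3))
```

The estimation issue the abstract raises concerns how the loading and threshold estimates feeding this formula behave under small samples, where a Bayesian estimator may be more stable than diagonally weighted least squares.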