Publication Date
In 2025 | 4 |
Since 2024 | 16 |
Since 2021 (last 5 years) | 74 |
Since 2016 (last 10 years) | 203 |
Since 2006 (last 20 years) | 414 |
Author
de la Torre, Jimmy | 11 |
Sinharay, Sandip | 7 |
Jiao, Hong | 6 |
Johnson, Matthew S. | 6 |
Mislevy, Robert J. | 6 |
Wang, Wen-Chung | 6 |
Cohen, Allan S. | 5 |
Douglas, Jeffrey A. | 5 |
Fox, Jean-Paul | 5 |
Griffiths, Thomas L. | 5 |
Levy, Roy | 5 |
Audience
Researchers | 6 |
Teachers | 5 |
Practitioners | 2 |
Students | 2 |
Location
Australia | 11 |
Taiwan | 8 |
Germany | 7 |
Italy | 5 |
North Carolina | 5 |
Turkey | 5 |
United Kingdom | 5 |
China | 4 |
Florida | 4 |
Hong Kong | 4 |
Netherlands | 4 |
Laws, Policies, & Programs
No Child Left Behind Act 2001 | 1 |
What Works Clearinghouse Rating
Meets WWC Standards without Reservations | 1 |
Meets WWC Standards with or without Reservations | 1 |

Shipley, Bill – Structural Equation Modeling, 2002
Describes a method for choosing rejection probabilities for the tests of independence that are used in constraint-based algorithms of exploratory path analysis. The method consists of generating a Markov or semi-Markov model from the equivalence class represented by a partial ancestral graph and then testing the d-separation implications. (SLD)
Descriptors: Algorithms, Markov Processes, Path Analysis
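
Constraint-based exploratory path analysis rests on tests of the conditional independencies (d-separation implications) a graph entails. The following minimal sketch, which is not Shipley's procedure, illustrates a single such test with a partial correlation; the simulated chain, variable names, and alpha level are illustrative assumptions.

```python
# A minimal sketch (not Shipley's algorithm): testing one d-separation
# implication, X _||_ Z | Y, with a partial-correlation test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500

# Simulate a causal chain X -> Y -> Z, which implies X _||_ Z | Y.
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.7 * y + rng.normal(size=n)

def partial_corr_test(a, b, given, n):
    """Test a _||_ b | given via the partial correlation and Fisher's z."""
    r_ab = np.corrcoef(a, b)[0, 1]
    r_ag = np.corrcoef(a, given)[0, 1]
    r_bg = np.corrcoef(b, given)[0, 1]
    r = (r_ab - r_ag * r_bg) / np.sqrt((1 - r_ag**2) * (1 - r_bg**2))
    # Fisher z-transform; one conditioning variable, so the df correction is n - 4.
    z_stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - 4)
    p = 2 * (1 - stats.norm.cdf(abs(z_stat)))
    return r, p

r, p = partial_corr_test(x, z, y, n)
print(f"partial corr(X,Z|Y) = {r:.3f}, p = {p:.3f}")
# A large p-value is consistent with the implied conditional independence;
# the rejection probability (alpha) used for such tests is what Shipley's method tunes.
```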

Danilowicz, Czeslaw; Balinski, Jaroslaw – Information Processing & Management, 2001
Considers how the order of documents in information retrieval responses is determined and introduces a method that uses a probabilistic model of a document set where documents are regarded as states of a Markov chain and where transition probabilities are directly proportional to similarities between documents. (Author/LRW)
Descriptors: Information Retrieval, Markov Processes, Models, Probability
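
The abstract above treats documents as states of a Markov chain with similarity-proportional transitions. The sketch below illustrates that idea with a made-up similarity matrix and ranks documents by the chain's stationary distribution; the similarity values and the ranking rule are illustrative assumptions, not the authors' exact procedure.

```python
# Documents as states of a Markov chain: transition probabilities are
# proportional to pairwise similarities; rank by the stationary distribution.
import numpy as np

# Symmetric document-similarity matrix (e.g., cosine similarities), made up here.
S = np.array([
    [1.0, 0.6, 0.1, 0.3],
    [0.6, 1.0, 0.2, 0.4],
    [0.1, 0.2, 1.0, 0.5],
    [0.3, 0.4, 0.5, 1.0],
])

# Row-normalize similarities into transition probabilities.
P = S / S.sum(axis=1, keepdims=True)

# Stationary distribution: the left eigenvector of P with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

ranking = np.argsort(-pi)
print("stationary distribution:", np.round(pi, 3))
print("document ranking (most to least central):", ranking)
```
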
Lockwood, J. R.; McCaffrey, Daniel F. – National Center on Performance Incentives, 2008
This paper develops a model for longitudinal student achievement data designed to estimate heterogeneity in teacher effects across students of different achievement levels. The model specifies interactions between teacher effects and students' predicted scores on a test, estimating both average effects of individual teachers and interaction terms…
Descriptors: Classes (Groups of Students), Computation, Academic Achievement, Longitudinal Studies
Briggs, Derek C.; Wilson, Mark – Journal of Educational Measurement, 2007
An approach called generalizability in item response modeling (GIRM) is introduced in this article. The GIRM approach essentially incorporates the sampling model of generalizability theory (GT) into the scaling model of item response theory (IRT) by making distributional assumptions about the relevant measurement facets. By specifying a random…
Descriptors: Markov Processes, Generalizability Theory, Item Response Theory, Computation
Griffiths, Thomas L.; Kalish, Michael L. – Cognitive Science, 2007
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute…
Descriptors: Probability, Diachronic Linguistics, Statistical Inference, Language Universals
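
A toy simulation can make the iterated-learning setup concrete. The sketch below is an illustration under assumed values, not the authors' analysis: two hypothetical "languages" generate a binary utterance with different probabilities, and each learner samples a hypothesis from its posterior given the previous learner's output. With posterior sampling, the chain of hypotheses is a Markov chain whose long-run distribution matches the prior, which is the kind of consequence the article analyzes.

```python
# Toy iterated learning with Bayesian learners (a sketch, not the paper's analysis).
import numpy as np

rng = np.random.default_rng(1)

prior = np.array([0.8, 0.2])        # prior over hypotheses H0, H1 (assumed)
p_utterance = np.array([0.3, 0.7])  # P("a") under H0 and H1 (assumed)
m = 5                               # utterances passed between generations
generations = 20000

h = 0
counts = np.zeros(2)
for _ in range(generations):
    # The current learner produces m binary utterances under its hypothesis.
    data = rng.random(m) < p_utterance[h]
    k = data.sum()
    # The next learner's posterior over hypotheses (Bernoulli likelihood).
    like = p_utterance**k * (1 - p_utterance)**(m - k)
    post = prior * like
    post /= post.sum()
    # Posterior sampling: the next hypothesis is drawn from the posterior.
    h = rng.choice(2, p=post)
    counts[h] += 1

print("long-run hypothesis frequencies:", counts / generations)
print("prior:", prior)  # the two should be close, illustrating convergence to the prior
```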

Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M. – Journal of Educational and Behavioral Statistics, 1994
Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)
Descriptors: Goodness of Fit, Markov Processes, Trend Analysis
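
The covariance structure at issue can be written out directly. The sketch below builds the covariance matrix implied by a quasi-Markov simplex model, i.e., a first-order autoregressive latent process measured with error at each wave; the parameter values are illustrative, not taken from the article.

```python
# Implied covariance matrix of a quasi-Markov simplex model (illustrative values).
import numpy as np

T = 4
beta = np.array([np.nan, 0.9, 0.8, 0.7])  # autoregressive paths eta_{t-1} -> eta_t
psi = np.array([1.0, 0.3, 0.3, 0.3])      # latent residual variances (psi[0] = var(eta_1))
theta = 0.4                                # measurement-error variance for each y_t

# Latent covariances from the recursion eta_t = beta_t * eta_{t-1} + zeta_t.
cov_eta = np.zeros((T, T))
cov_eta[0, 0] = psi[0]
for t in range(1, T):
    cov_eta[t, t] = beta[t]**2 * cov_eta[t - 1, t - 1] + psi[t]
    for s in range(t):
        cov_eta[t, s] = beta[t] * cov_eta[t - 1, s]
        cov_eta[s, t] = cov_eta[t, s]

# Observed covariance: latent covariance plus uncorrelated measurement error.
cov_y = cov_eta + theta * np.eye(T)
print(np.round(cov_y, 3))
```
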
Ching, Wai-Ki; Ng, Michael K. – International Journal of Mathematical Education in Science and Technology, 2004
Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
Descriptors: Markov Processes, Probability, Mathematical Models, Computation
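
A minimal discrete HMM makes the framework concrete. The sketch below uses the forward algorithm to compute the likelihood of an observation sequence given assumed transition and emission matrices; the numbers are illustrative and do not reproduce the note's own example or its estimation method.

```python
# Minimal discrete hidden Markov model: likelihood via the forward algorithm.
import numpy as np

A = np.array([[0.7, 0.3],      # hidden-state transition matrix
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission probabilities: P(observation | hidden state)
              [0.2, 0.8]])
pi0 = np.array([0.5, 0.5])     # initial hidden-state distribution

obs = [0, 1, 1, 0, 1]          # an observed symbol sequence

# Forward recursion: alpha[i] = P(obs[0..t], state_t = i).
alpha = pi0 * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]

print("sequence likelihood:", alpha.sum())
```
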
Johnson, Matthew S.; Sinharay, Sandip – Applied Psychological Measurement, 2005
For complex educational assessments, there is an increasing use of item families, which are groups of related items. Calibration or scoring in an assessment involving item families requires models that can take into account the dependence structure inherent among the items that belong to the same item family. This article extends earlier works in…
Descriptors: National Competency Tests, Markov Processes, Bayesian Statistics
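
The basic idea of within-family dependence can be sketched without reproducing the authors' model: item parameters for "sibling" items are drawn around a family-level value, so items from the same family are more alike than items from different families. All values below are illustrative assumptions.

```python
# Sketch of the item-family idea: difficulties cluster around family-level means.
import numpy as np

rng = np.random.default_rng(2)

n_families = 5
items_per_family = 4

family_mean_difficulty = rng.normal(0.0, 1.0, size=n_families)  # family level
within_family_sd = 0.2                                          # sibling variability

# Item difficulties: family mean plus a small family-specific deviation.
item_difficulty = (
    np.repeat(family_mean_difficulty, items_per_family)
    + rng.normal(0.0, within_family_sd, size=n_families * items_per_family)
)

print(np.round(item_difficulty.reshape(n_families, items_per_family), 2))
# Calibration models for item families place a hierarchical prior of this kind
# on sibling items so that scoring accounts for the within-family dependence.
```
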
Li, Yanmei; Bolt, Daniel M.; Fu, Jianbin – Applied Psychological Measurement, 2006
When tests are made up of testlets, standard item response theory (IRT) models are often not appropriate due to the local dependence present among items within a common testlet. A testlet-based IRT model has recently been developed to model examinees' responses under such conditions (Bradlow, Wainer, & Wang, 1999). The Bradlow, Wainer, and…
Descriptors: Models, Markov Processes, Item Response Theory, Tests
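
The local dependence induced by testlets can be shown with a short simulation. The sketch below generates responses under a Bradlow-Wainer-Wang-style 2PL testlet model, in which a person-by-testlet random effect is shared by all items in the same testlet; the parameter values are illustrative assumptions.

```python
# Response simulation under a 2PL testlet model (illustrative parameters).
import numpy as np

rng = np.random.default_rng(3)

n_persons, n_items, n_testlets = 1000, 12, 3
testlet_of_item = np.repeat(np.arange(n_testlets), n_items // n_testlets)

a = rng.uniform(0.8, 1.6, size=n_items)          # discriminations
b = rng.normal(0.0, 1.0, size=n_items)           # difficulties
theta = rng.normal(0.0, 1.0, size=n_persons)     # person abilities
gamma = rng.normal(0.0, 0.5, size=(n_persons, n_testlets))  # person-by-testlet effects

# Testlet model: logit P(y=1) = a_i * (theta_j - b_i - gamma_{j, d(i)})
logit = a * (theta[:, None] - b - gamma[:, testlet_of_item])
prob = 1.0 / (1.0 + np.exp(-logit))
y = (rng.random((n_persons, n_items)) < prob).astype(int)

print("proportion correct per item:", np.round(y.mean(axis=0), 2))
```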

Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun – Applied Psychological Measurement, 2002
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
Descriptors: Estimation (Mathematics), Markov Processes, Monte Carlo Methods, Simulation
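
For context, the category probabilities of Bock's nominal response model, the model whose item parameters MML and MCMC are estimating in the study above, can be computed directly; the parameter values in this sketch are illustrative assumptions.

```python
# Category probabilities under the nominal response model (illustrative values).
import numpy as np

def nominal_response_probs(theta, a, c):
    """P(response = k | theta) = exp(a_k * theta + c_k) / sum_m exp(a_m * theta + c_m)."""
    z = np.outer(theta, a) + c           # persons x categories
    z -= z.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

theta = np.array([-1.0, 0.0, 1.0])       # person abilities
a = np.array([0.0, 0.5, 1.2, 1.8])       # category slope parameters
c = np.array([0.0, 0.3, -0.2, -1.0])     # category intercept parameters

print(np.round(nominal_response_probs(theta, a, c), 3))
```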

Seltzer, Michael; Novak, John; Choi, Kilchan; Lim, Nelson – Journal of Educational and Behavioral Statistics, 2002
Examines the ways in which level-1 outliers can impact the estimation of fixed effects and random effects in hierarchical models (HMs). Also outlines and illustrates the use of Markov Chain Monte Carlo algorithms for conducting sensitivity analyses under "t" level-1 assumptions, including algorithms for settings in which the degrees of…
Descriptors: Algorithms, Estimation (Mathematics), Markov Processes, Monte Carlo Methods
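
One reason "t" level-1 assumptions are convenient in MCMC is that a t-distributed error is a scale mixture of normals: each observation carries a latent precision weight, and outlying observations receive small weights. The sketch below illustrates that generic representation only, not the authors' algorithms; all values are assumed.

```python
# Scale-mixture view of t-distributed level-1 errors (generic illustration).
import numpy as np

rng = np.random.default_rng(4)

nu = 4.0          # degrees of freedom of the t level-1 distribution
sigma = 1.0       # level-1 scale
n = 5

# w ~ Gamma(nu/2, rate=nu/2), then e | w ~ N(0, sigma^2 / w) gives e ~ sigma * t_nu.
w = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
e = rng.normal(0.0, sigma / np.sqrt(w))

for wi, ei in zip(w, e):
    print(f"latent weight = {wi:5.2f}   error draw = {ei:6.2f}")
# In a Gibbs sampler the weights are redrawn each iteration, so observations with
# large residuals exert less influence on the fixed and random effects.
```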

Baker, Frank B. – Applied Psychological Measurement, 1998
Investigated the item-parameter recovery characteristics of a Gibbs sampling method (J. Albert, 1992) for item-response theory item-parameter estimation and compared them to those from the BILOG computer program. Item-parameter recoveries were similar for both approaches for larger data sets, but overall, BILOG performance was superior. (SLD)
Descriptors: Estimation (Mathematics), Item Response Theory, Markov Processes, Sampling
Laditka, James N.; Laditka, Sarah B.; Olatosi, Bankole; Elder, Keith T. – Journal of Rural Health, 2007
Context: Years lived with and without physical impairment are central measures of public health. Purpose: We sought to determine whether these measures differed between rural and urban residents who were impaired at the time of a baseline measurement. We examined 16 subgroups defined by rural/urban residence, gender, race, and education. Methods:…
Descriptors: Public Health, Markov Processes, Rural Areas, Older Adults
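
Years lived with and without impairment can be read off a multistate Markov model. The sketch below accumulates expected person-years in each living state from an annual transition matrix among the states unimpaired, impaired, and dead; the transition probabilities are made up for illustration and are not the article's estimates.

```python
# Expected years with and without impairment from a multistate Markov model.
import numpy as np

# States: 0 = unimpaired, 1 = impaired, 2 = dead (absorbing). Illustrative values.
P = np.array([
    [0.90, 0.07, 0.03],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0])   # start the cohort unimpaired at baseline
years_unimpaired = 0.0
years_impaired = 0.0

for _ in range(50):                 # follow the cohort for 50 annual cycles
    years_unimpaired += state[0]
    years_impaired += state[1]
    state = state @ P

print(f"expected years unimpaired: {years_unimpaired:.1f}")
print(f"expected years impaired:   {years_impaired:.1f}")
```
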
Stankiewicz, Brian J.; Legge, Gordon E.; Mansfield, J. Stephen; Schlicht, Erik J. – Journal of Experimental Psychology: Human Perception and Performance, 2006
The authors describe 3 human spatial navigation experiments that investigate how limitations of perception, memory, uncertainty, and decision strategy affect human spatial navigation performance. To better understand the effect of these variables on human navigation performance, the authors developed an ideal-navigator model for indoor navigation…
Descriptors: Spatial Ability, Visual Perception, Memory, Models
Kim, Jee-Seon; Bolt, Daniel M. – Educational Measurement: Issues and Practice, 2007
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
Descriptors: Placement, Monte Carlo Methods, Markov Processes, Measurement
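
To make the ingredients the module covers (prior specification, proposals, acceptance) concrete, here is a toy random-walk Metropolis sampler for a Rasch model. It is an illustration under simple assumed priors and proposal scales, not code from the ITEMS module.

```python
# Toy single-site Metropolis-Hastings sampler for a Rasch model.
import numpy as np

rng = np.random.default_rng(5)

# Simulate Rasch data: P(y=1) = logistic(theta_p - b_i).
n_persons, n_items = 200, 10
theta_true = rng.normal(0, 1, n_persons)
b_true = rng.normal(0, 1, n_items)
prob = 1 / (1 + np.exp(-(theta_true[:, None] - b_true)))
y = (rng.random((n_persons, n_items)) < prob).astype(int)

def loglik(theta, b):
    """Rasch log-likelihood contributions, persons x items."""
    eta = theta[:, None] - b
    return y * eta - np.log1p(np.exp(eta))

theta = np.zeros(n_persons)
b = np.zeros(n_items)
b_draws = []

for it in range(3000):
    # Random-walk Metropolis update for each ability (standard-normal prior).
    theta_prop = theta + rng.normal(0, 0.5, n_persons)
    log_r = (loglik(theta_prop, b).sum(axis=1) - loglik(theta, b).sum(axis=1)
             - 0.5 * theta_prop**2 + 0.5 * theta**2)
    accept = np.log(rng.random(n_persons)) < log_r
    theta = np.where(accept, theta_prop, theta)

    # Random-walk Metropolis update for each difficulty (standard-normal prior).
    b_prop = b + rng.normal(0, 0.2, n_items)
    log_r = (loglik(theta, b_prop).sum(axis=0) - loglik(theta, b).sum(axis=0)
             - 0.5 * b_prop**2 + 0.5 * b**2)
    accept = np.log(rng.random(n_items)) < log_r
    b = np.where(accept, b_prop, b)

    if it >= 1000:                  # keep post-burn-in draws
        b_draws.append(b.copy())

print("posterior mean difficulties:", np.round(np.mean(b_draws, axis=0), 2))
print("true difficulties:          ", np.round(b_true, 2))
```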