Showing 1 to 15 of 59 results
Peer reviewed
Direct link
Xiangyi Liao; Daniel M Bolt – Educational Measurement: Issues and Practice, 2024
Traditional approaches to the modeling of multiple-choice item response data (e.g., 3PL, 4PL models) emphasize slips and guesses as random events. In this paper, an item response model is presented that characterizes both disjunctively interacting guessing and conjunctively interacting slipping processes as proficiency-related phenomena. We show…
Descriptors: Item Response Theory, Test Items, Error Correction, Guessing (Tests)
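The kind of model the abstract gestures at can be sketched by letting both asymptotes of a logistic item response function vary with proficiency. This is an illustrative parameterization only, not Liao and Bolt's actual model; all parameter names are assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_correct(theta, a, b, a_g, b_g, a_s, b_s):
    """Item response probability with proficiency-related guessing and
    slipping (an illustrative sketch, not the authors' parameterization).

    The lower asymptote g(theta) rises with ability, reflecting that
    partial knowledge helps eliminate distractors; the slip probability
    s(theta) falls with ability, so the upper asymptote 1 - s(theta)
    rises as well.
    """
    g = sigmoid(a_g * (theta - b_g))   # ability-dependent guessing floor
    s = sigmoid(-a_s * (theta - b_s))  # ability-dependent slip rate
    core = sigmoid(a * (theta - b))    # 2PL response process
    return g + (1.0 - g - s) * core
```

The curve interpolates between g(theta) and 1 - s(theta), so the familiar constant asymptotes of the 3PL/4PL become functions of proficiency.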
Peer reviewed
Direct link
Rios, Joseph A. – Applied Measurement in Education, 2022
Testing programs are confronted with the decision of whether to report individual scores for examinees who have engaged in rapid guessing (RG). As noted by the "Standards for Educational and Psychological Testing," this decision should be based on a documented criterion that determines score exclusion. To this end, a number of heuristic…
Descriptors: Testing, Guessing (Tests), Academic Ability, Scores
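A documented exclusion criterion of the kind discussed here is often operationalized with a response-time threshold. The cutoffs below are illustrative placeholders, not values from the article:

```python
def flag_for_exclusion(response_times, rt_threshold=3.0, max_rg_rate=0.10):
    """Flag an examinee's score for exclusion based on rapid guessing (RG).

    A response counts as a rapid guess when its response time (seconds)
    falls below rt_threshold; the score is flagged when the proportion of
    RG responses exceeds max_rg_rate. Both cutoffs are illustrative
    assumptions, not the article's recommended values.
    """
    rg_count = sum(1 for t in response_times if t < rt_threshold)
    rg_rate = rg_count / len(response_times)
    return rg_rate > max_rg_rate, rg_rate
```

For example, an examinee with two sub-threshold responses out of five would exceed a 10% tolerance and be flagged.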
Peer reviewed
Direct link
Abu-Ghazalah, Rashid M.; Dubins, David N.; Poon, Gregory M. K. – Applied Measurement in Education, 2023
Multiple choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly…
Descriptors: Guessing (Tests), Multiple Choice Tests, Probability, Models
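The knowledge/guess/blunder decomposition the abstract describes can be sketched as a three-way mixture for each item. This is a sketch of the model class, not the authors' exact formulation:

```python
def mc_outcome_probs(p_know, p_blunder, n_options):
    """Outcome probabilities for one multiple-choice item under a
    knowledge/guess/blunder decomposition (illustrative, not the
    authors' exact model).

    With probability p_know the examinee knows the answer; with
    probability p_blunder they confidently commit a mistake; the
    remaining mass is a uniform random guess over n_options.
    """
    p_guess = 1.0 - p_know - p_blunder
    p_correct = p_know + p_guess / n_options
    p_incorrect = p_blunder + p_guess * (n_options - 1) / n_options
    return p_correct, p_incorrect
```

Because correct responses mix knowledge with lucky guesses, resolving p_know from observed scores requires a probabilistic model like this rather than the raw proportion correct.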
Peer reviewed
Direct link
Lúcio, Patrícia Silva; Vandekerckhove, Joachim; Polanczyk, Guilherme V.; Cogo-Moreira, Hugo – Journal of Psychoeducational Assessment, 2021
The present study compares the fit of two- and three-parameter logistic (2PL and 3PL) models of item response theory in the performance of preschool children on the Raven's Colored Progressive Matrices. Raven's test is widely used for evaluating nonverbal intelligence as an indicator of factor g. Studies comparing models with real data are scarce on the…
Descriptors: Guessing (Tests), Item Response Theory, Test Validity, Preschool Children
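The two models being compared differ only in a lower asymptote. A minimal sketch of both item response functions:

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic (2PL) item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL): a 2PL with lower asymptote c,
    interpreted as the probability of success by guessing."""
    return c + (1.0 - c) * p_2pl(theta, a, b)

# At very low ability the 2PL predicts near-zero success, while the
# 3PL floor keeps the predicted probability at or above c.
```

Model comparison on real data asks whether that floor is needed, i.e., whether low-ability children's success rates stay above what the 2PL can accommodate.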
Peer reviewed
PDF on ERIC Download full text
Chu, Wei; Pavlik, Philip I., Jr. – International Educational Data Mining Society, 2023
In adaptive learning systems, various models are employed to obtain the optimal learning schedule and review for a specific learner. Models of learning are used to estimate the learner's current recall probability by incorporating features or predictors proposed by psychological theory or empirically relevant to learners' performance. Logistic…
Descriptors: Reaction Time, Accuracy, Models, Predictor Variables
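The logistic models of learning mentioned here predict recall from weighted performance features. A minimal sketch with illustrative features and weights (not the model fitted in the paper):

```python
import math

def recall_prob(weights, features):
    """Logistic prediction of a learner's current recall probability from
    performance features (e.g., an intercept, counts of prior successes
    and failures, recency, response time). Features and weights here are
    illustrative assumptions.
    """
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Example features: intercept, prior successes, prior failures.
p = recall_prob([-0.5, 0.4, -0.3], [1.0, 3.0, 2.0])
```

An adaptive scheduler can then pick the item whose predicted recall probability is closest to a target (e.g., 0.9) for review.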
Peer reviewed
Direct link
Starns, Jeffrey J.; Ma, Qiuli – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2018
The two-high-threshold (2HT) model of recognition memory assumes that people make memory errors because they fail to retrieve information from memory and make a guess, whereas the continuous unequal-variance (UV) model and the low-threshold (LT) model assume that people make memory errors because they retrieve misleading information from memory.…
Descriptors: Guessing (Tests), Recognition (Psychology), Memory, Tests
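The 2HT model's guessing mechanism can be written down directly: detection either succeeds or the response falls through to a guess.

```python
def two_high_threshold(d_old, d_new, g):
    """Predicted hit and false-alarm rates under the two-high-threshold
    (2HT) model of recognition memory.

    Old items are detected with probability d_old and new items are
    detected as new with probability d_new; any undetected item draws an
    "old" guess with probability g, so errors arise from guessing rather
    than from misleading memory evidence.
    """
    hit_rate = d_old + (1.0 - d_old) * g
    false_alarm_rate = (1.0 - d_new) * g
    return hit_rate, false_alarm_rate
```

The UV and LT models replace the guessing stage with graded, sometimes misleading evidence, which is the contrast the study tests.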
Falk, Carl F.; Cai, Li – Grantee Submission, 2016
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Descriptors: Item Response Theory, Guessing (Tests), Mathematics Tests, Simulation
Peer reviewed
Direct link
Westera, Wim – Education and Information Technologies, 2016
This paper is about performance assessment in serious games. We conceive serious gaming as a process of player-led decision making. Starting from combinatorics and item response theory, we provide an analytical model that makes explicit to what extent observed player performances (decisions) are blurred by chance processes (guessing behaviors). We…
Descriptors: Performance Based Assessment, Games, Item Response Theory, Scores
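How chance blurs observed decisions can be illustrated with a one-line Bayes calculation; this is a minimal sketch of the problem, not the paper's full analytical model:

```python
def p_lucky_guess(p_know, n_options):
    """Posterior probability that an observed correct decision was a
    lucky guess rather than genuine knowledge.

    Assumes a player either knows the answer (probability p_know) or
    guesses uniformly among n_options; Bayes' rule conditions on the
    decision being correct. An illustrative sketch only.
    """
    p_guess_correct = (1.0 - p_know) / n_options
    return p_guess_correct / (p_know + p_guess_correct)
```

With few response options, a substantial fraction of observed correct decisions can be guesses, which is why raw success counts overstate player competence.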
Peer reviewed
PDF on ERIC Download full text
Foley, Brett P. – Practical Assessment, Research & Evaluation, 2016
There is always a chance that examinees will answer multiple choice (MC) items correctly by guessing. Design choices in some modern exams have created situations where guessing at random through the full exam--rather than only for a subset of items where the examinee does not know the answer--can be an effective strategy to pass the exam. This…
Descriptors: Guessing (Tests), Multiple Choice Tests, Case Studies, Test Construction
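The effectiveness of guessing at random through a full exam is a binomial tail probability, assuming independent items with a single correct option each:

```python
from math import comb

def p_pass_by_guessing(n_items, n_options, cut_score):
    """Probability of reaching the cut score by guessing every item at
    random: a binomial tail with per-item success probability
    1/n_options. Assumes independent items, one correct option each.
    """
    p = 1.0 / n_options
    return sum(comb(n_items, k) * p**k * (1.0 - p)**(n_items - k)
               for k in range(cut_score, n_items + 1))

# Illustrative design: 20 true/false items with a cut score of 12.
prob = p_pass_by_guessing(20, 2, 12)
```

Short tests with few options and low cut scores leave this probability uncomfortably high, which is the design problem the article examines.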
Peer reviewed
Direct link
Sewell, David K.; Lilburn, Simon D.; Smith, Philip L. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2016
A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can…
Descriptors: Short Term Memory, Visual Perception, Attention, Reaction Time
Peer reviewed
Direct link
Falk, Carl F.; Cai, Li – Journal of Educational Measurement, 2016
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Descriptors: Item Response Theory, Guessing (Tests), Mathematics Tests, Simulation
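The model class described here replaces the 3PL's linear term with a monotone polynomial. A sketch of the idea, not Falk and Cai's exact parameterization:

```python
import math

def logistic_monopoly(theta, c, b0, b1, b3):
    """Logistic function of a monotonic polynomial with lower asymptote c.

    m(theta) = b0 + b1*theta + b3*theta**3 is strictly increasing when
    b1 > 0 and b3 >= 0, so the response curve stays monotone while
    bending more flexibly than the 3PL. An illustrative sketch of the
    model class.
    """
    m = b0 + b1 * theta + b3 * theta**3
    return c + (1.0 - c) / (1.0 + math.exp(-m))

# With b3 = 0 this reduces to a 3PL with slope b1 and intercept b0.
```

The extra polynomial coefficients buy flexibility in the curve's shape without sacrificing the monotonicity an item response function requires.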
Peer reviewed
Direct link
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic inputs, noisy "and" gate) model has been widely used in cognitive diagnosis tests and in the process of test development. Slip and guess parameters are included in the DINA model's item response function. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
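The DINA item response function named in the abstract is compact enough to state directly:

```python
def dina_prob(alpha, q_row, slip, guess):
    """DINA item response function: P(correct | attribute pattern).

    eta is the conjunctive "and" gate: 1 only when the examinee has
    mastered every attribute the item's Q-matrix row requires. Masters
    answer correctly unless they slip; non-masters can only guess.
    """
    eta = all(a >= r for a, r in zip(alpha, q_row))
    return 1.0 - slip if eta else guess
```

The random-effect extension discussed in the article lets slip and guess vary across examinees instead of being fixed item constants.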
Peer reviewed
Direct link
Culpepper, Steven Andrew – Journal of Educational and Behavioral Statistics, 2015
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
Descriptors: Bayesian Statistics, Models, Sampling, Computation
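In a Gibbs sampler of this kind, the guessing and slipping parameters have conjugate full conditionals given the current latent classifications. A sketch of one such draw, with illustrative prior hyperparameters (not the article's):

```python
import random

def sample_guess_param(n_correct_nonmasters, n_nonmasters, a0=1.0, b0=1.0):
    """One Gibbs draw for an item's guessing parameter g_j in a Bayesian
    DINA model.

    Conditional on the latent classifications, responses of non-masters
    (eta = 0) are Bernoulli(g_j); with a Beta(a0, b0) prior the full
    conditional is Beta(a0 + correct, b0 + incorrect). The slipping
    parameter admits the same conjugate update over masters' errors.
    """
    return random.betavariate(a0 + n_correct_nonmasters,
                              b0 + n_nonmasters - n_correct_nonmasters)
```

Cycling such draws with updates for attribute patterns and class probabilities simulates from the joint posterior.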
Peer reviewed
Direct link
Bayen, Ute J.; Kuhlmann, Beatrice G. – Journal of Memory and Language, 2011
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source-guessing probabilities to the perceived contingency…
Descriptors: Schemata (Cognition), Guessing (Tests), Probability, Prior Learning
Peer reviewed
PDF on ERIC Download full text
Obinne, A.D.E. – World Journal of Education, 2012
The three-parameter model of item response theory gives the probability of an examinee responding correctly to an item without being sure of all the facts; this is known as guessing. Guessing can be a strategy employed by examinees to earn more marks. The way an item is constructed can expose the item to guessing by the examinee. A…
Descriptors: Item Response Theory, Test Items, Guessing (Tests), Probability
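Summed across a test, the 3PL's guessing parameters set the score floor an examinee can reach by chance alone, which is one way to quantify how much item construction exposes a test to guessing:

```python
def expected_score_floor(c_params):
    """Expected number-correct score attributable to guessing alone: the
    sum of the items' 3PL lower asymptotes, i.e., the expected score of
    an examinee with no relevant proficiency. The item parameters are
    illustrative, not estimates from the article.
    """
    return sum(c_params)

# A 40-item test whose items all have c = 0.25 has an expected floor
# score of 10 from guessing alone.
```

Well-constructed distractors push the c parameters (and thus the floor) down.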