| Publication Date | Records |
| --- | --- |
| In 2025 | 0 |
| Since 2024 | 0 |
| Since 2021 (last 5 years) | 0 |
| Since 2016 (last 10 years) | 1 |
| Since 2006 (last 20 years) | 3 |
| Descriptor | Records |
| --- | --- |
| Animals | 3 |
| Decision Making | 3 |
| Models | 3 |
| Task Analysis | 3 |
| Behavior Patterns | 2 |
| Rewards | 2 |
| Bayesian Statistics | 1 |
| Brain Hemisphere Functions | 1 |
| Cognitive Processes | 1 |
| Correlation | 1 |
| Epistemology | 1 |
| Source | Records |
| --- | --- |
| Learning & Memory | 3 |
| Author | Records |
| --- | --- |
| Basile, Benjamin M. | 1 |
| Cartoni, Emilio | 1 |
| Friston, Karl | 1 |
| Hampton, Robert R. | 1 |
| Huh, Namjung | 1 |
| Jo, Suhyun | 1 |
| Jung, Min Whan | 1 |
| Kim, Hoseok | 1 |
| Pezzulo, Giovanni | 1 |
| Rigoli, Francesco | 1 |
| Sul, Jung Hoon | 1 |
| Publication Type | Records |
| --- | --- |
| Journal Articles | 3 |
| Reports - Research | 3 |
Pezzulo, Giovanni; Cartoni, Emilio; Rigoli, Francesco; Pio-Lopez, Léo; Friston, Karl – Learning & Memory, 2016
Balancing habitual and deliberate forms of choice entails a comparison of their respective merits: the former being faster but inflexible, and the latter slower but more versatile. Here, we show that arbitration between these two forms of control can be derived from first principles within an Active Inference scheme. We illustrate our arguments…
Descriptors: Interference (Learning), Epistemology, Physiology, Neurology
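As a rough illustration of the arbitration described in this abstract (a minimal sketch, not the authors' actual Active Inference derivation; the function name, the `precision` parameter, and the scalar free-energy inputs are all assumptions made for illustration), control can be shared between the two systems via a softmax over their expected free energies, so that whichever controller is expected to perform better dominates action selection:

```python
import numpy as np

def arbitrate(G_habit, G_deliberate, precision=4.0):
    """Mixture weights over a fast habitual controller and a slower
    deliberative one, from their expected free energies (lower is better).

    G_habit, G_deliberate, and `precision` are illustrative assumptions,
    not quantities taken from the article.
    """
    G = np.array([G_habit, G_deliberate])
    w = np.exp(-precision * G)   # softmax over negative expected free energy
    return w / w.sum()

# When the habitual policy is reliable in the current context (low expected
# free energy), it receives most of the control weight, and vice versa.
print(arbitrate(G_habit=0.2, G_deliberate=0.8))  # habit dominates
print(arbitrate(G_habit=1.0, G_deliberate=0.3))  # deliberation dominates
```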
Basile, Benjamin M.; Hampton, Robert R. – Learning & Memory, 2013
One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses…
Descriptors: Animals, Recognition (Psychology), Cognitive Processes, Familiarity
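To make the dual-process idea concrete, here is a hedged sketch of the standard Yonelinas-style dual-process ROC model (not the specific analyses in this article; `R`, `d_prime`, and the criterion grid are illustrative values): recollection contributes an all-or-none threshold component with probability R, while familiarity contributes a continuous Gaussian strength signal.

```python
import numpy as np
from scipy.stats import norm

def dual_process_roc(R=0.3, d_prime=1.0, criteria=np.linspace(-2.0, 2.0, 9)):
    """Hit and false-alarm rates across response criteria under a
    dual-process model: hits = R + (1 - R) * P(familiarity > criterion).
    """
    fa = norm.sf(criteria)                            # new items: Gaussian only
    hits = R + (1 - R) * norm.sf(criteria - d_prime)  # old items: threshold + Gaussian
    return fa, hits

fa, hits = dual_process_roc()
for f, h in zip(fa, hits):
    print(f"FA={f:.2f}  Hit={h:.2f}")
```

The recollection component puts a nonzero y-intercept (roughly R) on the ROC curve; that asymmetry is the signature ROC analyses look for when separating the two processes.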
Huh, Namjung; Jo, Suhyun; Kim, Hoseok; Sul, Jung Hoon; Jung, Min Whan – Learning & Memory, 2009
Reinforcement learning theories postulate that actions are chosen to maximize a long-term sum of positive outcomes based on value functions, which are subjective estimates of future rewards. In simple reinforcement learning algorithms, value functions are updated only by trial-and-error, whereas they are updated according to the decision-maker's…
Descriptors: Learning Theories, Animals, Rewards, Probability
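The trial-and-error value update this abstract refers to is, in its simplest form, a delta rule: the value estimate moves toward each observed reward by a fraction (the learning rate) of the reward prediction error. A minimal sketch, assuming a scalar value and an illustrative learning rate of 0.1, not the authors' full model:

```python
def update_value(V, reward, alpha=0.1):
    """One trial-and-error update: V <- V + alpha * (reward - V).

    `alpha` and the scalar-value setting are assumptions for illustration.
    """
    return V + alpha * (reward - V)

# Example: the value estimate climbs toward the action's average payoff
# purely from experienced outcomes, with no model of the task.
V = 0.0
for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:
    V = update_value(V, outcome)
print(round(V, 3))
```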