Pezzulo, Giovanni; Cartoni, Emilio; Rigoli, Francesco; Pio-Lopez, Léo; Friston, Karl – Learning & Memory, 2016
Balancing habitual and deliberate forms of choice entails a comparison of their respective merits--the former being faster but inflexible, and the latter slower but more versatile. Here, we show that arbitration between these two forms of control can be derived from first principles within an Active Inference scheme. We illustrate our arguments…
Descriptors: Interference (Learning), Epistemology, Physiology, Neurology
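To make the arbitration idea in the entry above concrete, here is a minimal, hypothetical Python sketch. It is not the authors' Active Inference derivation; it only illustrates the general notion that control can be shared between a fast cached (habitual) value and a slower model-based (deliberate) estimate according to their relative precision. All function names, numbers, and the precision-weighting rule are illustrative assumptions.

```python
"""Hypothetical sketch: precision-weighted arbitration between a habitual
(cached) value estimate and a deliberate (model-based) estimate.
Not the paper's Active Inference scheme; purely illustrative."""
import random

def model_rollout(action, n=100):
    # Stand-in generative model: noisy simulated returns for each action.
    mean = {"left": 1.0, "right": 0.4}[action]
    samples = [random.gauss(mean, 0.5) for _ in range(n)]
    est = sum(samples) / n
    var = sum((s - est) ** 2 for s in samples) / (n - 1)
    return est, var

def arbitrated_value(action, cached_q, cached_var):
    # Habitual estimate: fast cached lookup with its learned uncertainty.
    q_habit, var_habit = cached_q[action], cached_var[action]
    # Deliberate estimate: slower model-based rollout with sample variance.
    q_plan, var_plan = model_rollout(action)
    # Precision weighting: the less uncertain controller gets more say.
    w = (1.0 / var_habit) / (1.0 / var_habit + 1.0 / var_plan)
    return w * q_habit + (1.0 - w) * q_plan

cached_q = {"left": 0.9, "right": 0.5}      # well-trained habit
cached_var = {"left": 0.05, "right": 0.05}  # low uncertainty: habit dominates
choice = max(cached_q, key=lambda a: arbitrated_value(a, cached_q, cached_var))
print("chosen action:", choice)
```

With low habitual uncertainty the cached values dominate (fast, inflexible control); if the cached variance were large, the model-based rollout would carry more weight (slow, flexible control), which is the trade-off the abstract describes.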
Gerber, Bertram; Yarali, Ayse; Diegelmann, Sören; Wotjak, Carsten T.; Pauli, Paul; Fendt, Marcus – Learning & Memory, 2014
Memories relating to a painful, negative event are adaptive and can be stored for a lifetime to support preemptive avoidance, escape, or attack behavior. However, under unfavorable circumstances such memories can become overwhelmingly powerful. They may trigger excessively negative psychological states and uncontrollable avoidance of locations,…
Descriptors: Pain, Learning Processes, Memory, Emotional Disturbances
Huh, Namjung; Jo, Suhyun; Kim, Hoseok; Sul, Jung Hoon; Jung, Min Whan – Learning & Memory, 2009
Reinforcement learning theories postulate that actions are chosen to maximize a long-term sum of positive outcomes based on value functions, which are subjective estimates of future rewards. In simple reinforcement learning algorithms, value functions are updated only by trial-and-error, whereas they are updated according to the decision-maker's…
Descriptors: Learning Theories, Animals, Rewards, Probability
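The trial-and-error value update mentioned in the entry above is, in textbook form, a delta rule applied to a two-choice task. The sketch below is a generic illustration, not the specific models fitted by Huh et al.; the learning rate, softmax temperature, and reward probabilities are assumptions chosen for demonstration.

```python
"""Generic model-free value update for a two-choice task.
Illustrative only; parameters and reward probabilities are assumed."""
import math
import random

def softmax_choice(values, beta=3.0):
    # Probability of choosing each action from its current value estimate.
    weights = [math.exp(beta * v) for v in values]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for action, w in enumerate(weights):
        acc += w
        if r <= acc:
            return action
    return len(weights) - 1

values = [0.0, 0.0]          # subjective value estimates for two targets
reward_prob = [0.72, 0.12]   # assumed (hidden) reward probabilities
alpha = 0.1                  # learning rate

for trial in range(500):
    a = softmax_choice(values)
    reward = 1.0 if random.random() < reward_prob[a] else 0.0
    # Trial-and-error update: move the chosen value toward the observed outcome.
    values[a] += alpha * (reward - values[a])

print("learned values:", [round(v, 2) for v in values])
```

Only the chosen action's value is updated from experienced outcomes, which is the defining feature of the simple trial-and-error (model-free) algorithms the abstract contrasts with model-based updating.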