ERIC Number: ED667703
Record Type: Non-Journal
Publication Date: 2021
Pages: 266
Abstractor: As Provided
ISBN: 979-8-5346-9095-8
ISSN: N/A
EISSN: N/A
Available Date: N/A
Information Constraints in Decision-Making
Stefan Franz Bucher
ProQuest LLC, Ph.D. Dissertation, New York University
While the supply of information pertaining to many decisions is seemingly limitless, the capacity of the human mind to process it is not. In this dissertation, I study some of the implications of information processing constraints for human decision-making. Its three chapters span different levels, investigating the consequences of information processing constraints for markets, for choice behavior, and for the neural representations implicated in choice.

At the market level, Chapter 1 examines the consequences of information constraints in matching markets. The Deferred Acceptance Algorithm (DAA) is used in many matching markets, with school choice being a particularly prominent example. Its widespread use is partly justified by its attractive theoretical properties, which rest on the empirically questionable assumption of perfect information. I study the performance of the DAA when it is costly for students to resolve their preferences, introducing a matching model in which schools agree on their rankings of students, each student's prior is exchangeable across schools, and learning costs are linear in Shannon mutual information. I characterize the unique symmetric equilibrium outcome analytically for any number of schools and students, any exchangeable priors, and heterogeneous marginal costs of learning. I demonstrate how each student's rank, learning costs, and prior beliefs interact to determine students' learning strategies, choices, mistakes, and the resulting gross and net welfare. I find that the DAA may exacerbate inequity, since lower-ranked students generally have weaker incentives to acquire information. I show how policies that lower the costs of learning may reduce such inequities, and point to their limitations.

At the level of behavior, Chapter 2 investigates experimentally the role of prior beliefs when subjects dynamically acquire information. Drift-diffusion models have been widely used as descriptions of the choice process.
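The student-proposing deferred acceptance procedure at the heart of Chapter 1 can be sketched in a few lines, specialized to the model's assumption that all schools share a single ranking of students. The function name and data layout below are illustrative, not the author's implementation.

```python
# Minimal sketch of student-proposing deferred acceptance (Gale-Shapley),
# specialized to the setting where every school uses one common ranking
# of students. Illustrative only; not the dissertation's code.

def deferred_acceptance(student_prefs, school_rank, capacity=1):
    """student_prefs: {student: [schools, most preferred first]}.
    school_rank: common ranking of students (earlier = better).
    Returns a matching {school: [held students]}."""
    rank = {s: i for i, s in enumerate(school_rank)}
    next_choice = {s: 0 for s in student_prefs}   # index of next school to try
    unmatched = list(student_prefs)
    held = {}                                     # school -> tentatively held students
    while unmatched:
        student = unmatched.pop()
        prefs = student_prefs[student]
        if next_choice[student] >= len(prefs):
            continue                              # student has exhausted their list
        school = prefs[next_choice[student]]
        next_choice[student] += 1
        held.setdefault(school, []).append(student)
        held[school].sort(key=lambda s: rank[s])  # keep best-ranked students first
        while len(held[school]) > capacity:
            unmatched.append(held[school].pop())  # reject the worst-ranked held student
    return held
```

With a common ranking, the outcome is assortative: the top-ranked student gets their first choice, the next student the best remaining seat, and so on, which is what makes lower-ranked students' weaker learning incentives consequential.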
The abundant behavioral evidence they capture, however, consists mainly of choice error rates and response times, which provide only limited identification of the underlying choice process. To examine when and how a prior modulates the choice process during evidence accumulation, I designed a psychophysical experiment that can reveal the evolving decision variable. Using a previously studied static perceptual choice task, I find that the log-odds of choice evolve as a precisely affine function of log-time. The dependence of choices on the prior gradually decreases towards zero as time in trial elapses, as predicted by a Bayesian model with a biased starting point. However, I find that, given sufficient time, conditional choice probabilities do not depend on the prior. This suggests that rather than being modulated by the prior like a posterior belief, the evolving decision variable in human choosers captures only the likelihood of the accumulated evidence, which is combined with the prior only once evidence accumulation has stopped. This interpretation, which implies that subjects accumulate the same amount of information regardless of their prior, is also supported by the observation that subjects' decision confidence does not take the prior into account even when their choices do. The prior thus appears to be maintained by a cognitive process separate from evidence accumulation, one that subjects draw on only when evidence is scarce.

At the neural level, Chapter 3 establishes the conditions under which an important computation in the brain can be viewed as informationally efficient. Divisive normalization is a canonical computation in the brain, observed across sensory domains as well as in choice, that is often considered an implementation of the "efficient coding principle." Formally relating this neural computation to stimulus distributions remains an open issue.
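The drift-diffusion benchmark against which Chapter 2's data are compared, with the prior encoded as a biased starting point, can be sketched as follows. All parameter values and names are illustrative, not fitted to the experiment.

```python
import math
import random

# Toy drift-diffusion trial with a biased starting point: a prior toward the
# upper bound shifts where accumulation begins, so it sways fast choices more
# than slow ones. Illustrative parameters, not the dissertation's model fit.

def simulate_ddm(drift, start_bias, bound=1.0, noise=1.0, dt=0.01, seed=None):
    """Simulate one trial of evidence accumulation to a bound.
    start_bias in (-bound, bound) encodes the prior toward the upper bound.
    Returns (choice, response_time), with choice 1 at the upper bound."""
    rng = random.Random(seed)
    x, t = start_bias, 0.0
    while abs(x) < bound:
        # Euler step: deterministic drift plus Gaussian diffusion noise.
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= bound else 0), t
```

Averaging many such trials by response time shows the starting-point bias washing out as the accumulated evidence grows, which is the signature the experiment's time-resolved log-odds analysis is designed to detect.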
I provide a theoretical result that makes analytically precise the conditions under which divisive normalization is efficient: I show that, in a low-noise environment, encoding an n-dimensional stimulus with divisive normalization is efficient if and only if its prevalence in the environment is described by a multivariate Pareto distribution. I extend this multivariate analogue of histogram equalization to allow for a general metabolic cost of the representation, and show how different assumptions on costs are associated with different shapes of the distributions that divisive normalization encodes efficiently. My result suggests that divisive normalization may have evolved to efficiently represent stimuli with Pareto distributions, consistent with empirical observations on naturalistic stimulus distributions, such as the conditional variance dependence of natural images. My theoretical result also yields empirically testable predictions across sensory domains on how the divisive normalization parameters should be tuned to features of the input distribution, and on how these concepts can be incorporated into theories of decision-making.

[The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 1-800-521-0600. Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
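A standard parameterization of the divisive normalization computation studied in Chapter 3 divides each input by a constant plus a weighted sum of all inputs. The sketch below uses that textbook form; the value of sigma and the uniform weights are illustrative choices, not quantities derived in the dissertation.

```python
# Textbook divisive normalization: r_i = x_i / (sigma + sum_j w_j * x_j).
# sigma and the uniform weights are illustrative, not the dissertation's values.

def divisive_normalization(x, sigma=1.0, weights=None):
    """x: list of nonnegative stimulus intensities.
    Returns normalized responses r_i = x_i / (sigma + sum_j w_j * x_j)."""
    w = [1.0] * len(x) if weights is None else weights
    denom = sigma + sum(wi * xi for wi, xi in zip(w, x))
    return [xi / denom for xi in x]
```

Because every response shares the pooled denominator, responses saturate as total input grows, which is the saturation that the multivariate Pareto result relates to the shape of naturalistic stimulus distributions.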
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site: http://www.proquest.com/en-US/products/dissertations/individuals.shtml
Publication Type: Dissertations/Theses - Doctoral Dissertations
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A