Learning, Evolution, and Bayesian Estimation in Games and Dynamic Choice Models
dc.contributor.advisor | Kolpin, Van | en_US |
dc.contributor.author | Monte Calvo, Alexander | en_US |
dc.date.accessioned | 2014-09-29T17:43:12Z |
dc.date.available | 2014-09-29T17:43:12Z |
dc.date.issued | 2014-09-29 |
dc.description.abstract | This dissertation explores the modeling and estimation of learning in strategic and individual choice settings. While learning has been studied extensively in economics, I introduce the concept into standard models in unorthodox ways, and in each case, changing the perspective on what learning is substantially alters the standard model. Estimation proceeds using advanced Bayesian techniques, which perform very well on simulated data. The first chapter proposes a framework called Experienced-Based Ability (EBA) in which players increase the future payoffs of a particular strategy by using that strategy today. This framework is then introduced into a model of differentiated duopoly in which firms can choose between price and quantity contracts, and I explore how the resulting equilibrium is affected by changes in model parameters. The second chapter extends the EBA model into an evolutionary setting. This new model offers a simple and intuitive way to explain complicated dynamics theoretically. Moreover, this chapter demonstrates how to estimate posterior distributions of the model's parameters using a particle filter paired with a Metropolis-Hastings algorithm, a technique that can also be used to estimate standard evolutionary models. This approach allows researchers to recover estimates of unobserved fitness and skill over time while observing only population share data. The third chapter investigates individual learning in a dynamic discrete choice setting. It relaxes the assumption that individuals base decisions on an optimal policy and investigates the importance of policy learning. Q-learning is proposed as a model of individual choice when optimal policies are unknown, and I demonstrate how it can be used in the estimation of dynamic discrete choice (DDC) models. Using Bayesian Markov chain Monte Carlo techniques on simulated data, I show that the Q-learning model performs well at recovering true parameter values and thus functions as an alternative structural DDC model for researchers who want to move away from the rationality assumption. In addition, the simulated data are used to illustrate possible issues with standard structural estimation when the rationality assumption is incorrect. Lastly, using marginal likelihood analysis, I demonstrate that the Q-learning model can be used to test for the significance of learning effects when that is a concern. [Illustrative sketches of these three techniques follow the metadata record below.] | en_US |
dc.identifier.uri | https://hdl.handle.net/1794/18341 |
dc.language.iso | en_US | en_US |
dc.publisher | University of Oregon | en_US |
dc.rights | All Rights Reserved. | en_US |
dc.subject | Bayesian Estimation | en_US |
dc.subject | Evolution | en_US |
dc.subject | Learning | en_US |
dc.subject | Q-Learning | en_US |
dc.title | Learning, Evolution, and Bayesian Estimation in Games and Dynamic Choice Models | en_US |
dc.type | Electronic Thesis or Dissertation | en_US |
thesis.degree.discipline | Department of Economics | en_US |
thesis.degree.grantor | University of Oregon | en_US |
thesis.degree.level | doctoral | en_US |
thesis.degree.name | Ph.D. | en_US |
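To make the EBA idea concrete, here is one hypothetical formalization; this is a minimal sketch, not the dissertation's actual specification. The experience stock x_t(s), its persistence delta, and the premium lambda are illustrative symbols introduced here, not notation from the text.

```latex
% Hypothetical EBA payoff updating: using strategy s in period t adds to an
% experience stock x_t(s), which raises that strategy's future payoff.
\[
  x_{t+1}(s) = \delta\, x_t(s) + \mathbf{1}\{s_t = s\}, \qquad
  \pi_{t+1}(s) = \pi(s) + \lambda\, x_{t+1}(s),
\]
% where \pi(s) is the baseline payoff, \delta \in [0,1) is the persistence of
% experience, and \lambda > 0 scales the experience premium.
```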
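The particle-filter and Metropolis-Hastings estimation described in the second chapter can be sketched generically as follows. This is a particle marginal Metropolis-Hastings routine under assumptions of my own (a random-walk latent fitness state, Gaussian observation noise around softmax-implied population shares, and flat priors on the log parameters); the dissertation's model and priors may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_loglik(y, sigma_f, sigma_y, n_particles=500):
    """Bootstrap particle filter: log-likelihood estimate for observed
    population shares y (T x K strategies) given state noise sigma_f and
    observation noise sigma_y."""
    T, K = y.shape
    f = rng.normal(0.0, 1.0, size=(n_particles, K))  # latent fitness particles
    loglik = 0.0
    for t in range(T):
        f = f + rng.normal(0.0, sigma_f, size=f.shape)   # propagate states
        z = f - f.max(axis=1, keepdims=True)             # numerically stable softmax
        shares = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        # Gaussian observation density around the implied shares
        logw = -0.5 * np.sum((y[t] - shares) ** 2, axis=1) / sigma_y**2 \
               - K * np.log(sigma_y)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                   # accumulate log-likelihood
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        f = f[idx]                                       # resample particles
    return loglik

def pmmh(y, n_iter=2000, step=0.1):
    """Random-walk Metropolis-Hastings over (log sigma_f, log sigma_y),
    using the particle filter's noisy likelihood estimate."""
    theta = np.array([-1.0, -2.0])
    ll = particle_loglik(y, *np.exp(theta))
    draws = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step, size=2)
        ll_prop = particle_loglik(y, *np.exp(prop))
        if np.log(rng.uniform()) < ll_prop - ll:   # flat prior in log space
            theta, ll = prop, ll_prop
        draws.append(np.exp(theta))
    return np.array(draws)   # posterior draws of (sigma_f, sigma_y)
```

Because the bootstrap filter's likelihood estimate is unbiased, plugging it into the acceptance ratio still targets the exact posterior, which is what makes this pairing attractive for models with unobserved fitness observed only through population shares.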
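For the third chapter, a bare-bones version of Q-learning as a behavioral choice model might look like the sketch below. The state space, payoff function, transitions, and softmax choice rule are stand-ins of my own; the point is only the update rule, in which action values are learned from experienced payoffs rather than computed from an optimal policy.

```python
import numpy as np

rng = np.random.default_rng(1)

n_states, n_actions = 5, 2
alpha, gamma, tau = 0.2, 0.9, 2.0   # learning rate, discount factor, choice precision

def payoff(s, a):
    # hypothetical flow payoff with noise
    return (s if a == 1 else n_states - 1 - s) + rng.normal(0.0, 0.5)

def step(s, a):
    # hypothetical transitions: action 1 moves the state up, action 0 down
    return min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)

Q = np.zeros((n_states, n_actions))   # learned action values
s, history = 0, []
for t in range(1000):
    z = tau * Q[s]
    p = np.exp(z - z.max())
    p /= p.sum()                      # softmax choice probabilities
    a = rng.choice(n_actions, p=p)
    r, s_next = payoff(s, a), step(s, a)
    # Q-learning: move Q(s, a) toward the experienced payoff plus the
    # discounted value of the best action at the next state
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    history.append((s, a))
    s = s_next
```

In an estimation exercise along the chapter's lines, (alpha, gamma, tau) would be treated as structural parameters, the softmax probabilities would enter the likelihood of the observed choice sequence, and an MCMC sampler would draw from their posterior.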
Files
Original bundle
- Name: MonteCalvo_oregon_0171A_10946.pdf
- Size: 1.23 MB
- Format: Adobe Portable Document Format