Learning, Evolution, and Bayesian Estimation in Games and Dynamic Choice Models

dc.contributor.advisor: Kolpin, Van
dc.contributor.author: Monte Calvo, Alexander
dc.date.accessioned: 2014-09-29T17:43:12Z
dc.date.available: 2014-09-29T17:43:12Z
dc.date.issued: 2014-09-29
dc.description.abstract: This dissertation explores the modeling and estimation of learning in strategic and individual choice settings. While learning has been studied extensively in economics, I introduce the concept into standard models in unorthodox ways. In each case, changing the perspective on what learning is drastically changes the standard model. Estimation proceeds using advanced Bayesian techniques that perform well on simulated data.

The first chapter proposes a framework called Experienced-Based Ability (EBA), in which players increase a strategy's future payoffs by using it today. This framework is then introduced into a model of differentiated duopoly in which firms can use price or quantity contracts, and I explore how the resulting equilibrium is affected by changes in model parameters.

The second chapter extends the EBA model into an evolutionary setting. This new model offers a simple and intuitive way to explain complicated dynamics theoretically. Moreover, this chapter demonstrates how to estimate posterior distributions of the model's parameters using a particle filter and a Metropolis-Hastings algorithm, a technique that can also be used to estimate standard evolutionary models. This allows researchers to recover estimates of unobserved fitness and skill across time while observing only population share data.

The third chapter investigates individual learning in a dynamic discrete choice setting. It relaxes the assumption that individuals base decisions on an optimal policy and investigates the importance of policy learning. Q-learning is proposed as a model of individual choice when optimal policies are unknown, and I demonstrate how it can be used in the estimation of dynamic discrete choice (DDC) models. Using Bayesian Markov chain Monte Carlo techniques on simulated data, I show that the Q-learning model performs well at recovering true parameter values and thus functions as an alternative structural DDC model for researchers who want to move away from the rationality assumption. In addition, the simulated data are used to illustrate possible issues with standard structural estimation when the rationality assumption is incorrect. Lastly, using marginal likelihood analysis, I demonstrate that the Q-learning model can be used to test for the significance of learning effects when this is a concern.
dc.identifier.uri: https://hdl.handle.net/1794/18341
dc.language.iso: en_US
dc.publisher: University of Oregon
dc.rights: All Rights Reserved.
dc.subject: Bayesian Estimation
dc.subject: Evolution
dc.subject: Learning
dc.subject: Q-Learning
dc.title: Learning, Evolution, and Bayesian Estimation in Games and Dynamic Choice Models
dc.type: Electronic Thesis or Dissertation
thesis.degree.discipline: Department of Economics
thesis.degree.grantor: University of Oregon
thesis.degree.level: doctoral
thesis.degree.name: Ph.D.
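
The second chapter's estimation strategy, as described in the abstract above, nests a particle filter inside a Metropolis-Hastings sampler (often called particle marginal Metropolis-Hastings). The dissertation's actual model and code are not reproduced in this record; the Python sketch below only illustrates the general technique on a hypothetical one-parameter Gaussian state-space model. The function names (pf_loglik, pmmh), the toy dynamics, and all tuning constants are illustrative assumptions, not the author's specification.

import numpy as np

def pf_loglik(theta, y, n_particles=500, rng=None):
    # Bootstrap particle filter estimate of log p(y | theta) for a toy
    # Gaussian state-space model: x_t = theta*x_{t-1} + v_t, y_t = x_t + e_t.
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.normal(0.0, 1.0, n_particles)                  # initial particle cloud
    loglik = 0.0
    for y_t in y:
        x = theta * x + rng.normal(0.0, 0.5, n_particles)  # propagate particles
        logw = -0.5 * ((y_t - x) / 0.3) ** 2 - np.log(0.3 * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                     # incremental log-likelihood
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                                         # multinomial resampling
    return loglik

def pmmh(y, n_iter=2000, step=0.1, seed=1):
    # Random-walk Metropolis-Hastings on theta, plugging the particle-filter
    # likelihood estimate into the acceptance ratio (flat prior assumed).
    rng = np.random.default_rng(seed)
    theta = 0.5
    ll = pf_loglik(theta, y, rng=rng)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = pf_loglik(prop, y, rng=rng)
        if np.log(rng.random()) < ll_prop - ll:            # MH accept/reject
            theta, ll = prop, ll_prop
        draws[i] = theta
    return draws

# Example: simulate data at theta = 0.8 and sample the posterior
rng = np.random.default_rng(42)
x, y = 0.0, []
for _ in range(100):
    x = 0.8 * x + rng.normal(0.0, 0.5)
    y.append(x + rng.normal(0.0, 0.3))
draws = pmmh(np.array(y))

The key idea is that the particle filter's unbiased likelihood estimate can stand in for the exact (intractable) likelihood in the acceptance ratio, so the chain still targets the correct posterior; only the filter's state model would change when moving to an evolutionary model observed through population shares.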
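The third chapter proposes Q-learning as a behavioral model of choice when optimal policies are unknown. As a point of reference for that technique, here is a minimal tabular Q-learning sketch with epsilon-greedy exploration; the environment interface step(s, a, rng), the state and action dimensions, and the learning-rate settings are hypothetical placeholders, not the dissertation's DDC specification.

import numpy as np

def q_learning(step, n_states, n_actions, episodes=1000,
               horizon=50, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    # Tabular Q-learning. `step(s, a, rng)` is a hypothetical environment
    # interface returning (next_state, reward).
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            if rng.random() < eps:                 # explore
                a = int(rng.integers(n_actions))
            else:                                  # exploit current estimates
                a = int(Q[s].argmax())
            s_next, r = step(s, a, rng)
            # Move Q(s, a) toward the one-step Bellman target
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

# Toy environment: action 1 in state 0 pays off and moves to state 1
def step(s, a, rng):
    r = 1.0 if (s == 0 and a == 1) else 0.0
    return (1 if a == 1 else 0), r

Q = q_learning(step, n_states=2, n_actions=2)

Because the agent updates value estimates from experience rather than solving for the optimal policy, early-sample behavior can look systematically suboptimal; this is exactly the feature the abstract argues standard structural estimation may misattribute if the rationality assumption is imposed.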

Files

Original bundle
Name: MonteCalvo_oregon_0171A_10946.pdf
Size: 1.23 MB
Format: Adobe Portable Document Format