Abstract
Payoff-based learning models are almost exclusively devised for finite action games, where players can test every action; designing such learning processes for continuous games is harder. We construct a stochastic learning rule for games with continuous action sets that requires no sophistication from the players and is simple to implement: players update their actions according to the variation in their own payoff between the current and previous action. We then analyze its behavior in several classes of continuous games and show that convergence to a stable Nash equilibrium is guaranteed in all games with strategic complements as well as in concave games, while convergence to Nash equilibrium occurs in all locally ordinal potential games as soon as Nash equilibria are isolated.
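The rule described in the abstract can be illustrated with a minimal sketch: each player observes only its own realized payoffs, keeps moving in the same direction when its payoff increased, reverses otherwise, and adds vanishing exploration noise. The quadratic game, the step-size schedule, and the names `payoffs` and `learn` below are illustrative assumptions for a simple concave game with a unique Nash equilibrium at (0, 0), not the paper's exact specification.

```python
import random

# Hypothetical two-player concave quadratic game (an assumption, not from the
# paper): u_i(x) = -(x_i - 0.5 * x_j)^2, with unique Nash equilibrium (0, 0).
def payoffs(x1, x2):
    return -(x1 - 0.5 * x2) ** 2, -(x2 - 0.5 * x1) ** 2

def learn(steps=20000, seed=0):
    """Payoff-based updating: each player compares its current payoff with its
    previous one and keeps (or reverses) its direction of motion, with a
    vanishing step size and vanishing exploration noise."""
    rng = random.Random(seed)
    x = [1.0, -1.0]                        # current actions
    prev_x = [1.1, -1.1]                   # previous actions
    prev_u = payoffs(*prev_x)
    for t in range(1, steps + 1):
        gamma = 1.0 / (t ** 0.7 + 10.0)    # vanishing step size
        u = payoffs(*x)
        new_x = []
        for i in range(2):
            du = u[i] - prev_u[i]          # change in own payoff
            dx = x[i] - prev_x[i]          # change in own action
            direction = 1.0 if du * dx > 0 else -1.0
            # Move the way that raised own payoff, plus exploration noise.
            new_x.append(x[i] + gamma * direction + gamma * rng.gauss(0.0, 1.0))
        prev_x, prev_u, x = x, u, new_x
    return x

x1, x2 = learn()
# Both actions should drift toward the Nash equilibrium (0, 0).
```

Note that each player uses no information about the opponent's action or payoff function, only the history of its own payoffs, which is the "minimal information" feature the abstract emphasizes.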
| Original language | English |
| --- | --- |
| Pages (from-to) | 1471-1508 |
| Number of pages | 38 |
| Journal | Theoretical Economics |
| Volume | 15 |
| Issue number | 4 |
| DOIs | |
| State | Published - Nov 2020 |
| Externally published | Yes |
Keywords
- C6
- C72
- D83
- Payoff-based learning
- continuous games
- stochastic approximation