On gradient-based methods for finding game-theoretic equilibria
| Talk Title | On gradient-based methods for finding game-theoretic equilibria |
|---|---|
| Speakers | Michael Jordan (UC Berkeley) |
| Conference | O’Reilly Artificial Intelligence Conference |
| Conf Tag | Put AI to Work |
| Location | San Jose, California |
| Date | September 10-12, 2019 |
| URL | Talk Page |
| Slides | Talk Slides |
| Video | Talk Video |
Statistical decisions are often given meaning in the context of other decisions, particularly when there are scarce resources to be shared. The aim is to blend gradient-based methodology with game-theoretic goals as part of a large “microeconomics meets machine learning” program. Michael Jordan details several recent results, including:

- how to define local optimality in nonconvex-nonconcave minimax optimization, and how such a definition relates to stochastic gradient methods;
- a gradient-based algorithm that finds Nash equilibria, and only Nash equilibria; and
- exploration-exploitation trade-offs for bandits involving competition over a scarce resource.
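As an illustrative sketch only (not the algorithm from the talk), simultaneous gradient descent-ascent (GDA) on a simple convex-concave game shows the basic mechanics of gradient-based equilibrium finding: the minimizing player descends its gradient while the maximizing player ascends. The game `f(x, y) = x^2 - y^2` and all step sizes here are hypothetical choices for demonstration.

```python
# Simultaneous gradient descent-ascent (GDA) on the convex-concave
# game f(x, y) = x^2 - y^2, whose unique Nash equilibrium is (0, 0).
# x is the minimizing player; y is the maximizing player.

def grad_x(x, y):
    return 2.0 * x       # partial derivative of f with respect to x

def grad_y(x, y):
    return -2.0 * y      # partial derivative of f with respect to y

def gda(x0, y0, lr=0.1, steps=200):
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        # x descends its gradient; y ascends its gradient, simultaneously.
        x, y = x - lr * gx, y + lr * gy
    return x, y

x_star, y_star = gda(1.0, -1.0)
print(x_star, y_star)  # both iterates shrink toward the equilibrium at (0, 0)
```

On this well-behaved game GDA converges cleanly, but on even a bilinear game like `f(x, y) = x * y` the same dynamics cycle or diverge, and in nonconvex-nonconcave problems it is not obvious what "local optimality" should even mean — which is exactly the gap the talk's results address.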