In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim is to establish a shared vocabulary between the two domains so as to facilitate developments at their intersection, and as evidence of the usefulness of this approach, we use recent developments in each field to make improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. First, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation: by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. We then consider the converse case and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule and show that this yields fictitious variational play, a mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
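The contrast between standard and moderated fictitious play described above can be illustrated with a minimal sketch. The game, payoffs, prior, and function names below are illustrative assumptions, not the paper's actual experimental setup: we use a Stag Hunt coordination game (where (stag, stag) is payoff-dominant but (hare, hare) is risk-dominant), a logit smooth best response, and a Beta posterior over the opponent's strategy, approximating the Bayesian integral by Monte Carlo sampling.

```python
import numpy as np

# Stag Hunt payoffs (illustrative): rows = my action (0=stag, 1=hare),
# columns = opponent action. (stag, stag) is payoff-dominant; against an
# uncertain opponent, hare is the safer (risk-dominant) choice.
PAYOFF = np.array([[5.0, 0.0],
                   [4.0, 2.0]])

def logit_response(p_stag, temperature=1.0):
    """Smooth (logit) best response to an opponent who plays stag w.p. p_stag."""
    q = np.array([p_stag, 1.0 - p_stag])
    u = PAYOFF @ q                      # expected payoff of each of my actions
    z = np.exp(u / temperature)
    return z / z.sum()                  # my mixed strategy over (stag, hare)

def standard_fp_response(opp_stag_count, rounds, temperature=1.0):
    """Standard fictitious play: smooth best response to the point
    estimate given by the empirical average of the opponent's plays."""
    return logit_response(opp_stag_count / rounds, temperature)

def moderated_fp_response(opp_stag_count, rounds, temperature=1.0,
                          n_samples=10_000, rng=None):
    """Moderated play: integrate the smooth best response over a Beta
    posterior on the opponent's strategy (Beta(1,1) prior + counts),
    rather than plugging in a single point estimate."""
    rng = rng if rng is not None else np.random.default_rng(0)
    samples = rng.beta(1 + opp_stag_count,
                       1 + rounds - opp_stag_count,
                       size=n_samples)
    # Because the logit response is nonlinear in the opponent's strategy,
    # averaging responses over the posterior differs from responding to
    # the posterior mean.
    return np.mean([logit_response(p, temperature) for p in samples], axis=0)
```

Both functions return a mixed strategy over (stag, hare); the difference between them is exactly the moderation step the abstract describes, i.e. integrating over the distribution of opponent strategies versus responding to its empirical average.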