Calendar for 15 April 2019
Roy Seminar (ADRES)
15/04/2019, from 17:00 to 18:30
Room R1-09, Campus Jourdan, 48 boulevard Jourdan, 75014 Paris
RENY Philip (University of Chicago)
Conditional ε-Equilibria of Multi-Stage Games with Infinite Sets of Signals and Actions
We extend Kreps and Wilson's concept of sequential equilibrium to games in which the sets of actions that players can choose from and the sets of signals that players may observe are infinite. A strategy profile is a conditional ε-equilibrium if, for any player and for any of his positive-probability signal events, the player's conditional expected utility is within ε of the best that the player can achieve by deviating. Perfect conditional ε-equilibria are defined by testing conditional ε-rationality also under nets of small perturbations of the players' strategies and of nature's probability function that can make any finite collection of signals outside a negligible set have positive probability. Every perfect conditional ε-equilibrium strategy profile is a subgame perfect ε-equilibrium, and, in finite games, limits of perfect conditional ε-equilibria as ε → 0 are sequential equilibrium strategy profiles. Because such limit strategies need not exist even in very "nice" infinite games, we consider instead their limit distributions over outcomes. We call such outcome distributions perfect conditional equilibrium distributions and establish their existence for a large class of regular projective games. Nature's perturbations can produce equilibria that seem unintuitive, so we consider two ways to limit the effects of those perturbations, using topologies on nature's states and on players' actions.
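The defining condition in the abstract can be written out as follows. This is a sketch in generic notation (σ for the strategy profile, S for a positive-probability signal event of player i, U_i for conditional expected utility), not notation taken from the paper itself:

```latex
% Conditional \varepsilon-equilibrium: for every player i and every
% signal event S of player i with positive probability under \sigma,
U_i(\sigma \mid S) \;\ge\; \sup_{\sigma_i'} \, U_i(\sigma_i', \sigma_{-i} \mid S) \;-\; \varepsilon
```

That is, conditional on any signal event he observes with positive probability, no player can gain more than ε by deviating unilaterally.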
Régulation et Environnement
15/04/2019, from 12:00 to 13:00
Room R1-13, Campus Jourdan, 75014 Paris
CALZOLARI Giacomo (European University Institute)
Artificial Intelligence, Algorithmic Pricing and Collusion
Written with Emilio Calvano, Vincenzo Denicolò, and Sergio Pastorello
Pricing algorithms are increasingly replacing human decision making in real marketplaces. To inform the competition policy debate on possible consequences, we run experiments with pricing algorithms powered by Artificial Intelligence in controlled environments (computer simulations).
In particular, we study the interaction among a number of Q-learning algorithms in the context of a workhorse oligopoly model of price competition with Logit demand and constant marginal costs. We show that the algorithms consistently learn to charge supra-competitive prices, without communicating with each other. The high prices are sustained by classical collusive strategies with a finite punishment phase followed by a gradual return to cooperation. This finding is robust to asymmetries in cost or demand and to changes in the number of players.
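The environment described in the abstract can be sketched as follows. This is a purely illustrative toy version, with made-up parameter values, a coarse price grid, and a minimal state (last period's prices), not the authors' actual experimental setup:

```python
import itertools
import math
import random

def logit_shares(prices, a=2.0, mu=0.25, a0=0.0):
    """Logit demand: each firm's share of a unit mass of consumers,
    given product quality a, substitutability mu, and outside option a0."""
    utils = [math.exp((a - p) / mu) for p in prices]
    denom = math.exp(a0 / mu) + sum(utils)
    return [u / denom for u in utils]

def simulate(n_firms=2, n_prices=5, cost=1.0, periods=5000, seed=0,
             alpha=0.15, gamma=0.95, eps=0.1):
    """Independent Q-learning pricers in a repeated Logit oligopoly.

    State  = tuple of last period's price indices (observed by all firms).
    Action = index into a discrete price grid at or above marginal cost.
    Reward = per-period profit (price - cost) * demand share.
    """
    rng = random.Random(seed)
    grid = [cost + 0.25 * k for k in range(n_prices)]
    states = list(itertools.product(range(n_prices), repeat=n_firms))
    Q = [{s: [0.0] * n_prices for s in states} for _ in range(n_firms)]
    state = tuple(rng.randrange(n_prices) for _ in range(n_firms))
    for _ in range(periods):
        acts = []
        for i in range(n_firms):
            if rng.random() < eps:          # epsilon-greedy exploration
                acts.append(rng.randrange(n_prices))
            else:                           # exploit current estimates
                q = Q[i][state]
                acts.append(q.index(max(q)))
        prices = [grid[a] for a in acts]
        profits = [(p - cost) * s
                   for p, s in zip(prices, logit_shares(prices))]
        nxt = tuple(acts)
        for i in range(n_firms):            # standard Q-learning update
            target = profits[i] + gamma * max(Q[i][nxt])
            Q[i][state][acts[i]] += alpha * (target - Q[i][state][acts[i]])
        state = nxt
    return grid, [grid[a] for a in state]

grid, final_prices = simulate()
```

Whether such agents settle above the competitive price, and whether deviations trigger the punishment-and-return dynamics the paper documents, depends on the learning parameters and the length of training; this sketch only shows the mechanics of the environment.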