Theoretical Issues in Probabilistic Artificial Intelligence

Michael Kearns
AT&T Labs

In the last decade or so, many of the central problems of "classical" artificial intelligence --- such as learning, planning and logical inference --- have been reformulated in frameworks with statistical or probabilistic underpinnings. The benefits of this trend include the adoption of a common set of mathematical tools for the various AI subdisciplines, increased attention to central algorithmic issues, an emphasis on approximation algorithms for some notoriously hard AI problems... and excellent opportunities for researchers from the theoretical computer science community to contribute to and influence AI.

In this tutorial, I will survey the probabilistic frameworks and the central computational problems posed in several well-developed areas of AI. I will describe some of the algorithms proposed for these problems, give an overview of what is formally known about them (and also what is suspected but not proven), and try to give a flavor of the mathematical techniques involved. The tutorial will be self-contained, designed to be accessible to anyone in the STOC/FOCS community, with an emphasis on the interesting open problems. Likely topics include:

Graphical Models and Probabilistic Inference: graph-theoretic representations of distributions; the junction tree algorithm for exact inference in sparse networks; variational methods from statistical physics for approximate inference in dense networks.
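The junction tree algorithm itself is beyond a few lines, but a brute-force sketch of exact inference in a toy chain-structured network conveys the underlying computation that such algorithms organize efficiently. All numbers and variable names below are invented for illustration, not taken from the tutorial:

```python
from itertools import product

# Toy Bayesian network: a chain A -> B -> C over binary variables.
p_a = {0: 0.6, 1: 0.4}                       # P(A)
p_b_given_a = {0: {0: 0.7, 1: 0.3},          # P(B | A)
               1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1},          # P(C | B)
               1: {0: 0.4, 1: 0.6}}

def joint(a, b, c):
    # The joint distribution factorizes along the graph structure.
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def marginal_c(c):
    # Naive exact inference: sum over all assignments of A and B.
    # Exploiting the chain structure (as the junction tree algorithm
    # does in general sparse networks) would avoid this exponential
    # enumeration.
    return sum(joint(a, b, c) for a, b in product((0, 1), repeat=2))

# e.g. marginal_c(0) -> 0.65 (up to floating point)
```

On a chain the sum decomposes so that each variable can be eliminated in turn; the junction tree algorithm generalizes exactly this idea to arbitrary sparse networks.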

Markov Decision Processes for Planning and Reinforcement Learning: planning in MDPs via value iteration, linear programming, and sparse sampling; the Q-learning algorithm; the exploration-exploitation trade-off and mixing times.
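As a concrete illustration of value iteration, here is a minimal sketch on a made-up two-state MDP; the states, actions, transition probabilities, and rewards are invented for illustration:

```python
# Toy MDP: states 0 and 1, actions 0 and 1.
# P[s][a] is a list of (next_state, probability) pairs; R[s][a] is the
# immediate reward for taking action a in state s.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
    1: {0: [(0, 1.0)],           1: [(1, 0.5), (0, 0.5)]},
}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 2.0}}
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality update
#   V(s) <- max_a [ R(s,a) + gamma * sum_t P(t | s,a) V(t) ].
V = {0: 0.0, 1: 0.0}
for _ in range(1000):
    V = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in (0, 1))
         for s in P}
```

Since the Bellman update is a contraction with modulus gamma, the iterates converge geometrically to the optimal value function, from which a greedy optimal policy can be read off.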

Supervised Learning: basics and recent advances in boosting; maximum margin analyses of generalization error.
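Boosting can be illustrated with a minimal AdaBoost sketch over threshold "stumps" on one-dimensional data; the dataset, thresholds, and hypothesis class below are invented for illustration:

```python
import math

# Toy training set: points on the line with +/-1 labels.
X = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
y = [1, 1, -1, -1, 1, 1]
n = len(X)

def stump(theta, sign):
    # Weak hypothesis: predict `sign` above the threshold, else -sign.
    return lambda x: sign if x > theta else -sign

# Small pool of candidate weak hypotheses (theta = 0.0 gives the
# constant hypotheses, since all points are positive).
candidates = [stump(t, s) for t in (0.0, 0.25, 0.65) for s in (1, -1)]

ensemble = []  # list of (weight alpha_t, hypothesis h_t)

def predict(x):
    # Weighted majority vote of the ensemble built so far.
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

w = [1.0 / n] * n  # distribution over training examples
for _ in range(10):
    # Choose the weak hypothesis with lowest weighted training error.
    errs = [sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            for h in candidates]
    eps, h = min(zip(errs, candidates), key=lambda p: p[0])
    if eps == 0 or eps >= 0.5:
        break  # weak learning assumption no longer holds
    alpha = 0.5 * math.log((1 - eps) / eps)
    ensemble.append((alpha, h))
    # Reweight: misclassified examples gain weight, then renormalize.
    w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
    Z = sum(w)
    w = [wi / Z for wi in w]
    if all(predict(xi) == yi for xi, yi in zip(X, y)):
        break  # ensemble already consistent with the training set
```

Each individual stump misclassifies at least two of the six points, yet after a few rounds the weighted vote fits the training set exactly; the maximum margin analyses mentioned above explain why such ensembles also tend to generalize well.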