
D3 - Learning Bayesian Networks from Data

Monday, AM

Nir Friedman & Moises Goldszmidt

Bayesian networks are compact and computationally efficient representations of probability distributions. Over the last decade, they have become the method of choice for the representation of uncertainty in artificial intelligence. Today, they play a crucial role in modern expert systems, diagnosis engines, and decision support systems.

In recent years, there has been significant progress in methods and algorithms for inducing Bayesian networks directly from data. Learning these models is desirable for several reasons. First, there is a wide array of off-the-shelf tools that can apply the learned models for prediction, decision making, and diagnosis. Second, learning Bayesian networks provides a principled approach to semi-parametric density estimation, data analysis, pattern classification, and modeling. Third, in some situations they allow us to give a causal interpretation of the observed data. Fourth, they allow us to combine knowledge acquired from experts with information from raw data.

In this tutorial we will start by reviewing the basic concepts behind Bayesian networks. We will then describe the fundamental theory and algorithms for inducing these networks from data, including how to learn the parameters and structure of the network, how to handle missing values and hidden variables, and how to learn causal models. Finally, we will discuss advanced methods, open research areas, and applications of these learning methods, including pattern matching and classification, speech recognition, data analysis, and scientific discovery.
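To give a flavor of the simplest case above, parameter learning for a network of known structure from complete data, the following is a minimal Python sketch based on maximum-likelihood (relative-frequency) estimation. The variable names and toy dataset are hypothetical illustrations, not material from the tutorial, and the tutorial does not prescribe any particular implementation.

    # Minimal sketch: maximum-likelihood estimation of the conditional
    # probability tables of a discrete Bayesian network, assuming the
    # structure is known and the data are complete.
    # The network and data below are hypothetical examples.
    from collections import Counter

    # Assumed structure: Rain -> WetGrass (Rain has no parents).
    data = [
        {"Rain": 1, "WetGrass": 1},
        {"Rain": 1, "WetGrass": 1},
        {"Rain": 0, "WetGrass": 0},
        {"Rain": 0, "WetGrass": 1},
    ]

    def mle_cpt(data, child, parents):
        """Estimate P(child | parents) by relative frequencies."""
        joint = Counter()     # counts of (parent configuration, child value)
        marginal = Counter()  # counts of parent configuration alone
        for row in data:
            parent_vals = tuple(row[p] for p in parents)
            joint[(parent_vals, row[child])] += 1
            marginal[parent_vals] += 1
        return {(pv, cv): n / marginal[pv] for (pv, cv), n in joint.items()}

    print(mle_cpt(data, "Rain", []))            # prior P(Rain)
    print(mle_cpt(data, "WetGrass", ["Rain"]))  # conditional P(WetGrass | Rain)

Handling missing values, hidden variables, and unknown structure requires the more elaborate techniques (e.g., scoring and search, expectation-maximization) covered in the tutorial itself.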

Prerequisite knowledge:
This tutorial is intended for people interested in data analysis, data mining, pattern recognition, machine learning and reasoning under uncertainty. Familiarity with the basic concepts of probability theory will be helpful.

Nir Friedman received a Ph.D. in computer science from Stanford in 1997, was a postdoctoral scholar in the Computer Science Division at the University of California, Berkeley until late 1998, and is currently a faculty member in the Institute of Computer Science at the Hebrew University, Jerusalem. In recent years, he has been working extensively on inference, planning, and learning with probabilistic representations of uncertainty. This work focuses mainly on using Bayesian networks for concept learning, data mining, reinforcement learning, and, more recently, computational biology.

Moises Goldszmidt is a senior computer scientist at SRI International, where he conducts research and directs several projects in the area of learning and adaptive systems. From 1992 to 1996 he was a research scientist with the Rockwell Science Center in Palo Alto. He received a Ph.D. in computer science from the University of California, Los Angeles in 1992. Dr. Goldszmidt has numerous publications on topics related to representation and reasoning under uncertainty, automatic induction of Bayesian networks, decision making, and nonmonotonic reasoning.
