Inferences of this type are easy for people but hard for robots. Part of the problem is that the modeling languages used in AI do not deal with uncertainty in a natural way. Logical languages, for example, do not handle uncertainty at all, while probabilistic languages handle it at a precision and cost that are seldom needed.
Default languages are a new type of modeling language that aims to fill the gap between logical and probabilistic languages, providing modelers with the means to map soft inputs into soft outputs in a meaningful and principled way. Default models combine the convenience of logical languages, the flexibility and clarity of a probabilistic semantics, and the transparency of argumentation algorithms. The goal of the tutorial is to provide a coherent and self-contained survey of this work.
We view default reasoning in two ways: as an extended form of deductive inference and as a qualitative form of probabilistic inference. In each case, we lay out the main concepts, intuitions, and algorithms. We then consider the specific problems that arise when reasoning about causality and time, and analyze what works, what does not, and why. We draw on the basic ideas underlying two probabilistic models, Bayesian Networks and Markov Processes, which allows us to shed light on a number of issues such as the distinction between laws and facts, the role of causality, and the conditions for efficient reasoning.
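As a concrete illustration of the second reading (a sketch in the style of epsilon-semantics, not a construction developed in this abstract), a default such as "birds fly" can be taken to stand for a conditional probability arbitrarily close to one. The classical Tweety example then amounts to the constraints

    P(fly | bird) ≥ 1 − ε,   P(fly | penguin) ≤ ε,   P(bird | penguin) ≥ 1 − ε,

which are jointly satisfiable for sufficiently small ε. They sanction the conclusion that Tweety flies when all that is known is that Tweety is a bird, and retract that conclusion when Tweety is also known to be a penguin. It is this ability to map soft inputs into soft outputs, without committing to precise numbers, that default languages aim to capture.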
We also illustrate the use of default languages for modeling in areas such as qualitative reasoning, decision making, and planning and control.