Robots are moving into our everyday life for tasks like entertainment,
cleaning, and delivery. To arrive at such systems, a number of key
scientific questions must be answered and technological breakthroughs
must be achieved. The areas of service robotics and RoboCup each define
common tasks that allow systems to be evaluated and that promote the
integration of robotics and AI. In this talk the application domains
are introduced, recent results are reviewed, and issues for future
generations are outlined.
Model-based diagnosis techniques have started to enter industrial
applications and commercial tools. We focus on pointing out the
reasons behind these successes, in terms of both technical solutions
and industrial needs. The lessons learned, together with the open
problems still hampering wider application, suggest directions for
future theoretical and practical research.
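The abstract surveys adoption rather than a single algorithm, but the core idea fits in a few lines. The sketch below (components, values, and the fault model are all illustrative) performs consistency-based diagnosis on the classic two-multiplier/one-adder circuit: a candidate set of components is a diagnosis if assuming exactly those components abnormal restores consistency with the observations, and only minimal such sets are kept.

    from itertools import combinations

    # Illustrative mini-circuit: two multipliers feeding an adder.  Each
    # component maps named input wires to one output wire.
    COMPONENTS = {
        "M1": (("a", "c"), "x", lambda a, c: a * c),
        "M2": (("b", "d"), "y", lambda b, d: b * d),
        "A1": (("x", "y"), "f", lambda x, y: x + y),
    }
    INPUTS = {"a": 3, "b": 2, "c": 2, "d": 3}
    OBSERVED = {"f": 10}  # nominal behaviour would give f = 12

    def consistent(abnormal):
        # Forward propagation: a wire driven by a component assumed
        # abnormal becomes unknown (None); every observed wire whose value
        # is still known must then match the observation.
        wires = dict(INPUTS)
        changed = True
        while changed:
            changed = False
            for name, (ins, out, fn) in COMPONENTS.items():
                if out in wires:
                    continue
                if name in abnormal:
                    wires[out] = None
                    changed = True
                elif all(w in wires for w in ins):
                    vals = [wires[w] for w in ins]
                    wires[out] = None if None in vals else fn(*vals)
                    changed = True
        return all(wires.get(w) in (None, v) for w, v in OBSERVED.items())

    def minimal_diagnoses():
        found = []
        for size in range(len(COMPONENTS) + 1):
            for cand in combinations(COMPONENTS, size):
                if any(set(d) <= set(cand) for d in found):
                    continue  # a smaller diagnosis already covers this
                if consistent(set(cand)):
                    found.append(cand)
        return found

    print(minimal_diagnoses())  # [('M1',), ('M2',), ('A1',)]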
While the study of machine intelligence has focused on the programming
of general-purpose computers, digital logic represents a small subset
of the latent capability of natural systems to manipulate
information. I present some of the remarkable computational tasks that
can be performed by the evolution of simple classical and quantum
systems, and consider the implications for inference and interfaces of
bringing rich sensory information into more conventional computing
environments.
Simultaneously with the information "explosion" in the last few
decades there has been a corresponding "explosion" in higher education
in most countries. This growth in the number of students has essentially
followed an "extrapolation" of traditional teaching modes. There
have, however, been a number of attempts to apply modern electronic
tools to promote a change described as "from teaching to learning".
In a joint effort, Stanford University and selected Swedish
universities are promoting a shift towards learning through Learning
Laboratories. The talk will illustrate some basic ideas and concepts
behind this collaboration and the Learning Laboratories.
For two decades, Bayesian networks constructed by experts have been
used in intelligent systems with a fair amount of success. More
recently, researchers have developed techniques for constructing
Bayesian networks (both parameters and structure) from a combination
of expert knowledge and data. These techniques can significantly
reduce the cost of building an intelligent system and can be used to
identify causal relationships from non-experimental data - an
important breakthrough for science. I will describe some of these
techniques, concentrating on methods borrowed from Bayesian
statistics, and discuss real-world applications.
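To give a flavour of the Bayesian-statistics machinery, consider the simplest case: learning a single parameter from an expert prior plus data. In the sketch below (all numbers invented), the expert's judgment is encoded as Beta pseudo-counts and combined with observed counts; learning full networks applies the same conjugate update to each row of each conditional probability table, and related marginal-likelihood computations score candidate structures.

    # Beta-Bernoulli update: the conjugate-prior calculation behind
    # parameter learning from expert knowledge plus data.

    def posterior_predictive(alpha, beta, positives, negatives):
        # Probability that the next case is positive, given a
        # Beta(alpha, beta) prior and the observed counts.
        return (alpha + positives) / (alpha + beta + positives + negatives)

    # Expert: P(symptom | disease) is about 0.7, held with the strength
    # of ten imagined observations.
    alpha, beta = 7.0, 3.0
    # Data: symptom present in 58 of 100 records with the disease.
    print(posterior_predictive(alpha, beta, 58, 42))  # 0.5909...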
The optimization methods of operations research and the constraint
satisfaction methods of artificial intelligence have a unifying theme:
both fields exploit the fundamental and related dualities of search
vs. inference and strengthening vs. relaxation. This allows the two
fields to be seen as special cases of a more general approach and
suggests new methods that fit into neither OR nor AI.
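To make the duality concrete, here is a toy sketch (not taken from the talk) that interleaves inference, deleting values that survive in no solution of a local relaxation, with search, branching to strengthen the problem. OR's branch-and-bound and AI's backtracking with propagation both instantiate this loop.

    def solve(domains, constraints):
        # Inference: filter each domain against every binary constraint,
        # i.e. reason on a relaxation of the full problem.
        domains = {v: set(d) for v, d in domains.items()}
        changed = True
        while changed:
            changed = False
            for (x, y), ok in constraints.items():
                for vx in list(domains[x]):
                    if not any(ok(vx, vy) for vy in domains[y]):
                        domains[x].discard(vx)
                        changed = True
        if any(not d for d in domains.values()):
            return None  # the relaxation alone proves infeasibility
        if all(len(d) == 1 for d in domains.values()):
            return {v: next(iter(d)) for v, d in domains.items()}
        # Search: branch on an undecided variable, strengthening the problem.
        var = min((v for v in domains if len(domains[v]) > 1),
                  key=lambda v: len(domains[v]))
        for val in sorted(domains[var]):
            trial = dict(domains)
            trial[var] = {val}
            solution = solve(trial, constraints)
            if solution is not None:
                return solution
        return None

    # Toy problem: x < y and y != z, all domains {1, 2, 3}.
    doms = {"x": {1, 2, 3}, "y": {1, 2, 3}, "z": {1, 2, 3}}
    cons = {
        ("x", "y"): lambda vx, vy: vx < vy,
        ("y", "x"): lambda vy, vx: vx < vy,
        ("y", "z"): lambda vy, vz: vy != vz,
        ("z", "y"): lambda vz, vy: vy != vz,
    }
    print(solve(doms, cons))  # {'x': 1, 'y': 2, 'z': 1}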
The representation of motion is of central importance in many
artificial intelligence-related fields such as robotics, computer
graphics, virtual reality, neurophysiology, and so forth. A crucial
and not yet completely understood issue is, however, the measurement
of motion. Computer vision has proposed a paradigm called "dynamic
vision". Within this paradigm, the vast majority of solutions consider
a single camera. In this talk we advocate that a pair of uncalibrated
cameras should be preferred. The motion measurement and
representation derived from such a camera pair are more tractable from
a mathematical point of view and can be used in a wider range of
applications, such as visual guidance of robots and vehicles, visual
surveillance, and virtualized reality.
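A standard building block of such uncalibrated two-view methods is the fundamental matrix F, which links matched image points x and x' through x'^T F x = 0. The sketch below gives only the linear core of the classical eight-point estimate; practical systems also normalize the coordinates and reject outliers.

    import numpy as np

    def eight_point(pts1, pts2):
        # pts1, pts2: (N, 2) arrays of matched pixel coordinates, N >= 8.
        x, y = pts1[:, 0], pts1[:, 1]
        xp, yp = pts2[:, 0], pts2[:, 1]
        # Each correspondence contributes one row of the system A f = 0.
        A = np.column_stack([xp * x, xp * y, xp, yp * x, yp * y, yp,
                             x, y, np.ones_like(x)])
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)  # least-squares null vector of A
        # Enforce rank 2, the defining property of a fundamental matrix.
        U, S, Vt = np.linalg.svd(F)
        S[2] = 0.0
        return U @ np.diag(S) @ Vt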
The rational approach to pharmaceutical drug design begins with an
investigation of the relationship between chemical structure and
biological activity. Information gained from this analysis is used to
aid the design of new or improved drugs. Computational chemists
involved in rational drug design routinely use an array of programs to
compute geometric and chemical characteristics of molecules. In this
talk I describe areas of computer-aided drug design that are important
to computational chemists but are also rich in algorithmic problems
and have attracted the attention of computer scientists.
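As one small example of such a geometric characteristic, the sketch below computes a molecule's mass-weighted radius of gyration from 3D atomic coordinates (the four-atom fragment shown is invented).

    import numpy as np

    def radius_of_gyration(coords, masses):
        # coords: (N, 3) atom positions; masses: (N,) atomic masses.
        center = np.average(coords, axis=0, weights=masses)
        sq_dist = np.sum((coords - center) ** 2, axis=1)
        return np.sqrt(np.average(sq_dist, weights=masses))

    coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0],
                       [1.5, 1.2, 0.0], [0.0, 1.2, 0.9]])
    masses = np.array([12.0, 12.0, 16.0, 14.0])  # C, C, O, N
    print(radius_of_gyration(coords, masses))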
Boosting is a general method for producing a very accurate
classification rule by combining rough and moderately inaccurate
"rules of thumb". While rooted in a theoretical framework of machine
learning, boosting has been found empirically to perform rather
well. In this talk, I will introduce the boosting algorithm AdaBoost
and explain the underlying theory of boosting, including an
explanation of why boosting often does not suffer from overfitting. I
will also describe some recent applications of boosting.
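For readers who want the algorithm itself, here is a compact sketch of AdaBoost over one-feature decision stumps, following the standard description (labels in {-1, +1}; the brute-force stump search is for clarity, not speed).

    import numpy as np

    def stump_predict(X, feat, thresh, sign):
        return sign * np.where(X[:, feat] <= thresh, 1, -1)

    def adaboost(X, y, rounds=20):
        n = len(y)
        w = np.full(n, 1.0 / n)  # one weight per training example
        ensemble = []
        for _ in range(rounds):
            best = None  # exhaustively pick the lowest weighted-error stump
            for feat in range(X.shape[1]):
                for thresh in np.unique(X[:, feat]):
                    for sign in (1, -1):
                        err = w @ (stump_predict(X, feat, thresh, sign) != y)
                        if best is None or err < best[0]:
                            best = (err, feat, thresh, sign)
            err, feat, thresh, sign = best
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
            pred = stump_predict(X, feat, thresh, sign)
            w *= np.exp(-alpha * y * pred)  # upweight the mistakes
            w /= w.sum()
            ensemble.append((alpha, feat, thresh, sign))
        return ensemble

    def predict(ensemble, X):
        votes = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
        return np.sign(votes)

    # Tiny made-up dataset with one informative feature.
    X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
    y = np.array([1, 1, 1, -1, -1, -1])
    print(predict(adaboost(X, y, rounds=5), X))  # [ 1.  1.  1. -1. -1. -1.]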
This talk presents Multilingual Natural Language Generation (M-NLG),
which is proving successful at achieving the same goals as machine
translation (the more familiar alternative technology for automating
multilingual document production) while avoiding many of its pitfalls.
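A toy illustration of the contrast (not any particular system's design): the same language-neutral message is realized directly in each output language, rather than being generated once and then machine-translated.

    # Hypothetical message content and per-language templates.
    message = {"id": 1842, "date": "2001-05-03"}
    templates = {
        "en": "Order {id} will be shipped on {date}.",
        "fr": "La commande {id} sera expédiée le {date}.",
        "sv": "Order {id} skickas den {date}.",
    }
    for lang, tpl in templates.items():
        print(lang + ": " + tpl.format(**message))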
Language processing has large practical potential, particularly once
we realize that it can be integrated with the other modalities a
computer makes available. Intelligent interfaces are artifacts that
often embody these concepts in practice. Some prototypes are
presented and challenges for the future are discussed.
Moshe Tennenholtz, Technion - Israel Institute of Technology
Mechanism design is the branch of economics and game theory that deals
with the design of economic settings and protocols. In this talk we
review some of the mechanism design literature and discuss some
essential steps in the adaptation of economic mechanisms to
non-cooperative computational environments, such as the Internet.
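As a concrete reminder of what a mechanism is, the sketch below implements the textbook Vickrey (second-price) auction; its dominant-strategy truthfulness is one property that makes such mechanisms attractive for open environments like the Internet.

    def vickrey_auction(bids):
        # bids: dict from bidder name to bid.  The highest bidder wins but
        # pays the second-highest bid, so truthful bidding is dominant.
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1]
        return winner, price

    print(vickrey_auction({"alice": 10, "bob": 8, "carol": 5}))  # ('alice', 8)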