21st Conference on
Uncertainty in Artificial Intelligence
|Decision Theory Without
Larry Blume, Cornell University
|In almost all current approaches to decision making, it is assumed that a decision maker (DM) starts with a set of states and a set of outcomes, and chooses among a rather rich set of acts, which are functions from states to outcomes. However, most interesting decision problems do not come with a state space and an outcome space. Indeed, in complex problems it is often far from clear what the state space and outcome space would be. An alternate foundation for decision making is suggested, where acts are programs. Programs can be given semantics as functions from states to outcomes. A representation theorem is proved that generalizes standard representation theorems in the literature, showing that if the DM's preference order on acts (programs) satisfies appropriate postulates, then there exist a set S of states, a set O of outcomes, a way of viewing programs as functions from S to O, a probability on S, and a utility function on O, such that the DM prefers program a to program b iff the expected utility of a is higher than that of b. Thus, the state space and outcome space are subjective, just like the probability and utility; they are not part of the description of the problem. A number of benefits of this generalization are discussed.|
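The conclusion of the representation theorem can be illustrated concretely. The following sketch (not from the talk; the states, outcomes, and numbers are all invented for illustration) shows how, once a subjective state space S, outcome space O, probability on S, and utility on O are in hand, an act is a function from states to outcomes and is ranked by its expected utility:

```python
# Illustrative sketch: ranking acts by subjective expected utility.
# All names and numbers below are hypothetical.

def expected_utility(act, prob, util):
    """Return E[u(act(s))] under the probability `prob` over states."""
    return sum(p * util[act(s)] for s, p in prob.items())

prob = {"rain": 0.3, "sun": 0.7}   # subjective probability on S
util = {"wet": 0.0, "dry": 1.0}    # utility on O

take_umbrella = lambda s: "dry"                         # dry in every state
go_without = lambda s: "wet" if s == "rain" else "dry"  # wet only if it rains

# The DM prefers act a to act b iff expected_utility(a) > expected_utility(b).
print(expected_utility(take_umbrella, prob, util))
print(expected_utility(go_without, prob, util))
```

The point of the theorem is that S, O, `prob`, and `util` need not be given in advance: they are constructed from the preference order itself.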
|Larry Blume is a Professor of Economics at Cornell University. He received his PhD from UC Berkeley in 1977. Before joining the Economics Department at Cornell University he spent ten years at the University of Michigan. He has written extensively on game theory, general equilibrium theory and decision theory.|
Algorithms in Structural Molecular Biology and Proteomics
Bruce R. Donald, Dartmouth
| Some of the most challenging and influential
opportunities for UAI techniques arise in developing and applying
information technology to understand the molecular machinery of the cell.
Our recent work (and work by others) shows that many UAI algorithms may be
fruitfully applied to the challenges of computational molecular biology.
UAI research may lead to computer systems and algorithms that are useful
in structural molecular biology, proteomics, and rational drug design.
Concomitantly, a wealth of interesting computational problems arise in proposed methods for discovering new pharmaceuticals. In this talk, I'll discuss some recent results from my lab, including new algorithms for interpreting X-ray crystallography and NMR (nuclear magnetic resonance) data, disease classification using mass spectrometry of human serum, and protein redesign. Our algorithms have recently been used, respectively, to reveal the enzymatic architecture of organisms high on the CDC bioterrorism watch-list, for probabilistic cancer classification from human peripheral blood, and to redesign an antibiotic-producing enzyme to bind a novel substrate. I'll overview these projects, highlighting instances of the general problem of recovering structure from uncertain data. For example, in our work on computational methods for NMR structural biology, automated resonance assignment is formulated as an expectation/maximization geometric matching problem under noise, and high-throughput detection of protein structural homology by NMR can be viewed as a combinatorial optimization to minimize KL-distance on a Lie group.
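The last formulation mentioned above, minimizing a KL-distance, rests on the Kullback-Leibler divergence as a mismatch score between distributions. The sketch below is a generic illustration of that score, not the Donald lab's algorithm; the example distributions are invented:

```python
import math

# Generic sketch: Kullback-Leibler divergence D_KL(p || q) between two
# discrete distributions, usable as a mismatch score between an observed
# and a predicted distribution (e.g. over discretized structural features).

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions given as aligned lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

observed = [0.5, 0.3, 0.2]    # hypothetical observed distribution
predicted = [0.4, 0.4, 0.2]   # hypothetical model prediction
print(kl_divergence(observed, predicted))
```

The divergence is zero exactly when the two distributions agree, and grows as they diverge, which is what makes it a natural objective to minimize when matching predictions to noisy data.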
|Bruce Donald is the Joan P. and Edward
J. Foley Jr 1933 Professor in the Computer Science Department at
Dartmouth. He holds a joint appointment in the Department of Chemistry and
the Department of Biological Sciences. From 1987 to 1998, Donald was a
professor in the Cornell University Computer Science Department, with
a joint appointment in Applied Mathematics. He received a B.A. from Yale
University, and a Ph.D. from MIT. Donald has worked in research, visiting,
and faculty positions at Harvard, Stanford, and Interval Research Corporation.
Donald has been a National Science Foundation Presidential Young Investigator. He has worked in several research areas, including Robotics, Microelectromechanical Systems (MEMS), Computational Biology, Graphics, and Geometric Algorithms. Donald's latest research interest is in computational structural biology and drug design. He was awarded a Guggenheim Fellowship for his work on algorithms for structural proteomics. Research in the Donald laboratory is funded by the National Institutes of Health under the auspices of the National Institute of General Medical Sciences Protein Structure Initiative.
|A Walk on Mars: Managing
Uncertainty Through Model-based Programming
Brian C. Williams, MIT
The vision of Mars rovers, pursued for the last two decades, has seemingly become all but routine through the success of Spirit and Opportunity. As a new vision, in another two decades we could see Mars airplanes scouting Valles Marineris for sites of ancient water springs, followed by quadruped or biped robots that deftly traverse this “grand canyon” to sites of scientific interest. Yet the losses and close calls experienced by virtually every Mars mission underscore how daunting this new, bi-decadal goal is. Achieving it, and similar robotic challenges on Earth, requires a new paradigm for programming robotic systems, one that elevates the programmer above the level of micro-managing responses to uncertain outcomes.
This talk introduces a new approach, called Model-based Programming, in which a human programs individual robots or robot teams at a qualitative level, in terms of the robots’ intended state evolutions. This program is then executed robustly, while adapting to uncertainty and failure, by continuously reasoning from models of the robots and their environment. The models used by the executive, called probabilistic constraint automata, combine Markov processes, mixed discrete/continuous constraints, and hierarchical, timed automata, in order to represent complex, concurrent, stochastic processes, while the reasoning algorithms underlying the executive efficiently solve hybrid logic/optimization problems through forward, conflict-directed search. In this talk, we develop the concepts underlying model-based programming, using three examples related to the Mars exploration challenge: a self-diagnosing explorer that reasons about its mixed software/hardware subsystems, a team of cooperating airplanes, and a biped robot that is adept at navigating rough terrain.
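A core step in such an executive is estimating which mode a component is in from noisy observations. The following is a minimal sketch of one discrete estimation step over a probabilistic automaton; it is not the actual executive, and all modes, transition probabilities, and likelihoods are hypothetical:

```python
# Minimal sketch: one Bayes-filter step of mode estimation for a component
# whose modes include explicit failure modes. All numbers are made up.

modes = ["nominal", "valve_stuck", "sensor_fail"]   # hypothetical modes

# P(next mode | current mode): failures are rare a priori.
trans = {
    "nominal":     {"nominal": 0.98, "valve_stuck": 0.01, "sensor_fail": 0.01},
    "valve_stuck": {"nominal": 0.00, "valve_stuck": 1.00, "sensor_fail": 0.00},
    "sensor_fail": {"nominal": 0.00, "valve_stuck": 0.00, "sensor_fail": 1.00},
}

# P(observation "no flow" | mode): a stuck valve explains the reading best.
likelihood = {"nominal": 0.001, "valve_stuck": 0.90, "sensor_fail": 0.40}

def estimate_step(belief, obs_likelihood):
    """Predict through the transition model, then condition on the observation."""
    predicted = {m: sum(belief[m0] * trans[m0][m] for m0 in modes) for m in modes}
    posterior = {m: predicted[m] * obs_likelihood[m] for m in modes}
    z = sum(posterior.values())
    return {m: posterior[m] / z for m in modes}

belief = {"nominal": 1.0, "valve_stuck": 0.0, "sensor_fail": 0.0}
belief = estimate_step(belief, likelihood)
print(max(belief, key=belief.get))   # most likely diagnosis after "no flow"
```

An unlikely failure mode can still become the best diagnosis when it explains the observation far better than the nominal mode does, which is the essence of model-based diagnosis.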
|Brian Williams's research
concentrates on model-based autonomy – the creation of long-lived
autonomous systems that are able to robustly explore, while commanding,
diagnosing and repairing themselves, using fast, commonsense reasoning.
Current research focuses on model-based programming, cooperative robotics
and optimal reasoning: model-based programming embeds commonsense within
robots and everyday devices, by incorporating model-based deductive
capabilities within traditional embedded programming languages;
cooperative robotics extends model-based autonomy to robotic networks of
land, air and space vehicles, and optimal reasoning uses conflict
learning, structural decomposition and symbolic encodings, to quickly
reason about what is likely and what is best.
Brian Williams received his SB, SM, and PhD from MIT in Computer Science and Electrical Engineering in 1989. Starting in the 1980s, he pioneered multiple-fault, model-based diagnosis and repair at Xerox PARC and NASA through the GDE, Sherlock, Livingstone, and Burton systems. At the NASA Ames Research Center he formed the Autonomous Systems and Robotics Branch, co-invented the Remote Agent model-based autonomous system, and was a member of the NASA Deep Space One probe flight team, which took Remote Agent to flight in 1999.
He has received three best-paper prizes and one distinguished-paper prize for research on qualitative algebras, propositional inference, model-based monitoring, and soft constraints. He has served on the editorial boards of JAIR, the Journal of Field Robotics, and AAAI Press, in addition to acting as a guest editor of AIJ. He has received several NASA Space Act Awards; was a member of Tom Young’s Blue Ribbon Team in 2000, assessing future Mars missions in light of the Mars Climate Orbiter and Polar Lander; and is a member of the Advisory Council of the NASA Jet Propulsion Laboratory.
|Models and Games for All
the World's Information
Peter Norvig, Google
|Internet search engines have accumulated more information in one place than the world has ever seen before. Unfortunately, along with the facts they also have accumulated fiction, deception, and nonsense. This talk considers how to make sense of a 100TB corpus of uncertain information. From linguistics and information retrieval we take language models that mediate between query and response, and from game theory we take mechanism design that provides the incentives for webmasters, advertisers, searchers and search engines to co-evolve. Finally, from computer science we take the systems design that makes it possible to operate such a large-scale system.|
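The language models the abstract mentions assign probabilities to word sequences from corpus statistics. The sketch below is a hedged toy illustration of the idea (a smoothed unigram model over an invented corpus), not Google's system:

```python
import math
from collections import Counter

# Toy sketch: a unigram language model estimated from a (tiny, invented)
# corpus, used to score which of several candidate word sequences is more
# plausible -- the kind of model that mediates between query and response.

corpus = "the cat sat on the mat the cat ate".split()
counts = Counter(corpus)
total = sum(counts.values())
vocab = len(counts)

def log_prob(words, alpha=1.0):
    """Add-alpha smoothed unigram log-probability of a word sequence."""
    return sum(math.log((counts[w] + alpha) / (total + alpha * vocab))
               for w in words)

# Sequences of frequent words score higher than sequences of rare ones.
print(log_prob(["the", "cat"]) > log_prob(["mat", "ate"]))
```

Smoothing (the `alpha` term) keeps unseen words from receiving zero probability, which matters when the corpus, unlike this toy one, is the open web.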
|Peter Norvig is the Director of Search
Quality at Google Inc. He is a Fellow and Councilor of the American
Association for Artificial Intelligence and co-author of Artificial
Intelligence: A Modern Approach, the leading textbook in the field.
Previously he was head of the Computational Sciences Division at NASA Ames Research Center, where he oversaw a staff of 200 scientists performing NASA's research and development in autonomy and robotics, automated software engineering and data analysis, neuro-engineering, collaborative systems research, and simulation-based decision-making. Before that he was Chief Scientist at Junglee, where he helped develop one of the first Internet comparison shopping services; Chief Designer at Harlequin Inc.; and Senior Scientist at Sun Microsystems Laboratories.
Dr. Norvig received a B.S. in Applied Mathematics from Brown University and a Ph.D. in Computer Science from the University of California at Berkeley. He has been a Professor at the University of Southern California and a Research Faculty Member at Berkeley. He has over fifty publications in various areas of Computer Science, concentrating on Artificial Intelligence, Natural Language Processing and Software Engineering including the books Paradigms of AI Programming: Case Studies in Common Lisp, Verbmobil: A Translation System for Face-to-Face Dialog, and Intelligent Help Systems for UNIX.
David MacKay, University of Cambridge
|Keyboards are inefficient for two reasons: they do not exploit the predictability of normal language, and they waste the fine analogue capabilities of the user's muscles. I describe a system intended to rectify both these inefficiencies. Dasher is a communication system in which a language model plays an integral role, and it is driven by continuous gestures. Single-finger writing speeds exceeding 35 words per minute can be achieved. Hands-free writing is also possible, at speeds up to 25 words per minute.|
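Dasher's central mechanism resembles an arithmetic coder run in reverse: the display is divided into regions whose sizes are proportional to the language model's probability for each possible next letter, so likely letters are easy to steer into. The simplified sketch below shows only that interval allocation; the letter probabilities are invented:

```python
# Simplified sketch of Dasher's core idea: allocate sub-intervals of [0, 1)
# to next-letter candidates, sized by a language model's probabilities.
# Steering the pointer into a region selects that letter.

def allocate_intervals(probs):
    """Map each symbol to a sub-interval of [0, 1) sized by its probability."""
    intervals, lo = {}, 0.0
    for sym, p in probs.items():
        intervals[sym] = (lo, lo + p)
        lo += p
    return intervals

next_letter_probs = {"e": 0.5, "a": 0.3, "q": 0.2}   # hypothetical P(next letter)
regions = allocate_intervals(next_letter_probs)
print(regions["e"])   # the likely letter gets the largest, easiest target
```

Because the allocation is driven by a predictive language model, common continuations occupy most of the screen, which is where the writing-speed gains come from.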
|David MacKay is a Professor in the Department of Physics at Cambridge University. He obtained his PhD in Computation and Neural Systems at the California Institute of Technology. His interests include machine learning, reliable computation with unreliable hardware, the design and decoding of error correcting codes, and the creation of information-efficient human-computer interfaces.|