Modern Computer Vision, Augmented Reality, and Interactive Modeling

Chris Brown
University of Rochester

In an augmented reality display, computer graphics are appropriately mixed with live video or superimposed on a live view. One of the crucial problems is how to register the graphics with the view of the real world. The most popular approaches rely on calibration and subsequent accurate measurement of the geometry of interest -- for instance, the position and orientation of the live video camera. This approach uses the familiar Euclidean frame, which has many conceptual and computational advantages. In some situations, however, calibration is a very difficult problem. Two techniques of current interest in the computer vision community are non-Euclidean (affine or projective) object representations and real-time feature tracking. Together, they allow the registration problem to be solved without prior calibration and without a known calibration object in the scene. The theory is illustrated with current work involving augmented reality overlays, animations, and visual input of solid models.

----------------------------------

Christopher Brown (BA Oberlin 1967, PhD U. Chicago 1972) is Professor of Computer Science at the University of Rochester, his home since finishing a postdoctoral fellowship at the School of Artificial Intelligence at the University of Edinburgh in 1974. He is coauthor of COMPUTER VISION with his Rochester colleague Dana Ballard. He spent two years heading the software development of the PADL-2 solid modeling package from the Production Automation Project at Rochester. His current research interests are computer vision and robotics, especially the interaction of visual and cognitive capabilities and motor behavior. This interest leads to involvement in real-time operating and control systems, real-time vision algorithms, and in the basic connections between planning, learning, and control. Many of the same real-time vision techniques can be used in augmented reality applications.
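The affine representation mentioned in the abstract can be sketched concretely. Under an affine camera model, projection is an affine map, so a 3D point's affine coordinates with respect to four non-coplanar basis points are preserved by projection: the same coefficients that combine the 3D basis points also combine their tracked 2D image projections. A minimal NumPy sketch, with hypothetical function names and a synthetic camera standing in for a real tracker, is:

```python
import numpy as np

# Sketch of affine point transfer (illustrative names, not the author's code).
# Idea: express a virtual point in the affine frame of 4 tracked basis points;
# re-project it as the same affine combination of the basis points' 2D images.

def affine_coords(point, basis):
    """Affine coordinates of a 3D point w.r.t. 4 non-coplanar basis points."""
    b0 = basis[0]
    M = np.stack([b - b0 for b in basis[1:]], axis=1)  # 3x3 edge matrix
    return np.linalg.solve(M, point - b0)              # (a1, a2, a3)

def transfer(coeffs, basis_2d):
    """Apply the same affine combination to the tracked 2D basis projections."""
    b0 = basis_2d[0]
    M = np.stack([b - b0 for b in basis_2d[1:]], axis=1)  # 2x3
    return b0 + M @ coeffs

# Synthetic affine camera (2x3 linear part A, translation t) simulates tracking.
A = np.array([[1.0, 0.2, 0.1],
              [0.0, 0.9, 0.3]])
t = np.array([5.0, 7.0])
basis_3d = [np.array(p, float) for p in
            [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
virtual = np.array([0.3, 0.5, 0.2])   # virtual point to overlay

basis_2d = [A @ p + t for p in basis_3d]        # "tracked" image positions
coeffs = affine_coords(virtual, basis_3d)
predicted = transfer(coeffs, basis_2d)
# Transfer agrees with projecting the point directly -- no camera calibration
# (A, t) was needed by affine_coords or transfer, only the tracked basis.
assert np.allclose(predicted, A @ virtual + t)
```

Note that `transfer` never sees the camera parameters: once the basis points are tracked in each frame, virtual geometry can be overlaid frame by frame, which is the essence of calibration-free registration.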