ML-enabled systems can do wonderful things, but there is no guarantee that they will behave correctly in all scenarios. Automated reasoning (AR) techniques, by contrast, are designed specifically to provide correctness guarantees for systems. In this talk, I will survey several research directions that aim to improve the safety of AI systems by applying AR techniques, with applications to robustness, explainability, and AI-generated code.