Michael Tautschnig, Amazon
Automating Software Analysis at Large Scale
Abstract
Software model checking tools promise to deliver genuine traces to errors, and
sometimes even proofs of their absence. As static analysers, they do not require
concrete execution of programs, which is all the more beneficial when targeting
new platforms. Academic research focusses on improving scalability, yet largely
disregards the practical engineering challenges of making tools cope with
real-world code. At Amazon, both scalability requirements and real-world
constraints apply. Our prior work analysing more than 25,000 software packages
in the Debian GNU/Linux distribution, comprising more than 400 million lines of
C code, not only led to more than 700 public bug reports, but also provided
solid preparation for the challenges at Amazon.
Bio
Michael Tautschnig is a Research Scientist at Amazon Web Services and a lecturer
at Queen Mary University of London. He obtained his PhD from Vienna University
of Technology, developing an efficient, systematic, and automatic test input
generation technique for C programs. Prior to his appointment as lecturer he was
a post-doctoral research assistant at the University of Oxford.
A year ago he also joined Byron Cook's group at Amazon Web Services to apply his
research at scale.
His current and recent research focusses on automating verification for
real-world software at large scale, and on efficient software verification
techniques for concurrent, shared-memory systems with relaxed memory models.