Maryam Mehri Dehnavi



Assistant Professor of Computer Science

Canada Research Chair in Parallel and Distributed Computing


Check out our lab ParaMathics for the latest news on our research!
MSc, PhD, and Postdoctoral positions available for Fall 2022: Positions (PDF)


Research: My research group, ParaMathics, works on various aspects of cloud computing, machine learning, numerical analysis, compilers, programming languages, and high-performance computing. We develop scalable numerical methods, high-performance libraries, and domain-specific languages and compilers for high-performance and cloud computing platforms. CV (PDF)

News


  • I have been appointed as the Canada Research Chair in Parallel and Distributed Computing.
  • I am the recipient of the 2021 Ontario Early Researcher Award.
  • "HDAGG: Hybrid Aggregation in Sparse Matrix Computations" to appear at IPDPS 2022.
  • "Randomized Gossiping with Effective Resistance Weights: Performance Guarantees and Applications" accepted at IEEE Transactions on Control of Network Systems 2022. Paper
  • "Composing Loop-carried Dependence with Other Loops" to appear at PPoPP 2022. Paper
  • "L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method" accepted at the IEEE Conference on Decision and Control, CDC 2021. Paper
  • "NASOQ: Numerically Accurate Sparsity-Oriented QP Solver" accepted at SIGGRAPH 2020. Paper
  • "MatRox: Modular approach for improving data locality in Hierarchical (Mat)rix App(Rox)imation" accepted at PPoPP 2020. Paper
  • Our project on Neurocomputation of Brain-body Interactions is awarded the NSERC New Frontiers in Research Fund.
  • Kazem Cheshmi receives the 2020 ACM-IEEE CS George Michael Memorial HPC fellowship.
  • Our work TENGraD on a Time-efficient Natural Gradient Descent method is now online! Paper
  • Check out our work on "Vectorizing Sparse Matrix Codes with Dependency Driven Trace Analysis"! Paper
  • DAve-QN is accepted at AISTATS 2020. Paper
  • "Sparse Computation Data Dependence Simplification for Efficient Compiler-Generated Inspectors" accepted at PLDI 2019. Paper
  • Our ASYNC framework, which supports asynchronous machine learning on the cloud, is accepted at IPDPS 2020! Paper
  • Read about our work at UofT News.
  • "ParSy: Inspection and Transformation of Sparse Matrix Computations" accepted at SC 2018. Paper
  • Our research on "Communication-Efficient Algorithms for Machine Learning" receives an NSF award.
  • "CSTF: Large-Scale Sparse Tensor Factorizations on Distributed Platforms" accepted at ICPP 2018. Paper
  • Kazem Cheshmi receives the 2018 Adobe Research Fellowship.
  • Sympiler is now online!
  • "Sympiler: Transforming Sparse Matrix Codes by Decoupling Symbolic Analysis" accepted at SC 2017. Paper
  • Kazem Cheshmi wins First Place in the 2017 Grand Finals of the ACM’s Student Research Competition for our work on "Decoupling Symbolic from Numeric in Sparse Matrix Computations." The SRC Grand Finals are the culmination of a year-long competition that involved more than 300 students presenting research projects at 25 major ACM conferences.
  • Maryam Dehnavi receives the NSF CRII grant on Performance-in-Depth Sparse Solvers for Heterogeneous Parallel Platforms.




  • Web design inspired by Michael Carbin