A Retrospective on My Academic Statement
It has been almost two years since I wrote my statement of purpose for PhD applications. It is perhaps a good time to look back, to see how far I have come and how much I have changed.
Dear Graduate Admissions Chair,
My intention to pursue graduate studies at the University of Toronto originates from my desire to be at the forefront of innovation. I am interested in joining the Machine Learning and Computer Vision group, advised (or co-advised) by Prof. Raquel Urtasun. I wish to contribute to autonomous driving research by developing fast and efficient deep learning models for perception, and exploring multi-agent systems for fleet coordination.
From a young age, my parents instilled in me an existential need to be exceptional. Under this mandate, I chased after means to impress. And I did: I was decorated with medals from national competitions in mathematics, computer science, and physics. After completing my International Baccalaureate Diploma with a score of 42/45, I competed as a member of the Canadian team at the International Physics Olympiad 2013 in Denmark. The honourable mention placed me among the top 200 physics students in the world.
In the years that followed, my desire for distinction naturally morphed into a pursuit of impact. Though I love the beauty of theory, I believe that knowledge is only meaningful if it can induce change. Computer Science at the University of Waterloo was thus the natural choice, since it offered me practical tools in addition to in-depth theory. Through internships at Bloomberg, LinkedIn, Citadel, and Facebook, I learned a lot about software engineering and, more importantly, about myself. My most notable achievements include saving 400+ machines at LinkedIn by redesigning a core module in the data infrastructure, and reducing portfolio metrics computation time by 95% at Citadel by re-architecting an analytics framework to leverage Spark.
But ultimately I felt out of touch with my true aspirations. I was not advancing the forefront of innovation; I was working towards incremental rather than revolutionary impact. This mindset led me to explore research in physics, computational finance, and most recently, machine learning. While these topics span a broad spectrum, they share the common goal of modeling the world to better understand it.
For example, I participated in the University Physics Competition to indulge my passion for understanding space. It was fascinating to investigate if and how a space probe could use a gravity assist maneuver to enter Jupiter's orbit. My team and I used the Tsiolkovsky rocket equation to model the theoretical fuel consumption and solved for the optimal slingshot position of Jupiter's moon Io through simulation. We were awarded the gold medal for best paper at the end of the 48-hour contest. In the following year, we modeled the stability of planetary orbits in a binary star system and won the silver medal with our submission.
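For context, the Tsiolkovsky rocket equation relates the delta-v a rocket can achieve to its propellant mass ratio and engine efficiency. A minimal sketch of the calculation (the function name and example numbers are illustrative, not taken from our contest submission):

```python
import math

def delta_v(isp_s, m_initial, m_final, g0=9.80665):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf).

    isp_s     -- specific impulse of the engine, in seconds
    m_initial -- wet mass (spacecraft plus propellant), kg
    m_final   -- dry mass remaining after the burn, kg
    """
    return isp_s * g0 * math.log(m_initial / m_final)

# e.g. burning half the mass at Isp = 300 s yields roughly 2 km/s
print(delta_v(300, 10_000, 5_000))
```

In a gravity-assist study, a budget like this tells you how much delta-v the engines can supply, and the slingshot has to cover the rest.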
Later, I became interested in understanding the dynamics of financial instruments. Under Prof. Justin Wan's guidance, I made my foray into computational finance by researching parallel algorithms for option pricing. I was quick to grasp the fundamental models, and in the end, I designed a custom OpenCL kernel that prices European options using the Binomial Lattice model. Unlike the Monte Carlo method, the Binomial Lattice method is challenging to parallelize due to its iterative computation. My algorithm tackles this issue by dividing the computation into triangle-shaped batches to minimize global memory access and inter-workgroup synchronization. As a result, it achieves a hundredfold speedup compared to a standard CPU-based implementation. The algorithm was published as a workshop paper at WHPCF 2015.
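The Binomial Lattice model itself is simple to express serially; the difficulty lies in the backward induction, where each level of the lattice depends on the level after it. A minimal serial Python sketch of Cox-Ross-Rubinstein pricing for a European call (an illustration of the model only, not the OpenCL kernel from the paper):

```python
import math

def binomial_call(s0, k, t, r, sigma, n):
    """Price a European call on a CRR binomial lattice with n steps."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))    # up-move factor
    d = 1.0 / u                            # down-move factor
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)               # one-step discount factor

    # Option payoffs at the terminal nodes (j up-moves out of n)
    values = [max(s0 * u**j * d**(n - j) - k, 0.0) for j in range(n + 1)]

    # Backward induction: each level depends on the one after it
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

print(binomial_call(100, 100, 1.0, 0.05, 0.2, 500))
```

Each pass of the loop shrinks the lattice by one node and cannot start until the previous pass finishes, which is why a GPU version must restructure the work into independent batches (such as triangular tiles) rather than naively parallelizing across steps.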
At Citadel, I witnessed the crucial role of machine learning in predicting the portfolio impact of macroeconomic shocks. I have since been fascinated by the power of applied machine learning, which has largely defined my current research interest. To enrich my knowledge, I completed online courses such as Andrew Ng's Machine Learning, Daphne Koller's Probabilistic Graphical Models, and Fei-Fei Li's Convolutional Neural Networks for Visual Recognition. I also enrolled in a graduate-level deep learning course taught by Prof. Ming Li.
My love for Chinese literature led to a research project in Chinese poetry generation. Through this project, I gained a solid understanding of current techniques in neural machine translation such as sequence-to-sequence models, attention mechanisms, and memory networks. I observed that state-of-the-art poetry generation models fail to explicitly incorporate prior information (such as poetic rules), and proposed several solutions. For example, I outlined a method for augmenting the word embeddings to improve the vertical alignment of the generated poems. Furthermore, I proposed a reinforcement-learning-based approach to combine the learned distribution with heuristics. Our paper has been submitted to AAAI 2018.
Most recently, I have been affiliated with Uber Advanced Technologies Group. I worked with Prof. Raquel Urtasun and Shenlong Wang to develop novel deep learning models for visual perception in autonomous driving. The details of this project are currently confidential, but my contributions placed me as second author on our paper submitted to CVPR 2018.
In terms of future directions, I am excited to continue my research with Prof. Raquel Urtasun, with a focus on developing perception algorithms specialized for autonomous driving. As a believer in capsule networks, I would love to contribute to their development with Prof. Jimmy Ba and Prof. Geoffrey Hinton. Lastly, I am interested in exploring the application of multi-agent reinforcement learning to autonomous fleet coordination.
Ultimately, I see myself as an industry research scientist or engineer working on projects that directly impact our lives. I believe the University of Toronto has the depth of expertise and the industry connections that can best empower me to achieve my goals. It would be my honour to participate in the newest revolution in neural network research with the very people who kickstarted the field.
Shun Da Suo