Sheza Munir
Responsible and Safe AI
I am a first-year PhD student in Computer Science at the University of Toronto, advised by Dr. Ishtiaque Ahmed. My research sits at the intersection of NLP and sociotechnical systems: I study data annotation as a human practice, examining how annotator expertise, lived experience, and disagreement shape the models we build.
Before Toronto, I completed my Master's at the University of Michigan, where I worked with Dr. Lu Wang on LLM factuality evaluation. My work spans annotation pipelines, fairness, and safety in AI, grounded in the conviction that subjectivity and conflict are signal, not noise.
How annotator identity, expertise, and lived experience influence what gets labeled — and what gets erased — in training data.
Treating label conflict as meaningful signal. Building aggregation and reasoning frameworks for high-disagreement, socially sensitive tasks.
Benchmarking and probing the factual reliability of large language models, with a focus on long-form generation and hallucination triggers.
Ethical reasoning frameworks for AI systems. Deepfake detection and robustness in low-resource language settings.