Hey, I’m a PhD candidate at the MIT Media Lab. My research focuses on training and evaluating large language models, as well as their social impact and governance.
2023.05: New paper: A Pretrainer’s Guide to Training Data.
2023.01-05: Invited talks on ‘Effective Instruction Tuning: Data, Methods, & New Abilities’ at Apple, Oracle, Kailua Labs, Databricks, and Amazon.
2023.02-06: Co-instructor for MIT’s Generative AI course MAS.S68.
2023.03: Co-lead for Cohere for AI’s (C4AI) community research effort on multilingual instruction tuning.