Sara Kangaslahti


I am a third-year PhD candidate in the ML foundations group at Harvard University, advised by David Alvarez-Melis. I am grateful to be supported by an NSF Graduate Research Fellowship. My research focuses on principled, data-centric approaches for adapting and understanding LLMs. Recently, I have been working on ways to compress and connect models across scales and tasks.

Previously, I completed my Bachelor's degree in Computer Science at Caltech, where I worked with Anima Anandkumar and R. Michael Alvarez on scalable tensor-based topic modeling methods.

My email is sarakangaslahti (at) g (dot) harvard (dot) edu. Please feel free to reach out to discuss research!

news

Jan 26, 2026 Two of my papers were accepted to ICLR 2026: 🪃 Boomerang Distillation Enables Zero-Shot Model Size Interpolation 🪃 and Hidden Breakthroughs in Language Model Training!
Dec 06, 2025 My work Boomerang Distillation Enables Zero-Shot Model Size Interpolation was presented at the NeurIPS 2025 UniReps Workshop as part of the blog post track. Check out our post on the UniReps blog!
Sep 01, 2025 My paper Continuous Language Model Interpolation yields Dynamic and Controllable Text Generation was published in TMLR!