Sonia Laguna

PhD in Machine Learning @ETH Zurich

Previously @Google @Harvard

About Me

Hi there! 👋 I am a PhD student in Machine Learning at ETH Zurich, supervised by Prof. Julia Vogt (ETH) and Prof. Bernhard Schölkopf (MPI). My current research focuses on the interpretability of machine learning methods and on the development of generative models (diffusion models, VAEs, LLMs), with the goal of better understanding and controlling them through their representations. In parallel, I am working on the development of foundation models as part of the Swiss AI initiative, to better understand VLMs and exploit their capabilities in real-world clinical applications.

I am currently a visiting student at the University of Cambridge with Prof. Mihaela van der Schaar, working on alignment and interpretability of LLMs. During my PhD, I have been a Research Intern and a Student Researcher at Google, developing 3D diffusion-based generative models in the AR & VR team, and I am the team co-leader of CSNOW, the Computer Science Network of Women at ETH.

Prior to my doctoral studies, I obtained an MSc from the Department of Information Technology and Electrical Engineering at ETH Zurich and spent a semester at Harvard University working on 3D generative models for super-resolution of MR images. I was lucky to be supported by two Spanish Excellence Fellowships, from La Caixa and Rafael del Pino. Before that, I completed my BSc in Biomedical Engineering at Universidad Carlos III de Madrid, spent one year at Georgia Institute of Technology, and carried out an internship at ETH Zurich as an Amgen Scholar.

I am always happy to collaborate and discuss new topics — feel free to reach out! 😃💡

Interests
  • Interpretability
  • Generative Models
  • Foundation Models
  • Representation Learning
🗞️ Recent News

See all News »

📝 Featured Publications

See all Publications » or check out my Google Scholar »

(2025). RadVLM: A Multitask Conversational Vision-Language Model for Radiology. Preprint - arXiv.
(2024). Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable. In NeurIPS 2024.
(2024). Stochastic Concept Bottleneck Models. In NeurIPS 2024.
(2024). Exploiting Interpretable Capabilities with Concept-Enhanced Diffusion and Prototype Networks. In NeurIPS 2024 Workshop Interpretable AI (Oral).
(2024). Deep Generative Clustering With Multimodal Diffusion Variational Autoencoders. In ICLR 2024.