Meghdad Kurmanji

AI Research Scientist at IQVIA | Visiting Researcher at University of Cambridge

AI Scientist building scalable generative AI systems

I work across the full machine learning lifecycle, from designing new models to building production-grade pipelines for enterprise AI systems. My recent work spans large-scale model training, multimodal learning, agentic LLM systems, and safety-critical AI.

My research has focused on machine unlearning, decentralized LLM pre-training, privacy-preserving learning, and AI safety. I have published in venues including NeurIPS, ICLR, and SIGMOD, and hold funded work on privacy-preserving and safety-critical machine learning systems.

News

  1. January 2026
    I started a new position as an AI Scientist at IQVIA.
  2. January 2026
    Two papers were accepted to ICLR 2026: PDF 1, PDF 2.
  3. January 2026
    Our proposal received a GBP 150k grant through Foresight's AI Safety call. We will use interpretability to enable precise unlearning, even in challenging scenarios.
  4. December 2025
    I attended NeurIPS in Copenhagen, where I presented two posters and gave two oral talks.
  5. November 2025
    One paper was accepted to the AAAI Alignment Track: link.

Selected Publications

What Makes Unlearning Hard and What to Do About It

NeurIPS 2024

Kai Zhao, Meghdad Kurmanji, George Barbulescu, Efi Triantafillou, Peter Triantafillou

Studies the core obstacles behind effective unlearning and lays out practical directions for building more reliable removal methods under real utility constraints.

Machine Unlearning in Learned Databases

SIGMOD 2024

Meghdad Kurmanji, Efi Triantafillou, Peter Triantafillou

Introduces unlearning in learned database systems and shows how removal requirements interact with approximation, indexing, and database-facing model maintenance.

Towards Unbounded Machine Unlearning

NeurIPS 2023

Meghdad Kurmanji, Peter Triantafillou, Jamie Hayes, Efi Triantafillou

Explores scalable unlearning beyond narrow one-shot settings, focusing on repeated removal requests and stable model performance over time.