Meghdad Kurmanji

AI Research Scientist at IQVIA | Visiting Researcher at University of Cambridge


AI Scientist building scalable generative AI systems

I work across the full machine learning lifecycle, from designing new models to building production-grade pipelines for enterprise AI systems. My recent work spans large-scale model training, multimodal learning, agentic LLM systems, and safety-critical AI.

My research has focused on machine unlearning, decentralized LLM pre-training, privacy-preserving learning, and AI safety, with publications in venues including NeurIPS, ICLR, and SIGMOD, and funded work on privacy-preserving and safety-critical machine learning systems.


News

  1. May 2026

    Recognized as an ICML 2026 Gold Reviewer (among the conference’s top reviewers).

  2. January 2026

    Two papers accepted to ICLR 2026 (paper 1, paper 2).

  3. January 2026

    Started a new role as Senior AI Research Scientist at IQVIA.

  4. December 2025

    Presented two posters and gave two oral talks at NeurIPS 2025 in Copenhagen.

  5. November 2025

    Paper accepted at AAAI 2026 (Alignment Track) (link).


Selected Publications

What Makes Unlearning Hard and What to Do About It

NeurIPS 2024

Kai Zhao, Meghdad Kurmanji, George Barbulescu, Efi Triantafillou, Peter Triantafillou

Studies the core obstacles behind effective unlearning and lays out practical directions for building more reliable removal methods under real utility constraints.

Machine Unlearning in Learned Databases

SIGMOD 2024

Meghdad Kurmanji, Efi Triantafillou, Peter Triantafillou

Introduces unlearning in learned database systems and shows how removal requirements interact with approximation, indexing, and database-facing model maintenance.

Towards Unbounded Machine Unlearning

NeurIPS 2023

Meghdad Kurmanji, Peter Triantafillou, Jamie Hayes, Efi Triantafillou

Explores scalable unlearning beyond narrow one-shot settings, focusing on repeated removal requests and stable model performance over time.