Meghdad Kurmanji
AI Research Scientist at IQVIA | Visiting Researcher at University of Cambridge
AI Scientist building scalable generative AI systems
I work across the full machine learning lifecycle, from designing new models to building production-grade pipelines for enterprise AI systems. My recent work spans large-scale model training, multimodal learning, agentic LLM systems, and safety-critical AI.
My research has focused on machine unlearning, decentralized LLM pre-training, privacy-preserving learning, and AI safety, with publications in venues including NeurIPS, ICLR, and SIGMOD, and funded work on privacy-preserving and safety-critical machine learning systems.
News
- January 2026: I started a new position as an AI Scientist at IQVIA.
- January 2026: Our proposal received a GBP 150k grant through Foresight's AI Safety call. We will use interpretability to enable precise unlearning, even in challenging scenarios.
- December 2025: I attended NeurIPS in Copenhagen, where I presented two posters and gave two oral talks.
- November 2025: One paper was accepted at AAAI's Alignment Track: link.
Selected Publications
DEPT: Decoupled Embeddings for Pre-training Language Models
ICLR 2025. Research on pre-training language models with decoupled embeddings, recognized among the top 1 percent of submissions.
Bridge the Gaps between Machine Unlearning and AI Regulation
NeurIPS 2025. Connects machine unlearning research with regulatory requirements and highlights the gap between current technical methods and compliance needs in practice.
What Makes Unlearning Hard and What to Do About It
NeurIPS 2024. Studies the core obstacles behind effective unlearning and lays out practical directions for building more reliable removal methods under real utility constraints.
Machine Unlearning in Learned Databases
SIGMOD 2024. Introduces unlearning in learned database systems and shows how removal requirements interact with approximation, indexing, and database-facing model maintenance.
Towards Unbounded Machine Unlearning
NeurIPS 2023. Explores scalable unlearning beyond narrow one-shot settings, focusing on repeated removal requests and stable model performance over time.