Faisal Mahmood
PhD
Associate Professor of Pathology, Harvard Medical School; Director, Computational Pathology and AI Lab, Brigham and Women's Hospital (BWH/HMS)
👥Biography
Faisal Mahmood, PhD is Associate Professor of Pathology at Harvard Medical School and Director of the Computational Pathology and AI Laboratory at Brigham and Women's Hospital (BWH/HMS). He is one of the world's leading researchers in computational pathology and AI-driven oncology, best known for developing large-scale foundation models that enable AI systems to interpret pathology images with human-expert-level proficiency across diverse cancer types. His laboratory developed CONCH (CONtrastive learning from Captions for Histopathology), a vision-language foundation model for pathology pre-trained on over 1.17 million image-caption pairs, which achieved state-of-the-art performance in cancer subtype classification, survival prediction, and zero-shot pathology tasks. He subsequently introduced TITAN, a multimodal whole-slide foundation model that integrates slide-level and patch-level visual representations with pathology-specific language grounding, enabling generalist pathology AI with strong transfer learning capabilities. Dr. Mahmood pioneered weakly supervised multiple instance learning approaches for slide-level prediction from gigapixel pathology images, enabling cancer diagnosis, molecular subtype classification, and prognosis prediction without pixel-level annotation. His group has demonstrated AI models that can predict microsatellite instability (MSI), BRCA mutation status, genomic signatures, and immunotherapy response directly from H&E-stained tissue sections. He has published extensively in Nature Medicine, Nature Biomedical Engineering, Cancer Cell, and NeurIPS, and his models are among the most widely adopted open-source tools in computational pathology.
🧪Research Fields
🎓Key Contributions
CONCH — Vision-Language Foundation Model for Computational Pathology
Developed CONCH (CONtrastive learning from Captions for Histopathology), a large-scale vision-language foundation model for pathology pre-trained on over 1.17 million pathology image-caption pairs using contrastive and generative objectives. Published in Nature Medicine (2024), CONCH achieved state-of-the-art performance across 14 diverse pathology tasks including cancer subtype classification, survival prediction, and zero-shot retrieval, demonstrating that language supervision substantially improves pathology AI generalization across cancer types and institutions without task-specific fine-tuning.
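The contrastive half of the image-caption pretraining described above can be sketched as a CLIP-style symmetric InfoNCE objective. A minimal NumPy sketch follows; the function names, temperature value, and NumPy setting are illustrative assumptions, not the actual CONCH implementation:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE loss over a batch of paired
    image/caption embeddings. Matched pairs sit on the diagonal of
    the similarity matrix; every other entry acts as a negative."""
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature            # (B, B) cosine similarities
    labels = np.arange(len(logits))               # i-th image matches i-th caption

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()  # diagonal = matched pairs

    # Average the image->text and text->image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

In practice the two embeddings come from trainable image and text encoders and the loss is minimized by gradient descent; the NumPy version only shows the objective itself.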
TITAN — Multimodal Whole-Slide Foundation Model
Introduced TITAN, a multimodal whole-slide image foundation model that jointly learns slide-level visual representations and natural language pathology descriptions at gigapixel scale. TITAN addressed the fundamental challenge of whole-slide image analysis by enabling coherent global slide-level reasoning while preserving local patch-level detail, achieving superior performance on cancer diagnosis, molecular prediction, and survival tasks. The model provided a general-purpose architecture for pathology AI that transfers efficiently to new tissue types and clinical tasks with minimal additional training data.
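As a rough illustration of the patch-to-region-to-slide aggregation such whole-slide models perform, here is a deliberately simplified sketch. Real models like TITAN use transformer attention at each stage; this assumed helper substitutes mean pooling purely to show the shape of the computation:

```python
import numpy as np

def hierarchical_slide_embedding(patch_feats, region_size=16):
    """Two-stage aggregation as a toy stand-in for whole-slide modeling:
    pool patch embeddings into region vectors, then pool region vectors
    into a single slide-level vector. The (N, D) input is assumed to be
    patch features already extracted by a pretrained encoder."""
    n, d = patch_feats.shape
    n_regions = int(np.ceil(n / region_size))
    regions = np.array([
        patch_feats[i * region_size:(i + 1) * region_size].mean(axis=0)
        for i in range(n_regions)
    ])                                   # (n_regions, D) region vectors
    return regions.mean(axis=0)          # (D,) slide-level vector
```

The two-level structure is what lets a model reason globally about a gigapixel slide while the region stage still sees local patch detail.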
Weakly Supervised MIL for Genomic and Molecular Prediction from Pathology
Pioneered multiple instance learning (MIL) frameworks for predicting molecular and genomic features from H&E-stained whole-slide images without requiring pixel-level or region-level annotations, using only slide-level labels for training. Published landmark studies demonstrating AI prediction of microsatellite instability (MSI), homologous recombination deficiency (HRD), BRCA1/2 mutation, tumor mutational burden (TMB), and TCGA pan-cancer molecular subtypes from routine pathology slides, opening a pathway to genomic biomarker assessment in resource-limited settings where molecular testing is unavailable.
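A minimal sketch of attention-based MIL pooling (in the spirit of Ilse et al., 2018, which CLAM-style models build on) shows how a slide-level prediction can be trained from only a slide label. The weight matrices and function below are hypothetical placeholders, not the lab's code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_forward(patch_feats, W_attn, w_score, w_clf):
    """Attention-based MIL: score every patch embedding, normalize the
    scores into attention weights, pool to one slide vector, classify
    the slide. Only the slide label is needed to train the weights, and
    the attention weights double as an interpretable patch heatmap."""
    hidden = np.tanh(patch_feats @ W_attn)   # (N, H) per-patch hidden states
    attn = softmax(hidden @ w_score)         # (N,) attention weights, sum to 1
    slide_vec = attn @ patch_feats           # (D,) attention-weighted pooling
    logit = slide_vec @ w_clf                # slide-level score
    prob = 1.0 / (1.0 + np.exp(-logit))      # sigmoid for a binary task
    return prob, attn
```

Because the pooling is a weighted sum, gradients from the single slide label flow back to every patch, which is what makes pixel-level annotation unnecessary.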
Multimodal Survival Prediction — Pathology-Genomics Integration
Developed PORPOISE and SurvPath, multimodal deep learning frameworks that co-learn from paired pathology images and bulk RNA-seq or mutational profiles to predict patient survival across 14 TCGA cancer types. These models outperformed unimodal pathology or genomics approaches and provided interpretable attention-based survival risk maps highlighting histological regions most predictive of patient outcomes, demonstrating the complementarity of morphological and molecular information for cancer prognosis.
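Survival models of this kind are typically trained with a Cox partial log-likelihood on a predicted risk score. The sketch below, with an assumed linear late-fusion head, illustrates that objective rather than the actual PORPOISE/SurvPath architectures:

```python
import numpy as np

def fuse_and_score(path_emb, gene_emb, w_path, w_gene):
    """Assumed late fusion: a linear risk head over the pathology and
    genomics embeddings (a simplified stand-in for learned fusion)."""
    return path_emb @ w_path + gene_emb @ w_gene

def cox_partial_log_likelihood(risk, time, event):
    """Breslow-style Cox partial log-likelihood. `risk` holds scalar
    risk scores (higher = worse expected outcome); only uncensored
    patients (event == 1) contribute terms, each compared against the
    risk set of patients still under observation at their event time."""
    ll = 0.0
    for i in np.flatnonzero(event):
        at_risk = time >= time[i]                    # risk set at t_i
        ll += risk[i] - np.log(np.exp(risk[at_risk]).sum())
    return ll
```

Maximizing this likelihood pushes higher risk scores onto patients with earlier events, which is exactly the concordance that survival metrics like the c-index measure.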
Representative Works
A Vision-Language Foundation Model for Computational Pathology (CONCH)
Nature Medicine (2024)
CONCH vision-language foundation model for pathology pre-trained on 1.17M image-caption pairs, achieving state-of-the-art performance across 14 cancer pathology tasks including zero-shot classification and retrieval.
Pan-Cancer Integrative Histology-Genomic Analysis via Multimodal Deep Learning
Cancer Cell (2022)
PORPOISE multimodal framework integrating pathology images and genomics for survival prediction across 14 TCGA cancer types, demonstrating that paired histology-genomics learning outperforms unimodal approaches.
Data-Efficient and Weakly Supervised Computational Pathology on Whole-Slide Images
Nature Biomedical Engineering (2021)
CLAM (Clustering-constrained Attention Multiple instance learning) framework for weakly supervised whole-slide image classification, enabling cancer subtyping and grade prediction using only slide-level labels, without pixel- or region-level annotations.
Predicting Cancer Outcomes from Histology and Genomics using Convolutional Networks
PLOS Computational Biology (2019)
Early demonstration of deep learning survival prediction from histopathology and genomics integration across TCGA glioma and clear cell renal carcinoma cohorts.
🏆Awards & Recognition
📄Data Sources
Last updated: 2026-04-06 | All information from publicly available academic sources
Related Experts
Kun-Hsing Yu
Harvard Medical School; Brigham and Women's Hospital
Ziad Obermeyer
University of California, Berkeley; UC San Francisco
Andrew H. Beck
PathAI; Harvard Medical School (formerly)
Olivier Gevaert
Stanford University