Research areas: computational pathology, foundation models, CONCH, TITAN, weakly supervised learning, whole-slide image analysis

Faisal Mahmood


PhD

🏢Brigham and Women's Hospital; Harvard Medical School 🌐USA

Associate Professor of Pathology, Harvard Medical School; Director, Computational Pathology and AI Lab, BWH/HMS

h-index: 52 | Key Papers: 4 | Awards: 5 | Key Contributions: 4

👥Biography

Faisal Mahmood, PhD, is Associate Professor of Pathology at Harvard Medical School and Director of the Computational Pathology and AI Laboratory at Brigham and Women's Hospital (BWH/HMS). He is one of the world's leading researchers in computational pathology and AI-driven oncology, best known for developing large-scale foundation models that enable AI systems to interpret pathology images with expert-level proficiency across diverse cancer types.

His laboratory developed CONCH (CONtrastive learning from Captions for Histopathology), a vision-language foundation model for pathology pre-trained on over 1.17 million image-caption pairs, which achieved state-of-the-art performance in cancer subtype classification, survival prediction, and zero-shot pathology tasks. He subsequently introduced TITAN, a multimodal whole-slide foundation model that integrates slide-level and patch-level visual representations with pathology-specific language grounding, enabling generalist pathology AI with strong transfer-learning capabilities.

Dr. Mahmood pioneered weakly supervised multiple instance learning approaches for slide-level prediction from gigapixel pathology images, enabling cancer diagnosis, molecular subtype classification, and prognosis prediction without pixel-level annotation. His group has demonstrated AI models that predict microsatellite instability (MSI), BRCA mutation status, genomic signatures, and immunotherapy response directly from H&E-stained tissue sections. He has published extensively in Nature Medicine, Nature Biomedical Engineering, Cancer Cell, and NeurIPS, and his models are among the most widely adopted open-source tools in computational pathology.


🧪Research Fields

Computational Pathology — Deep Learning for Whole-Slide Image Analysis and Tumor Microenvironment Characterization
Foundation Models for Pathology — CONCH and TITAN Vision-Language Models for Oncology
Weakly Supervised Learning — Multiple Instance Learning for Pathology Slide Classification
Multimodal Cancer AI — Integration of Pathology, Genomics, and Clinical Data
AI-Driven Biomarker Discovery — Predicting Molecular Subtypes and Treatment Response from H&E Images

🎓Key Contributions

CONCH — Vision-Language Foundation Model for Computational Pathology

Developed CONCH (CONtrastive learning from Captions for Histopathology), a large-scale vision-language foundation model for pathology pre-trained on over 1.17 million pathology image-caption pairs using contrastive and generative objectives. Published in Nature Medicine (2024), CONCH achieved state-of-the-art performance across 14 diverse pathology tasks including cancer subtype classification, survival prediction, and zero-shot retrieval, demonstrating that language supervision substantially improves pathology AI generalization across cancer types and institutions without task-specific fine-tuning.
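The contrastive half of this training recipe can be sketched with the standard symmetric image-text InfoNCE objective (as in CLIP-style pretraining). This is an illustrative numpy sketch of the general technique, not the CONCH implementation; all names and the temperature value are placeholders.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Illustrative sketch of the CLIP-style contrastive objective that
    vision-language pathology pretraining builds on; matched pairs sit on
    the diagonal of the similarity matrix and act as the positive class.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    targets = np.arange(len(logits))            # diagonal = matched pairs

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[targets, targets].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
loss = contrastive_loss(rng.normal(size=(4, 32)), rng.normal(size=(4, 32)))
```

In actual pretraining the embeddings come from learned image and text encoders and the loss is minimized over millions of caption pairs; the generative captioning objective mentioned above is a separate term.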

TITAN — Multimodal Whole-Slide Foundation Model

Introduced TITAN, a multimodal whole-slide image foundation model that jointly learns slide-level visual representations and natural language pathology descriptions at gigapixel scale. TITAN addressed the fundamental challenge of whole-slide image analysis by enabling coherent global slide-level reasoning while preserving local patch-level detail, achieving superior performance on cancer diagnosis, molecular prediction, and survival tasks. The model provided a general-purpose architecture for pathology AI that transfers efficiently to new tissue types and clinical tasks with minimal additional training data.

Weakly Supervised MIL for Genomic and Molecular Prediction from Pathology

Pioneered multiple instance learning (MIL) frameworks for predicting molecular and genomic features from H&E-stained whole-slide images without requiring pixel-level or region-level annotations, using only slide-level labels for training. Published landmark studies demonstrating AI prediction of microsatellite instability (MSI), homologous recombination deficiency (HRD), BRCA1/2 mutation, TMB, and TCGA pan-cancer molecular subtypes from routine pathology slides, opening a pathway to genomic biomarker assessment in resource-limited settings where molecular testing is unavailable.
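The core mechanism behind such slide-level training is attention-based MIL pooling: every patch of the slide receives a learned attention weight, and the slide embedding is the weighted sum of patch features, so only a slide-level label is needed for supervision. The sketch below illustrates that pooling step under hypothetical, randomly initialized weight matrices; it is a generic ABMIL-style example, not the lab's released code.

```python
import numpy as np

def softmax(x):
    x = x - x.max()                             # numerical stability
    e = np.exp(x)
    return e / e.sum()

def attention_mil_pool(patch_feats, W_attn, w_score):
    """Attention pooling over patch embeddings (ABMIL-style sketch).

    patch_feats: (N, D) embeddings of N tissue patches from one slide.
    W_attn, w_score: stand-ins for learned attention parameters.
    Returns the slide-level embedding and the per-patch attention weights.
    """
    hidden = np.tanh(patch_feats @ W_attn)      # (N, D_h) per-patch hidden states
    scores = hidden @ w_score                   # (N,) unnormalized attention scores
    attn = softmax(scores)                      # weights sum to 1 over patches
    slide_emb = attn @ patch_feats              # (D,) attention-weighted slide embedding
    return slide_emb, attn

rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 64))              # 100 patch embeddings, dim 64
slide_emb, attn = attention_mil_pool(feats, rng.normal(size=(64, 16)),
                                     rng.normal(size=16))
```

A classifier head on `slide_emb` is then trained against the slide-level label (e.g., MSI-high vs. MSI-stable), and the attention weights double as an interpretability map over the tissue.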

Multimodal Survival Prediction — Pathology-Genomics Integration

Developed PORPOISE and SurvPath, multimodal deep learning frameworks that co-learn from paired pathology images and bulk RNA-seq or mutational profiles to predict patient survival across 14 TCGA cancer types. These models outperformed unimodal pathology or genomics approaches and provided interpretable attention-based survival risk maps highlighting histological regions most predictive of patient outcomes, demonstrating the complementarity of morphological and molecular information for cancer prognosis.
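One fusion scheme used in this line of pathology-genomics work is bilinear (Kronecker-product) fusion: appending a constant 1 to each modality's embedding before taking the outer product keeps the unimodal terms alongside all pairwise cross-modal interactions. The sketch below shows that idea in isolation; it is a hypothetical illustration, not the PORPOISE or SurvPath implementation.

```python
import numpy as np

def kronecker_fusion(path_emb, gen_emb):
    """Bilinear fusion of a pathology embedding and a genomics embedding.

    Appending 1.0 to each vector means the flattened outer product contains
    the original unimodal features (the row/column paired with the 1) plus
    every pairwise pathology-genomics interaction term.
    """
    p = np.append(path_emb, 1.0)                # keep unimodal pathology terms
    g = np.append(gen_emb, 1.0)                 # keep unimodal genomics terms
    return np.outer(p, g).ravel()               # ((Dp+1)*(Dg+1),) fused vector

fused = kronecker_fusion(np.ones(8), np.ones(4))   # -> (9 * 5,) = 45 features
```

In a full survival model, modality-specific encoders would produce `path_emb` (e.g., from attention-pooled slide features) and `gen_emb` (from RNA-seq or mutation profiles), and a risk-prediction head would be trained on the fused vector.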

Representative Works

[1]

A Vision-Language Foundation Model for Computational Pathology (CONCH)

Nature Medicine (2024)

CONCH vision-language foundation model for pathology pre-trained on 1.17M image-caption pairs, achieving state-of-the-art performance across 14 cancer pathology tasks including zero-shot classification and retrieval.

[2]

Pan-Cancer Integrative Histology-Genomic Analysis via Multimodal Deep Learning

Cancer Cell (2022)

PORPOISE multimodal framework integrating pathology images and genomics for survival prediction across 14 TCGA cancer types, demonstrating that paired histology-genomics learning outperforms unimodal approaches.

[3]

Weakly Supervised Computational Pathology for Whole-Slide-Image Classification

Nature Biomedical Engineering (2021)

CLAM (Clustering-constrained Attention Multiple Instance Learning) framework for weakly supervised whole-slide image classification, enabling cancer subtyping and grade prediction using only slide-level labels, without pixel-level annotations.

[4]

Predicting Cancer Outcomes from Histology and Genomics using Convolutional Networks

PLOS Computational Biology (2019)

Early demonstration of deep learning survival prediction from histopathology and genomics integration across TCGA glioma and clear cell renal carcinoma cohorts.

🏆Awards & Recognition

🏆NIH Director's New Innovator Award
🏆NSF CAREER Award
🏆AACR NextGen Star Award
🏆BWH/HMS Young Investigator Award in Computational Pathology
🏆NeurIPS Outstanding Paper Award (Computational Biology Track)

📄Data Sources

Last updated: 2026-04-06 | All information from publicly available academic sources

