One Model to Rule them All: Towards Universal Segmentation for Medical Images with Text Prompts

28 Dec 2023  ·  Ziheng Zhao, Yao Zhang, Chaoyi Wu, Xiaoman Zhang, Ya Zhang, Yanfeng Wang, Weidi Xie

In this study, we focus on building a model that aims to Segment Anything in medical scenarios, driven by Text prompts, termed SAT. Our main contributions are threefold: (i) for dataset construction, we combine multiple knowledge sources to construct the first multi-modal knowledge tree on human anatomy, comprising 6502 anatomical terminologies; we then build the largest and most comprehensive segmentation dataset for training by collecting over 22K 3D medical image scans from 72 segmentation datasets, with careful standardization of both image scans and label space; (ii) for architecture design, we formulate a universal segmentation model that can be prompted with medical terminologies in text form, and present knowledge-enhanced representation learning over the combination of a large number of datasets; (iii) for model evaluation, we train SAT-Pro, with only 447M parameters, to segment 497 classes across the 72 segmentation datasets via text prompts. We thoroughly evaluate the model from three aspects: averaged by body regions, by classes, and by datasets, demonstrating performance comparable to 72 specialist nnU-Nets, i.e., one nnU-Net trained per dataset/subset, totalling around 2.2B parameters across the 72 datasets. We will release all code and models from this work.
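To illustrate the general idea of text-prompted segmentation described above, the following is a minimal sketch in PyTorch: a stand-in 3D vision backbone produces voxel features, text embeddings of anatomical terminologies act as queries, and each prompt's mask logits come from a dot product between its query and every voxel feature. All module names, dimensions, and the dot-product decoding scheme here are illustrative assumptions, not the actual SAT architecture or its knowledge-enhanced text encoder.

```python
import torch
import torch.nn as nn


class TextPromptedSegmenter(nn.Module):
    """Toy text-prompted segmentation head (illustrative, not the SAT model)."""

    def __init__(self, feat_dim=64, text_dim=128):
        super().__init__()
        # Stand-in vision backbone: two 3D convolutions over a single-channel scan.
        self.backbone = nn.Sequential(
            nn.Conv3d(1, feat_dim, 3, padding=1), nn.GELU(),
            nn.Conv3d(feat_dim, feat_dim, 3, padding=1),
        )
        # Project text embeddings (e.g. from any text encoder) into the voxel feature space.
        self.text_proj = nn.Linear(text_dim, feat_dim)

    def forward(self, volume, text_embeddings):
        # volume: (B, 1, D, H, W); text_embeddings: (P, text_dim), one row per terminology.
        feats = self.backbone(volume)              # (B, C, D, H, W)
        queries = self.text_proj(text_embeddings)  # (P, C)
        # Per-prompt logits: dot product between each query and every voxel feature.
        logits = torch.einsum("bcdhw,pc->bpdhw", feats, queries)
        return logits                              # (B, P, D, H, W)


if __name__ == "__main__":
    model = TextPromptedSegmenter()
    scan = torch.randn(1, 1, 16, 32, 32)     # toy 3D scan
    prompts = torch.randn(3, 128)            # embeddings for 3 hypothetical terminologies
    masks = model(scan, prompts).sigmoid()   # (1, 3, 16, 32, 32), one mask per prompt
    print(masks.shape)
```

In this framing, adding a new anatomical class requires only a new text prompt rather than a new output channel, which is what allows a single model to cover hundreds of classes across many datasets.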
