Vision and Language Pre-Trained Models

Pathology Language and Image Pre-Training

Introduced by Huang et al. in Leveraging medical Twitter to build a visual–language foundation model for pathology AI

Pathology Language and Image Pre-Training (PLIP) is a vision-and-language foundation model for pathology, created by fine-tuning CLIP on pathology image–text pairs collected from medical Twitter.

Source: Leveraging medical Twitter to build a visual–language foundation model for pathology AI
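Because PLIP keeps the CLIP architecture, it can be used for zero-shot classification of pathology images by comparing an image embedding against text-prompt embeddings. Below is a minimal sketch using the Hugging Face transformers CLIP classes; the checkpoint identifier "vinid/plip", the file name, and the label prompts are assumptions for illustration, not guaranteed by the source.

```python
# Minimal zero-shot classification sketch with a CLIP-style pathology model.
# The "vinid/plip" Hub identifier and the example prompts are assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("vinid/plip")       # assumed checkpoint name
processor = CLIPProcessor.from_pretrained("vinid/plip")

image = Image.open("patch.png")                        # hypothetical H&E tile
labels = [
    "an H&E image of benign tissue",
    "an H&E image of adenocarcinoma",
]

# Encode image and text prompts, then compare them in the shared embedding space.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)       # one probability per label

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

The same image and text encoders can also be used to embed whole collections of patches or captions for retrieval, in the same way plain CLIP embeddings are used.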
