Pathology Language and Image Pre-Training (PLIP) is a vision-and-language foundation model created by fine-tuning CLIP on pathology image–text pairs collected from medical Twitter.
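Because PLIP keeps CLIP's architecture, it inherits CLIP's zero-shot classification mechanism: an image embedding is scored against text embeddings of candidate class prompts by cosine similarity, and the similarities are softmaxed into class probabilities. A minimal NumPy sketch of that scoring step, using random stand-in embeddings (real use would load the released PLIP checkpoint to produce them):

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    """CLIP-style zero-shot scoring: cosine similarity between one
    image embedding and each candidate text embedding, softmaxed."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * txt @ img          # one logit per class prompt
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    return exp / exp.sum()

# Toy stand-in embeddings for two hypothetical class prompts, e.g.
# "an H&E image of tumor tissue" vs "an H&E image of normal tissue".
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(2, 512))
image_emb = text_embs[0] + 0.1 * rng.normal(size=512)  # near class 0

probs = zero_shot_scores(image_emb, text_embs)
print(probs.argmax())  # the class whose prompt embedding is closest
```

The temperature mirrors CLIP's learned logit scale (about 100 after training); the class prompts here are illustrative, not the paper's prompt templates.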
Source: *Leveraging medical Twitter to build a visual–language foundation model for pathology AI*
| Task | Papers | Share |
|---|---|---|
| Benchmarking | 1 | 11.11% |
| Decision Making | 1 | 11.11% |
| Image Classification | 1 | 11.11% |
| Image Retrieval | 1 | 11.11% |
| Retrieval | 1 | 11.11% |
| Zero-Shot Learning | 1 | 11.11% |
| Pedestrian Attribute Recognition | 1 | 11.11% |
| Person Re-Identification | 1 | 11.11% |
| Text based Person Retrieval | 1 | 11.11% |