Image Representations

Contrastive Language-Image Pre-training

Introduced by Radford et al. in Learning Transferable Visual Models From Natural Language Supervision

Contrastive Language-Image Pre-training (CLIP) is an efficient method for learning image representations from natural language supervision; it is essentially a simplified version of ConVIRT trained from scratch. CLIP jointly trains an image encoder and a text encoder to predict the correct pairings within a batch of (image, text) training examples. At test time, the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset's classes.
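The zero-shot classification step can be sketched as follows. This is a minimal illustration, not CLIP's actual implementation: the class-name embeddings are assumed to come from the trained text encoder (here they are just placeholder vectors), and prompt templating (e.g. "a photo of a {class}") is omitted.

```python
import numpy as np

def zero_shot_classify(image_emb, class_text_embs, class_names):
    """Pick the class whose text embedding is most similar to the image embedding.

    image_emb: (D,) embedding of the test image from the image encoder.
    class_text_embs: (C, D) embeddings of the class names from the text encoder.
    class_names: list of C class-name strings.
    """
    # L2-normalize so dot products equal cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb)
    class_text_embs = class_text_embs / np.linalg.norm(
        class_text_embs, axis=1, keepdims=True
    )
    # One cosine-similarity score per class; the class embeddings act as
    # the weights of a linear classifier synthesized at test time.
    scores = class_text_embs @ image_emb
    return class_names[int(np.argmax(scores))]
```

Because the "classifier weights" are just embedded class names, swapping in a new dataset only requires embedding its class descriptions, with no retraining of the image encoder.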

For pre-training, CLIP is trained to predict which of the $N \times N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores.
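The symmetric loss above can be sketched in NumPy. This is an illustrative sketch under stated assumptions, not the paper's training code: the embeddings stand in for encoder outputs, and the temperature value is a hypothetical placeholder (CLIP learns it during training).

```python
import numpy as np

def clip_loss(image_embs, text_embs, temperature=0.07):
    """Symmetric cross-entropy over the N x N cosine-similarity matrix.

    image_embs, text_embs: (N, D) batch embeddings; row i of each is a real pair.
    """
    # L2-normalize so dot products are cosine similarities
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)

    # logits[i, j] compares image i with text j; real pairs lie on the diagonal
    logits = image_embs @ text_embs.T / temperature
    n = logits.shape[0]
    labels = np.arange(n)

    def cross_entropy(lg):
        # numerically stable softmax cross-entropy, averaged over the batch
        lg = lg - lg.max(axis=1, keepdims=True)
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), labels].mean()

    # symmetric: classify the right text for each image, and vice versa
    loss_images = cross_entropy(logits)
    loss_texts = cross_entropy(logits.T)
    return (loss_images + loss_texts) / 2
```

Maximizing the diagonal entries while minimizing the $N^2 - N$ off-diagonal entries is exactly what the two cross-entropy terms do: each row (and each column) is pushed toward a one-hot distribution on its matching pair.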


Source: Learning Transferable Visual Models From Natural Language Supervision

Tasks


Task | Papers | Share
Language Modelling | 82 | 8.11%
Retrieval | 44 | 4.35%
Zero-Shot Learning | 43 | 4.25%
Image Generation | 41 | 4.06%
Semantic Segmentation | 41 | 4.06%
Image Classification | 32 | 3.17%
Large Language Model | 24 | 2.37%
Object Detection | 20 | 1.98%
Image Captioning | 20 | 1.98%
