Reproducible scaling laws for contrastive language-image learning

Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale experiments are becoming increasingly expensive. However, previous work on scaling laws has primarily used private data and models or focused on uni-modal language or vision learning. To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository. Our large-scale experiments involve models trained on up to two billion image-text pairs and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning. We find that the training distribution plays a key role in scaling laws as the OpenAI and OpenCLIP models exhibit different scaling behavior despite identical model architectures and similar training recipes. We open-source our evaluation workflow and all models, including the largest public CLIP models, to ensure reproducibility and make scaling laws research more accessible. Source code and instructions to reproduce this study will be available at https://github.com/LAION-AI/scaling-laws-openclip
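As an illustration of the power-law relationships the abstract describes, the sketch below fits an error-versus-compute law of the form E ≈ a·C^b by linear regression in log-log space. The compute and error values here are hypothetical placeholders for illustration only, not measurements from the paper; the paper's actual fits are produced by its released evaluation workflow.

```python
import numpy as np

# Hypothetical example: fit a power law  error ≈ a * C**b  to a handful of
# (compute, downstream error) measurements. The numbers below are illustrative
# placeholders, NOT results from the paper.
compute = np.array([1e9, 1e10, 1e11, 1e12])   # stand-in for total training compute
error   = np.array([0.60, 0.48, 0.38, 0.30])  # stand-in for downstream zero-shot error

# A power law is linear in log-log space: log(error) = log(a) + b * log(compute),
# so an ordinary least-squares fit on the logs recovers the exponent b.
slope, intercept = np.polyfit(np.log(compute), np.log(error), deg=1)
a, b = np.exp(intercept), slope
print(f"fitted power law: error ≈ {a:.3g} * C^{b:.3f}")

# Extrapolating to a larger compute budget (again purely illustrative).
print("predicted error at C = 1e13:", a * 1e13 ** b)
```

The useful property of such a fit is extrapolation: once the exponent is estimated from smaller runs, the expected benefit of a larger training budget can be predicted before committing the compute.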


Results from the Paper


 Ranked #1 on Zero-Shot Image Classification on Country211 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Zero-Shot Image Classification | Country211 | OpenCLIP ViT-H/14 (34B, LAION-2B) | Top-1 accuracy | 30.01 | #1 | Yes |
| Zero-Shot Cross-Modal Retrieval | Flickr30k | OpenCLIP ViT-H/14 | Image-to-text R@1 | - | #19 | |
| | | | Image-to-text R@5 | 99.3 | #6 | |
| | | | Image-to-text R@10 | - | #18 | |
| | | | Text-to-image R@1 | - | #20 | |
| | | | Text-to-image R@5 | 94.1 | #9 | |
| | | | Text-to-image R@10 | - | #18 | |
| Image Classification | ImageNet | OpenCLIP ViT-H/14 | Top-1 accuracy | 88.5% | #50 | |
| Open Vocabulary Attribute Detection | OVAD-Box benchmark | OpenCLIP ViT-B/32 | Mean average precision | 17.0 | #6 | |
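The zero-shot numbers above come from CLIP-style prompt classification: class names are embedded as text prompts and an image is assigned to the most similar prompt. The sketch below shows how such an evaluation can be run with the OpenCLIP library. The model/pretrained tags, the image path, and the class prompts are assumptions for illustration; the paper's full evaluation workflow is in the linked repository.

```python
import torch
import open_clip
from PIL import Image

# Minimal zero-shot classification sketch with OpenCLIP.
# Model name, pretrained tag, prompts, and image path are illustrative assumptions.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="laion2b_s32b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-H-14")
model.eval()

class_names = ["France", "Japan", "Brazil"]              # hypothetical label set
prompts = [f"a photo i took in {c}." for c in class_names]

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical image path
text = tokenizer(prompts)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each class prompt.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(class_names, probs.squeeze(0).tolist())))
```

In practice the benchmark results in the table aggregate this procedure over full test sets and prompt templates rather than a single image and hand-written prompt.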

Methods