Compress an ensemble of models into a single one by averaging their weights (under certain pre-conditions).
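The idea above can be sketched in a few lines: the "uniform soup" variant simply takes the parameter-wise mean of several fine-tuned checkpoints of the same architecture. Below is a minimal, hypothetical illustration using plain Python lists as stand-ins for weight tensors (the names `uniform_soup`, `layer.weight`, etc. are illustrative, not from the paper's code).

```python
# Hypothetical sketch of a "uniform soup": average the weights of
# several fine-tuned checkpoints parameter by parameter.
# Checkpoints are dicts mapping parameter names to equal-shape
# weight lists (stand-ins for real tensors).

def uniform_soup(checkpoints):
    """Return a single model whose weights are the element-wise mean."""
    n = len(checkpoints)
    soup = {}
    for name in checkpoints[0]:
        # Sum the same parameter across all checkpoints, then divide.
        soup[name] = [
            sum(ckpt[name][i] for ckpt in checkpoints) / n
            for i in range(len(checkpoints[0][name]))
        ]
    return soup

# Three fine-tuned variants of the same architecture (toy weights).
ckpts = [
    {"layer.weight": [1.0, 2.0], "layer.bias": [0.0]},
    {"layer.weight": [3.0, 2.0], "layer.bias": [3.0]},
    {"layer.weight": [2.0, 5.0], "layer.bias": [0.0]},
]
print(uniform_soup(ckpts))
# {'layer.weight': [2.0, 3.0], 'layer.bias': [1.0]}
```

Because the soup is a single model, inference cost is unchanged; the pre-condition is that all checkpoints share one architecture and start from the same pre-trained initialization, so their weights lie in a region where averaging is meaningful. The paper's "greedy soup" variant adds checkpoints one at a time, keeping each only if it improves held-out accuracy.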
Source: *Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time*
| Task | Papers | Share |
|---|---|---|
| Domain Generalization | 2 | 15.38% |
| Ensemble Learning | 1 | 7.69% |
| Model Compression | 1 | 7.69% |
| Few-Shot Learning | 1 | 7.69% |
| Graph Partitioning | 1 | 7.69% |
| Graph Sampling | 1 | 7.69% |
| Language Modelling | 1 | 7.69% |
| Translation | 1 | 7.69% |
| Multi-Object Tracking | 1 | 7.69% |