Linear Warmup With Linear Decay is a learning rate schedule in which the learning rate is increased linearly for $n$ updates and then decayed linearly afterwards.
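A minimal sketch of such a schedule as a plain function of the step index; the parameter names (`max_lr`, `warmup_steps`, `total_steps`) and the choice to decay to zero are illustrative assumptions, not part of any specific library API:

```python
def lr_at_step(step, max_lr=1e-3, warmup_steps=1000, total_steps=10000):
    """Linear warmup for `warmup_steps` updates, then linear decay to 0.

    Parameter names and the decay target (zero) are illustrative choices.
    """
    if step < warmup_steps:
        # Warmup phase: learning rate rises linearly from 0 to max_lr.
        return max_lr * step / warmup_steps
    # Decay phase: learning rate falls linearly from max_lr to 0.
    decay_steps = total_steps - warmup_steps
    progress = (step - warmup_steps) / decay_steps
    return max_lr * max(0.0, 1.0 - progress)
```

For example, with the defaults above the rate peaks at `max_lr` exactly at step 1000 and reaches half of `max_lr` midway through the decay phase.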
| Task | Papers | Share |
|---|---|---|
| Retrieval | 114 | 12.51% |
| Language Modelling | 102 | 11.20% |
| Question Answering | 58 | 6.37% |
| Large Language Model | 35 | 3.84% |
| Sentiment Analysis | 32 | 3.51% |
| Sentence | 31 | 3.40% |
| Text Classification | 28 | 3.07% |
| Information Retrieval | 21 | 2.31% |
| Text Generation | 19 | 2.09% |