no code implementations • 29 May 2024 • Zelin Peng, Zhengqin Xu, Zhilin Zeng, Yaoming Wang, Lingxi Xie, Qi Tian, Wei Shen
Since the PEFT strategy is applied symmetrically to CLIP's two modalities (vision and text), the misalignment between them is mitigated.
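The symmetric design can be sketched as attaching structurally identical adapters to both frozen CLIP encoders. This is a minimal numpy sketch under assumed shapes; `make_adapter`, the residual low-rank form, and the feature dimensions are illustrative stand-ins, not the paper's actual modules.

```python
import numpy as np

def make_adapter(dim, rank, rng):
    # Hypothetical low-rank adapter: a down-projection and an up-projection.
    return rng.normal(size=(dim, rank)) * 0.01, np.zeros((rank, dim))

def apply_adapter(x, down, up):
    # Residual low-rank update: x + (x @ down) @ up.
    return x + (x @ down) @ up

rng = np.random.default_rng(0)
dim, rank = 512, 4

# Symmetric PEFT: the same adapter structure on both CLIP branches.
img_adapter = make_adapter(dim, rank, rng)
txt_adapter = make_adapter(dim, rank, rng)

img_feat = rng.normal(size=(2, dim))  # stand-in for frozen image-encoder output
txt_feat = rng.normal(size=(2, dim))  # stand-in for frozen text-encoder output
img_out = apply_adapter(img_feat, *img_adapter)
txt_out = apply_adapter(txt_feat, *txt_adapter)
```

Because both modalities pass through the same adapter structure, their tuned features stay in comparable subspaces rather than drifting apart.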
no code implementations • 28 Nov 2023 • Zelin Peng, Zhengqin Xu, Zhilin Zeng, Lingxi Xie, Qi Tian, Wei Shen
Parameter-efficient fine-tuning (PEFT) is an effective methodology to unleash the potential of large foundation models in novel scenarios with limited training data.
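The core idea of PEFT can be made concrete with a parameter count: only a small set of new weights is trained while the pretrained weight stays frozen. A minimal low-rank (LoRA-style) sketch, with assumed dimensions:

```python
import numpy as np

d_in, d_out, rank = 768, 768, 8
W = np.zeros((d_in, d_out))   # frozen pretrained weight (never updated)
A = np.zeros((d_in, rank))    # trainable low-rank factor
B = np.zeros((rank, d_out))   # trainable low-rank factor

# Effective weight during fine-tuning: W + A @ B.
full_params = W.size
peft_params = A.size + B.size
ratio = peft_params / full_params  # ~2% of full fine-tuning here
```

With limited training data, updating only this small fraction of parameters is far less prone to overfitting than tuning the full model.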
no code implementations • 28 Aug 2023 • Zelin Peng, Zhengqin Xu, Zhilin Zeng, Xiaokang Yang, Wei Shen
Most existing fine-tuning methods attempt to bridge the gaps among different scenarios by introducing a set of new parameters to modify SAM's original parameter space.
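The pattern described — introducing new parameters alongside SAM's frozen ones — can be sketched as a bottleneck adapter inserted after a frozen block. All names and shapes here are hypothetical; the sketch only illustrates that the original parameter space is left untouched while new parameters modify the features.

```python
import numpy as np

def frozen_layer(x, W):
    # Stand-in for a frozen SAM block; W is never updated.
    return x @ W

def bottleneck_adapter(x, down, up):
    # New trainable parameters (down, up) added beside the frozen layer.
    return x + np.maximum(x @ down, 0.0) @ up

rng = np.random.default_rng(1)
d, r = 256, 8
W = rng.normal(size=(d, d))
down = rng.normal(size=(d, r)) * 0.01
up = np.zeros((r, d))  # zero-init so the adapter starts as an identity

x = rng.normal(size=(1, d))
y = bottleneck_adapter(frozen_layer(x, W), down, up)
```

Zero-initializing the up-projection means the adapted model exactly matches the pretrained model at the start of fine-tuning, so training only gradually deviates from SAM's original behavior.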