1 code implementation • 9 May 2024 • Dan Qiao, Yi Su, Pinzheng Wang, Jing Ye, Wenjing Xie, Yuechi Zhou, Yuyang Ding, Zecheng Tang, Jikai Wang, Yixin Ji, Yue Wang, Pei Guo, Zechen Sun, Zikang Zhang, Juntao Li, Pingfu Chao, Wenliang Chen, Guohong Fu, Guodong Zhou, Qiaoming Zhu, Min Zhang
Large Language Models (LLMs) have played an important role in many fields due to their powerful capabilities. However, their massive number of parameters leads to high deployment requirements and incurs significant inference costs, which impedes their practical applications.
no code implementations • 8 Apr 2024 • Jing Ye, Minzhi Fan, XiaoYu Zhang, Shasha Lu, Mengyao Chai, Yunshan Zhang, Xiaoyu Zhao, Shuang Li, Diming Zhang
Graphene-based nanomaterials have attracted significant attention in recent years for their potential in biomedical and biotechnology applications, owing to their outstanding physical and chemical properties.
no code implementations • 26 Mar 2024 • Xinpei Zhao, Jingyuan Sun, Shaonan Wang, Jing Ye, Xiaohan Zhang, Chengqing Zong
In contrast, we propose a simple yet effective method that guides text reconstruction by directly comparing candidate texts with the predicted text embeddings mapped from brain activities.
1 code implementation • 15 Sep 2023 • Shiyi Zhu, Jing Ye, Wei Jiang, Siqiao Xue, Qi Zhang, Yifan Wu, Jianguo Li
In fact, our work unveils anomalous behaviors in the interaction between Rotary Position Embedding (RoPE) and vanilla self-attention that harm long-context extrapolation.
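The entry above concerns how RoPE interacts with self-attention at long context lengths. For background, here is a minimal NumPy sketch of standard rotary position embedding (the general technique, not this paper's specific analysis); the function name `rope` and the single-vector signature are illustrative:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply rotary position embedding to a vector x at position `pos`.

    x: 1-D array of even dimension d; each consecutive pair of
    components is rotated by an angle pos * theta_i,
    where theta_i = base^(-2i/d).
    """
    d = x.shape[0]
    half = d // 2
    freqs = base ** (-np.arange(half) / half)  # theta_i = base^(-2i/d)
    angles = pos * freqs
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * np.cos(angles) - x2 * np.sin(angles)
    out[1::2] = x1 * np.sin(angles) + x2 * np.cos(angles)
    return out
```

The key property is that the dot product of two rotated vectors depends only on their relative offset: `rope(q, m) @ rope(k, n)` is unchanged if both positions are shifted by the same amount, which is what makes RoPE attractive for extrapolation in the first place.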
1 code implementation • IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018 • Beibei Jin, Yu Hu, Yiming Zeng, Qiankun Tang, Shice Liu, Jing Ye
On the KTH dataset, VarNet outperforms state-of-the-art methods by up to 11.9% on PSNR and 9.5% on SSIM.
Ranked #1 on Video Prediction on KTH (Cond metric)