1 code implementation • 20 Mar 2024 • Yuxuan Zhou, Xingxing Li, Shengyu Li, Xuanbin Wang, Shaoquan Feng, Yuxuan Tan
Visual simultaneous localization and mapping (VSLAM) has broad applications, with state-of-the-art methods leveraging deep neural networks for better robustness and applicability.
1 code implementation • 13 Dec 2023 • Wenqian Zhang, Molin Huang, Yuxuan Zhou, Juze Zhang, Jingyi Yu, Jingya Wang, Lan Xu
We further provide a strong baseline method, BOTH2Hands, for the novel task of generating vivid two-hand motions from both implicit body dynamics and explicit text prompts.
no code implementations • 22 Nov 2023 • Yuxuan Zhou, Liangcai Gao, Zhi Tang, Baole Wei
Scene Text Image Super-Resolution (STISR) aims to enhance the resolution and legibility of text within low-resolution (LR) images, consequently elevating recognition accuracy in Scene Text Recognition (STR).
2 code implementations • 20 Sep 2023 • Shengbin Yue, Wei Chen, Siyuan Wang, Bingxuan Li, Chenchen Shen, Shujun Liu, Yuxuan Zhou, Yao Xiao, Song Yun, Xuanjing Huang, Zhongyu Wei
We propose DISC-LawLLM, an intelligent legal system utilizing large language models (LLMs) to provide a wide range of legal services.
1 code implementation • 21 Aug 2023 • Katharina Prasse, Steffen Jung, Yuxuan Zhou, Margret Keuper
We propose a method specifically designed for hand action recognition which uses relative angular embeddings and local Spherical Harmonics to create novel hand representations.
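The angular, spherical-harmonics-based hand representation described above can be sketched in a minimal form: convert a relative joint offset to spherical coordinates and evaluate a few low-order real spherical harmonics of its direction (the joint layout and the choice of only three harmonics are illustrative assumptions, not the paper's full representation):

```python
import math

def to_spherical(dx, dy, dz):
    """Relative joint offset -> (radius, polar angle theta, azimuth phi)."""
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.acos(dz / r) if r > 0 else 0.0
    phi = math.atan2(dy, dx)
    return r, theta, phi

def low_order_sh(theta, phi):
    """First few real spherical harmonics of a direction.

    A rotation-aware embedding of the bone direction; a full method
    would use more orders and learn on top of these features.
    """
    y00 = 0.5 / math.sqrt(math.pi)
    y10 = math.sqrt(3.0 / (4.0 * math.pi)) * math.cos(theta)
    y11 = math.sqrt(3.0 / (4.0 * math.pi)) * math.sin(theta) * math.cos(phi)
    return [y00, y10, y11]

# Example: a bone pointing straight "up" along z.
r, theta, phi = to_spherical(0.0, 0.0, 1.0)
embedding = low_order_sh(theta, phi)
```

Stacking such embeddings over all joint pairs yields a hand descriptor that depends on relative angles rather than absolute coordinates.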
1 code implementation • 2 Jun 2023 • Yuxuan Zhou, Ziyu Jin, Meiwei Li, Miao Li, Xien Liu, Xinxin You, Ji Wu
The NLI4CT task aims to determine whether hypotheses are entailed by Clinical Trial Reports (CTRs) and to retrieve the evidence supporting that judgment.
1 code implementation • 19 May 2023 • Yuxuan Zhou, Zhi-Qi Cheng, Jun-Yan He, Bin Luo, Yifeng Geng, Xuansong Xie
As a remedy, we propose a threefold strategy: (1) we introduce a novel pathway that encodes bone connectivity via graph distances.
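Encoding bone connectivity via graph distances can be sketched as computing all-pairs hop counts on the skeleton graph, which can then index a learned bias table in an attention layer (the 5-joint toy skeleton below is an assumption, not the NTU joint layout):

```python
from collections import deque

# Hypothetical toy skeleton: joint 1 is a hub connected to 0, 2, and 3,
# and 4 hangs off 3. Not the actual dataset topology.
EDGES = [(0, 1), (1, 2), (1, 3), (3, 4)]
NUM_JOINTS = 5

def hop_distances(edges, num_joints):
    """All-pairs shortest hop counts on the skeleton graph via BFS."""
    adj = [[] for _ in range(num_joints)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    dist = [[-1] * num_joints for _ in range(num_joints)]
    for src in range(num_joints):
        dist[src][src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[src][v] == -1:  # not yet visited from src
                    dist[src][v] = dist[src][u] + 1
                    queue.append(v)
    return dist

D = hop_distances(EDGES, NUM_JOINTS)
# D[i][j] could index a learned embedding table that biases
# joint-to-joint attention, injecting skeletal topology into the model.
```

Each distinct hop count gets its own learned bias, so physically adjacent bones can be treated differently from distant ones without hard-coding locality.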
1 code implementation • 17 Nov 2022 • Yuxuan Zhou, Zhi-Qi Cheng, Chao Li, Yanwen Fang, Yifeng Geng, Xuansong Xie, Margret Keuper
Skeleton-based action recognition aims to recognize human actions given human joint coordinates with skeletal interconnections.
Ranked #7 on Skeleton Based Action Recognition on NTU RGB+D 120
3 code implementations • ICCV 2023 • Wangmeng Xiang, Chao Li, Yuxuan Zhou, Biao Wang, Lei Zhang
More specifically, we employ a pre-trained large-scale language model as the knowledge engine to automatically generate text descriptions for body parts movements of actions, and propose a multi-modal training scheme by utilizing the text encoder to generate feature vectors for different body parts and supervise the skeleton encoder for action representation learning.
Ranked #5 on Skeleton Based Action Recognition on N-UCLA
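The multi-modal training scheme described above can be sketched as a symmetric contrastive alignment between skeleton features and text-encoder features for paired samples (a generic InfoNCE-style sketch with assumed shapes and temperature, not the paper's exact objective):

```python
import numpy as np

def contrastive_loss(skel_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE-style loss aligning paired skeleton and text
    features (row i of each matrix is assumed to describe sample i)."""
    s = skel_feats / np.linalg.norm(skel_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = s @ t.T / temperature          # cosine similarities
    labels = np.arange(len(s))              # matching pairs on the diagonal

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(lg) / np.exp(lg).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # Average of skeleton-to-text and text-to-skeleton directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each skeleton embedding toward the text description of its own action and away from descriptions of other actions, which is one standard way a frozen text encoder can supervise a skeleton encoder.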
1 code implementation • 15 Jun 2022 • Yuxuan Zhou, Wangmeng Xiang, Chao Li, Biao Wang, Xihan Wei, Lei Zhang, Margret Keuper, Xiansheng Hua
Unlike convolutional inductive biases, which are forced to focus exclusively on hard-coded local regions, our proposed SPs are learned by the model itself and take a variety of spatial relations into account.
Ranked #153 on Image Classification on ImageNet
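The contrast drawn above can be sketched numerically: a convolutional bias is a hard attention mask over a fixed local window, while a learned spatial prior (SP) is a free bias matrix trained with the model (window size, token count, and initialization below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, k = 6, 1

# Convolution-like inductive bias: attention hard-restricted to a
# +/-k neighborhood; everything else is masked out.
idx = np.arange(n_tokens)
local_mask = np.where(np.abs(idx[:, None] - idx[None, :]) <= k, 0.0, -np.inf)

# A learned spatial prior: a trainable bias matrix (here just randomly
# initialized for illustration), so any pairwise spatial relation can
# be emphasized or suppressed by training rather than hard-coded.
spatial_prior = rng.normal(scale=0.1, size=(n_tokens, n_tokens))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

logits = rng.normal(size=(n_tokens, n_tokens))
attn_local = softmax(logits + local_mask)   # exact zeros outside the window
attn_sp = softmax(logits + spatial_prior)   # dense, data-driven weights
```

The masked variant can never attend outside its window; the SP variant starts dense and learns which spatial relations matter.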
1 code implementation • Findings (ACL) 2022 • Yuxuan Zhou, Xien Liu, Kaiyin Zhou, Ji Wu
The table-based fact verification task has recently gained widespread attention, yet it remains a very challenging problem.
no code implementations • SEMEVAL 2021 • Yuxuan Zhou, Kaiyin Zhou, Xien Liu, Ji Wu, Xiaodan Zhu
This paper describes our system for verifying statements with tables at SemEval-2021 Task 9.
1 code implementation • NeurIPS 2020 • Robin Tibor Schirrmeister, Yuxuan Zhou, Tonio Ball, Dan Zhang
We refine previous investigations of this failure of anomaly detection in invertible generative networks and explain it as a combination of model bias and domain prior: convolutional networks learn similar low-level feature distributions when trained on any natural image dataset, and these low-level features dominate the likelihood.
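The failure mode described above can be illustrated with a deliberately simple toy model: when a "low-level" term shared across natural images dominates the log-likelihood, an out-of-distribution input with typical low-level statistics can outscore a genuine in-distribution input, while a log-likelihood ratio against a model of generic natural images cancels the shared term (the Gaussian setup and all numbers are illustrative assumptions, not the paper's models):

```python
import math

def log_gauss(x, mu, sigma):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# Toy two-feature input: f_low mimics low-level image statistics shared by
# all natural images (very sharp density), f_sem mimics semantic content.
def in_dist_loglik(f_low, f_sem):
    return log_gauss(f_low, 0.0, 0.01) + log_gauss(f_sem, 0.0, 1.0)

def general_loglik(f_low, f_sem):
    # Generic natural-image model: same sharp low-level term, much
    # broader semantic term.
    return log_gauss(f_low, 0.0, 0.01) + log_gauss(f_sem, 0.0, 5.0)

# An OOD input with perfectly typical low-level statistics ...
ood = (0.0, 1.0)
# ... versus an in-distribution input whose low-level feature is slightly off.
ind = (0.02, 0.0)

raw_ood, raw_ind = in_dist_loglik(*ood), in_dist_loglik(*ind)      # failure:
                                                                   # raw_ood > raw_ind
ratio_ood = raw_ood - general_loglik(*ood)                         # ratio cancels the
ratio_ind = raw_ind - general_loglik(*ind)                         # shared low-level term
```

Here the raw likelihood ranks the OOD input above the in-distribution one because the sharp low-level term dominates, while the likelihood ratio restores the expected ordering.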