1 code implementation • 31 Jul 2022 • Maosen Li, Siheng Chen, Zijing Zhang, Lingxi Xie, Qi Tian, Ya Zhang
To address the first issue, we propose adaptive graph scattering, which leverages multiple trainable band-pass graph filters to decompose pose features into richer graph spectrum bands.
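The band-pass decomposition above can be sketched with classical (fixed) diffusion-scattering wavelets; the paper's contribution is making such bands trainable, which this minimal NumPy sketch does not attempt. The chain graph and filter construction here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def scattering_bandpass_filters(A, num_bands=3):
    """Fixed diffusion-wavelet band-pass filters Psi_j = T^{2^(j-1)} (I - T^{2^(j-1)}),
    with lazy diffusion T = (I + A_norm) / 2. The paper learns the bands; these
    classical wavelets only illustrate the multi-band decomposition."""
    n = A.shape[0]
    T = 0.5 * (np.eye(n) + normalized_adjacency(A))
    filters = [np.eye(n) - T]                 # j = 0: high-frequency band
    P = T
    for _ in range(1, num_bands):
        filters.append(P @ (np.eye(n) - P))   # band-pass at the next dyadic scale
        P = P @ P
    return filters

def decompose(X, filters):
    """Split node features X (n x d) into one component per spectral band."""
    return [F @ X for F in filters]

# toy skeleton graph: 4 joints in a chain, 8-dim pose features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 8)
bands = decompose(X, scattering_bandpass_filters(A, num_bands=3))
print(len(bands), bands[0].shape)   # 3 (4, 8)
```

Each band keeps the feature shape, so downstream layers can process the bands in parallel and recombine them.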
1 code implementation • CVPR 2022 • Chenxin Xu, Maosen Li, Zhenyang Ni, Ya Zhang, Siheng Chen
From the aspect of interaction capturing, we propose a trainable multiscale hypergraph to capture both pair-wise and group-wise interactions at multiple group sizes.
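The pair-wise versus group-wise distinction maps naturally onto hyperedge sizes: a hyperedge over two agents is a pair-wise interaction, and a larger hyperedge is a group. A minimal sketch of one node-to-hyperedge-to-node aggregation round follows; the incidence matrix here is fixed by hand, whereas the paper learns it.

```python
import numpy as np

def hypergraph_message_passing(X, H):
    """One round of node -> hyperedge -> node mean aggregation.
    X: (n_agents, d) features; H: (n_agents, n_edges) binary incidence matrix.
    Size-2 hyperedges model pair-wise interactions; larger ones model groups."""
    edge_deg = np.maximum(H.sum(axis=0), 1.0)        # agents per hyperedge
    E = (H.T @ X) / edge_deg[:, None]                # hyperedge features
    node_deg = np.maximum(H.sum(axis=1), 1.0)        # hyperedges per agent
    return (H @ E) / node_deg[:, None]               # aggregate back to agents

# 4 agents: one pair (0, 1) and one group of three (1, 2, 3)
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)
X = np.random.randn(4, 16)
out = hypergraph_message_passing(X, H)
print(out.shape)   # (4, 16)
```

Agents 2 and 3 belong only to the group hyperedge, so after one round they receive identical group-level messages; agent 1 mixes pair-wise and group-wise context.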
no code implementations • 25 Aug 2021 • Maosen Li, Siheng Chen, Yangheng Zhao, Ya Zhang, Yanfeng Wang, Qi Tian
The core of MST-GNN is a multiscale spatio-temporal graph that explicitly models the relations in motions at various spatial and temporal scales.
no code implementations • 2 Jul 2021 • Maosen Li, Siheng Chen, Yanning Shen, Genjia Liu, Ivor W. Tsang, Ya Zhang
This paper considers predicting future statuses of multiple agents in an online fashion by exploiting dynamic interactions in the system.
1 code implementation • 31 Dec 2020 • Kun Wei, Cheng Deng, Xu Yang, Maosen Li
Different from traditional incremental classification networks, embedding networks under the incremental learning setting face the semantic gap between the embedding spaces of two adjacent tasks as their main challenge.
1 code implementation • 17 Dec 2020 • Chenxin Xu, Siheng Chen, Maosen Li, Ya Zhang
To handle the decomposition ambiguity in the teacher network, we propose a cycle-consistent architecture that promotes 3D rotation invariance to train the teacher network.
no code implementations • 3 Nov 2020 • Siheng Chen, Maosen Li, Ya Zhang
Compared to previous analytical sampling and recovery, the proposed methods are able to flexibly learn a variety of graph signal models from data by leveraging the learning ability of neural networks; compared to previous neural-network-based sampling and recovery, the proposed methods are designed through exploiting specific graph properties and provide interpretability.
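For contrast with the learned approach, the classical analytical baseline mentioned here can be sketched in a few lines: sample a graph signal at a subset of nodes and recover it by least squares on the low-frequency Laplacian eigenvectors (a k-bandlimited model). The path graph and sample set below are illustrative assumptions.

```python
import numpy as np

def recover_bandlimited(A, sample_idx, y, k):
    """Analytical baseline: recover a k-bandlimited graph signal from samples y
    observed at sample_idx, via least squares on the first k eigenvectors of the
    combinatorial Laplacian. The paper replaces this fixed signal model with
    neural networks learned from data."""
    L = np.diag(A.sum(axis=1)) - A
    _, U = np.linalg.eigh(L)                 # eigenvalues in ascending order
    Uk = U[:, :k]                            # low-frequency basis
    coeff, *_ = np.linalg.lstsq(Uk[sample_idx], y, rcond=None)
    return Uk @ coeff

# path graph on 6 nodes, an exactly 2-bandlimited signal, 3 samples
A = np.diag(np.ones(5), 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A
_, U = np.linalg.eigh(L)
x = U[:, :2] @ np.array([1.0, 0.5])
idx = [0, 2, 5]
x_hat = recover_bandlimited(A, idx, x[idx], k=2)
print(np.allclose(x_hat, x))   # True
```

Recovery is exact here only because the signal truly lies in the chosen 2-dimensional subspace; the learned methods aim to handle signal classes this fixed model misses.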
2 code implementations • NeurIPS 2020 • Maosen Li, Siheng Chen, Ya Zhang, Ivor W. Tsang
Based on trainable hierarchical representations of a graph, GXN enables the interchange of intermediate features across scales to promote information flow.
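The cross-scale interchange can be sketched with a soft cluster-assignment matrix: fine nodes receive up-sampled coarse features while coarse nodes receive pooled fine features. In GXN the assignment comes from trainable hierarchical representations; the fixed assignment below is a stand-in for illustration.

```python
import numpy as np

def cross_scale_exchange(X_fine, X_coarse, S):
    """Bidirectional feature crossing between two graph scales.
    S: (n_fine, n_coarse) cluster assignment (learned in GXN, fixed here).
    Fine nodes get unpooled coarse features; coarse nodes get averaged
    fine features, so information flows both ways."""
    S_norm = S / np.maximum(S.sum(axis=0, keepdims=True), 1e-12)
    X_fine_out = X_fine + S @ X_coarse           # coarse -> fine (unpool)
    X_coarse_out = X_coarse + S_norm.T @ X_fine  # fine -> coarse (mean pool)
    return X_fine_out, X_coarse_out

S = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)  # 4 nodes -> 2 clusters
Xf, Xc = np.random.randn(4, 8), np.random.randn(2, 8)
Xf2, Xc2 = cross_scale_exchange(Xf, Xc, S)
print(Xf2.shape, Xc2.shape)   # (4, 8) (2, 8)
```

The residual form (adding, not replacing, the crossed features) keeps each scale's own representation intact while mixing in the other scale's context.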
1 code implementation • 28 Aug 2020 • Xu Chen, Jiangchao Yao, Maosen Li, Ya Zhang, Yanfeng Wang
Comprehensive results on both link sign prediction and node recommendation tasks demonstrate the effectiveness of DVE.
2 code implementations • CVPR 2020 • Maosen Li, Cheng Deng, Tengjiao Li, Junchi Yan, Xinbo Gao, Heng Huang
Furthermore, we regularize the targeted attack process with metric learning to push adversarial examples away from the true label, yielding more transferable targeted adversarial examples.
1 code implementation • CVPR 2020 • Maosen Li, Siheng Chen, Yangheng Zhao, Ya Zhang, Yanfeng Wang, Qi Tian
The core idea of DMGNN is to use a multiscale graph to comprehensively model the internal relations of a human body for motion feature learning.
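A coarser scale of the multiscale body graph can be built by pooling joint features into body parts. The joint-to-part grouping below is a hand-made assumption for illustration; DMGNN additionally learns relations within and across such scales, which this sketch omits.

```python
import numpy as np

def multiscale_body_features(X_joint, part_of):
    """Average-pool joint features into body-part features.
    X_joint: (n_joints, d); part_of[j] gives the part index of joint j."""
    n_parts = max(part_of) + 1
    P = np.zeros((n_parts, len(part_of)))
    for j, p in enumerate(part_of):
        P[p, j] = 1.0
    P /= P.sum(axis=1, keepdims=True)   # row-normalize: mean pooling
    return P @ X_joint                  # (n_parts, d)

# 6 joints grouped into 3 parts (e.g. torso, left arm, right arm)
part_of = [0, 0, 1, 1, 2, 2]
X_joint = np.random.randn(6, 32)
X_part = multiscale_body_features(X_joint, part_of)
print(X_part.shape)   # (3, 32)
```

Stacking such poolings yields the joint/part/body hierarchy over which cross-scale relations can then be modeled.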
1 code implementation • 17 Mar 2020 • Maosen Li, Siheng Chen, Yangheng Zhao, Ya Zhang, Yanfeng Wang, Qi Tian
The core idea of DMGNN is to use a multiscale graph to comprehensively model the internal relations of a human body for motion feature learning.
1 code implementation • 25 Nov 2019 • Chaoqin Huang, Fei Ye, Jinkun Cao, Maosen Li, Ya Zhang, Cewu Lu
We propose to break this equivalence by erasing selected attributes from the original data and reformulating anomaly detection as a restoration task, where normal and anomalous data are expected to be distinguishable by their restoration errors.
Ranked #21 on Anomaly Detection on One-class CIFAR-10
no code implementations • 5 Oct 2019 • Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, Qi Tian
For the backbone, we propose multi-branch multi-scale graph convolution networks to extract spatial and temporal features.
Ranked #39 on Skeleton Based Action Recognition on NTU RGB+D
1 code implementation • CVPR 2019 • Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, Qi Tian
We validate AS-GCN in action recognition using two skeleton data sets, NTU-RGB+D and Kinetics.