no code implementations • ECCV 2020 • Guangyi Chen, Yongming Rao, Jiwen Lu, Jie Zhou
Specifically, we disentangle the video representation into the temporal coherence and motion parts and randomly change the scale of the temporal motion features as the adversarial noise.
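The motion-scaling idea can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the split into a temporal mean (coherence) and a residual (motion), the function name, and the scale range are all assumptions.

```python
import numpy as np

def perturb_motion(seq_feats, scale_range=(0.5, 1.5), rng=None):
    """Split per-frame features into a temporal-coherence part (the mean
    over time) and a motion part (the residual), then rescale the motion
    part by a random factor to act as adversarial-style noise."""
    rng = np.random.default_rng(rng)
    coherence = seq_feats.mean(axis=0, keepdims=True)  # temporal coherence
    motion = seq_feats - coherence                     # per-frame motion residual
    scale = rng.uniform(*scale_range)                  # random motion scale
    return coherence + scale * motion

feats = np.random.default_rng(0).normal(size=(8, 16))  # (T, D) frame features
perturbed = perturb_motion(feats, rng=1)
# The coherence part (temporal mean) is unchanged; only the motion is rescaled.
```

Because the motion residual has zero temporal mean, the perturbation leaves the coherence part intact by construction.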
no code implementations • ECCV 2020 • Guangyi Chen, Yuhao Lu, Jiwen Lu, Jie Zhou
Experimental results demonstrate that our DCML method explores credible and valuable training data and improves the performance of unsupervised domain adaptation.
no code implementations • 24 May 2024 • Zijian Li, Yifan Shen, Kaitao Zheng, Ruichu Cai, Xiangchen Song, Mingming Gong, Zhifeng Hao, Zhengmao Zhu, Guangyi Chen, Kun Zhang
To fill this gap, we propose an IDentification framework for instantaneOus Latent dynamics (IDOL) by imposing a sparse influence constraint that the latent causal processes have sparse time-delayed and instantaneous relations.
1 code implementation • 22 Apr 2024 • Shiyi Zhang, Sule Bai, Guangyi Chen, Lei Chen, Jiwen Lu, Junle Wang, Yansong Tang
NAE is a more challenging task because it requires both narrative flexibility and evaluation rigor.
no code implementations • 20 Feb 2024 • Zijian Li, Ruichu Cai, Zhenhui Yang, Haiqin Huang, Guangyi Chen, Yifan Shen, Zhengming Chen, Xiangchen Song, Zhifeng Hao, Kun Zhang
To solve this problem, we propose to learn IDentifiable latEnt stAtes (IDEA) to detect when distribution shifts occur.
no code implementations • 20 Feb 2024 • Yuke Li, Guangyi Chen, Ben Abramowitz, Stefano Anzellotti, Donglai Wei
Moreover, we validate that the learned temporal dynamic transition and temporal dynamic generation modules possess transferable qualities.
1 code implementation • 20 Feb 2024 • Loka Li, Ignavier Ng, Gongxu Luo, Biwei Huang, Guangyi Chen, Tongliang Liu, Bin Gu, Kun Zhang
This discrepancy has motivated the development of federated causal discovery (FCD) approaches.
1 code implementation • 19 Feb 2024 • Loka Li, Zhenhao Chen, Guangyi Chen, Yixuan Zhang, Yusheng Su, Eric Xing, Kun Zhang
We have experimentally observed that LLMs possess the capability to understand the "confidence" in their own responses.
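A toy sketch of such confidence elicitation is shown below. The prompt wording, the `elicit_confidence` helper, and the `stub_llm` stand-in are illustrative assumptions, not the paper's actual procedure or any real LLM API.

```python
def elicit_confidence(llm, question):
    """Ask the model for an answer, then ask it to rate its own
    confidence in that answer (prompt wording is a guess, not the
    paper's actual template)."""
    answer = llm(question)
    score = llm(f"On a scale of 0-100, how confident are you that "
                f"'{answer}' correctly answers '{question}'? "
                f"Reply with a number only.")
    return answer, float(score) / 100.0

# Stub standing in for a real LLM API call.
def stub_llm(prompt):
    return "90" if "confident" in prompt else "Paris"

answer, conf = elicit_confidence(stub_llm, "What is the capital of France?")
# → ("Paris", 0.9)
```

The elicited score can then gate whether a self-correction round is worth attempting.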
1 code implementation • 25 Jan 2024 • Guangyi Chen, Yifan Shen, Zhenhao Chen, Xiangchen Song, Yuewen Sun, Weiran Yao, Xiao Liu, Kun Zhang
Identifying the underlying time-delayed latent causal processes in sequential data is vital for grasping temporal dynamics and for downstream reasoning.
no code implementations • 22 Dec 2023 • Yuke Li, Lixiong Chen, Guangyi Chen, Ching-Yao Chan, Kun Zhang, Stefano Anzellotti, Donglai Wei
To accurately predict a pedestrian's trajectory in a crowd, one must consistently account for the pedestrian's underlying socio-temporal interactions with other pedestrians.
1 code implementation • 8 Nov 2023 • Zijian Li, Zunhong Xu, Ruichu Cai, Zhenhui Yang, Yuguang Yan, Zhifeng Hao, Guangyi Chen, Kun Zhang
Specifically, we first formulate the data generation process from the atom level to the molecular level, where the latent space is split into SI substructures, SR substructures, and SR atom variables.
1 code implementation • NeurIPS 2023 • Xiangchen Song, Weiran Yao, Yewen Fan, Xinshuai Dong, Guangyi Chen, Juan Carlos Niebles, Eric Xing, Kun Zhang
In unsupervised causal representation learning for sequential data with time-delayed latent causal influences, strong identifiability results for the disentanglement of causally-related latent variables have been established in stationary settings by leveraging temporal structure.
1 code implementation • NeurIPS 2023 • Zijian Li, Ruichu Cai, Guangyi Chen, Boyang Sun, Zhifeng Hao, Kun Zhang
To mitigate the need for these strict assumptions, we propose a subspace identification theory that guarantees the disentanglement of domain-invariant and domain-specific variables under less restrictive constraints regarding domain numbers and transformation properties, thereby facilitating domain adaptation by minimizing the impact of domain shifts on invariant variables.
1 code implementation • 24 Aug 2023 • Sheng Zhang, Muzammal Naseer, Guangyi Chen, Zhiqiang Shen, Salman Khan, Kun Zhang, Fahad Khan
To address this challenge, we propose the Self Structural Semantic Alignment (S^3A) framework, which extracts the structural semantic information from unlabeled data while simultaneously self-learning.
1 code implementation • ICCV 2023 • Guangyi Chen, Xiao Liu, Guangrun Wang, Kun Zhang, Philip H. S. Torr, Xiao-Ping Zhang, Yansong Tang
To bridge these gaps, in this paper, we propose Tem-Adapter, which enables the learning of temporal dynamics and complex semantics by a visual Temporal Aligner and a textual Semantic Aligner.
Ranked #1 on Video Question Answering on SUTD-TrafficQA
1 code implementation • 7 Jul 2023 • Xiao Liu, Guangyi Chen, Yansong Tang, Guangrun Wang, Xiao-Ping Zhang, Ser-Nam Lim
Composing simple elements into complex concepts is crucial yet challenging, especially for 3D action generation.
no code implementations • 10 Jun 2023 • Lingjing Kong, Shaoan Xie, Weiran Yao, Yujia Zheng, Guangyi Chen, Petar Stojanov, Victor Akinwande, Kun Zhang
In general, without further assumptions, the joint distribution of the features and the label is not identifiable in the target domain.
1 code implementation • CVPR 2023 • Lingjing Kong, Martin Q. Ma, Guangyi Chen, Eric P. Xing, Yuejie Chi, Louis-Philippe Morency, Kun Zhang
In this work, we formally characterize and justify existing empirical insights and provide theoretical guarantees of MAE.
1 code implementation • 10 May 2023 • Jiaqi Sun, Lin Zhang, Guangyi Chen, Kun Zhang, Peng Xu, Yujiu Yang
Graph neural networks aim to learn representations for graph-structured data and show impressive performance, particularly in node classification.
1 code implementation • CVPR 2023 • Guangyi Chen, Zhenhao Chen, Shunxing Fan, Kun Zhang
Specifically, we model the trajectory sampling as a Gaussian process and construct an acquisition function to measure the potential sampling value.
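A minimal sketch of this idea follows: a Gaussian-process surrogate over already-scored samples, with a UCB-style acquisition value to pick the next sample. The 1-D latent codes, kernel, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """RBF kernel between two sets of points."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X, y, Xq, noise=1e-6):
    """Posterior mean and std of a zero-mean GP at query points Xq."""
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Ks = rbf(X, Xq)
    mu = Ks.T @ K_inv @ y
    var = np.diag(rbf(Xq, Xq) - Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

# Already-scored samples: 1-D latent codes with their observed values.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.2, 0.9, 0.1])
Xq = np.linspace(0.0, 1.0, 21)[:, None]      # candidate codes

mu, sigma = gp_posterior(X, y, Xq)
ucb = mu + 2.0 * sigma                       # acquisition: value + uncertainty
next_sample = Xq[np.argmax(ucb)]             # most promising next sample
```

The acquisition trades off exploiting high-scoring regions against exploring uncertain ones, which is what makes the sampling budget go further.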
no code implementations • 11 Jan 2023 • Qiaosong Chu, Shuyan Li, Guangyi Chen, Kai Li, Xiu Li
Source-free object detection (SFOD) aims to transfer a detector pre-trained on a label-rich source domain to an unlabeled target domain without seeing source data.
1 code implementation • CVPR 2023 • Sheng Zhang, Salman Khan, Zhiqiang Shen, Muzammal Naseer, Guangyi Chen, Fahad Khan
The GNCD setting aims to categorize unlabeled training data coming from known and novel classes by leveraging the information of partially labeled known classes.
no code implementations • 24 Oct 2022 • Weiran Yao, Guangyi Chen, Kun Zhang
In this work, we establish the identifiability theories of nonparametric latent causal processes from their nonlinear mixtures under fixed temporal causal influences and analyze how distribution changes can further benefit the disentanglement.
1 code implementation • 3 Oct 2022 • Guangyi Chen, Weiran Yao, Xiangchen Song, Xinyue Li, Yongming Rao, Kun Zhang
To solve this problem, we propose to apply optimal transport to match the vision and text modalities.
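Such a matching can be computed with entropy-regularized optimal transport (Sinkhorn iterations). The sketch below is a generic illustration with random toy embeddings, not the paper's model; the function name and cost choice (1 minus cosine similarity) are assumptions.

```python
import numpy as np

def sinkhorn(cost, eps=0.1, iters=200):
    """Entropic optimal transport between uniform marginals via
    Sinkhorn iterations; returns the soft matching (transport plan)."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / eps)                   # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):                    # alternating scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
vis = rng.normal(size=(4, 8))                 # toy "vision" features
txt = rng.normal(size=(4, 8))                 # toy "text" features
vis /= np.linalg.norm(vis, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

plan = sinkhorn(1.0 - vis @ txt.T)            # cost = 1 - cosine similarity
```

The resulting plan is a doubly-stochastic-style soft assignment between the two modalities rather than a hard one-to-one matching.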
1 code implementation • CVPR 2022 • Jinglin Xu, Yongming Rao, Xumin Yu, Guangyi Chen, Jie Zhou, Jiwen Lu
Most existing action quality assessment methods rely on the deep features of an entire video to predict the score, which is less reliable due to the non-transparent inference process and poor interpretability.
1 code implementation • CVPR 2022 • Tianpei Gu, Guangyi Chen, Junlong Li, Chunze Lin, Yongming Rao, Jie Zhou, Jiwen Lu
Human behavior is inherently indeterminate, which requires a pedestrian trajectory prediction system to model the multi-modality of future motion states.
no code implementations • 10 Feb 2022 • Weiran Yao, Guangyi Chen, Kun Zhang
Specifically, the framework factorizes unknown distribution shifts into transition distribution changes caused by fixed dynamics and time-varying latent causal relations, and by global changes in observation.
1 code implementation • CVPR 2022 • Yongming Rao, Wenliang Zhao, Guangyi Chen, Yansong Tang, Zheng Zhu, Guan Huang, Jie Zhou, Jiwen Lu
In this work, we present a new framework for dense prediction by implicitly and explicitly leveraging the pre-trained knowledge from CLIP.
1 code implementation • ICCV 2021 • Yongming Rao, Guangyi Chen, Jiwen Lu, Jie Zhou
Unlike most existing methods that learn visual attention based on conventional likelihood, we propose to learn the attention with counterfactual causality, which provides a tool to measure the attention quality and a powerful supervisory signal to guide the learning process.
Ranked #8 on Vehicle Re-Identification on VehicleID Medium
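In spirit, the counterfactual signal compares the prediction under the learned attention with the prediction under a random-attention intervention. The NumPy sketch below is a toy illustration of that comparison; the function, the linear head, and the region features are all hypothetical, not the paper's model.

```python
import numpy as np

def attention_effect(feats, attn, classifier, rng=None):
    """Counterfactual effect of attention: prediction under the learned
    attention minus prediction under a random-attention intervention."""
    rng = np.random.default_rng(rng)
    factual = classifier(attn @ feats)              # learned attention
    rand = rng.random(attn.shape)
    rand /= rand.sum(axis=-1, keepdims=True)        # random attention baseline
    counterfactual = classifier(rand @ feats)
    return factual - counterfactual                 # extra supervisory signal

feats = np.eye(3)                     # three toy region features
attn = np.array([[0.8, 0.1, 0.1]])    # learned attention over regions
clf = lambda pooled: pooled @ np.array([1.0, 0.0, 0.0])  # toy linear head
effect = attention_effect(feats, attn, clf, rng=0)
```

Maximizing this effect rewards attention maps that genuinely improve the prediction relative to the random baseline, rather than attention that merely looks plausible.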
1 code implementation • 11 Aug 2021 • Guangyi Chen, Tianpei Gu, Jiwen Lu, Jin-An Bao, Jie Zhou
Experimental results demonstrate the superiority of our method, which outperforms the state-of-the-art methods by a large margin with limited computational cost.
Ranked #21 on Person Re-Identification on MSMT17
1 code implementation • ICCV 2021 • Guangyi Chen, Junlong Li, Jiwen Lu, Jie Zhou
Most existing methods learn to predict future trajectories from behavior cues in past trajectories and interaction cues from the environment.
1 code implementation • ICCV 2021 • Guangyi Chen, Junlong Li, Nuoxing Zhou, Liangliang Ren, Jiwen Lu
In this paper, we present a distribution discrimination (DisDis) method to predict personalized motion patterns by distinguishing the potential distributions.
no code implementations • ICCV 2019 • Guangyi Chen, Chunze Lin, Liangliang Ren, Jiwen Lu, Jie Zhou
Unlike most existing methods which train the attention mechanism in a weakly-supervised manner and ignore the attention confidence level, we learn the attention with a critic which measures the attention quality and provides a powerful supervisory signal to guide the learning process.
1 code implementation • ICCV 2019 • Guangyi Chen, Tianren Zhang, Jiwen Lu, Jie Zhou
In this paper, we present a deep meta metric learning (DMML) approach for visual recognition.