1 code implementation • 1 Feb 2024 • Victor Chernozhukov, Iván Fernández-Val, Chen Huang, Weining Wang
We show that weak dependence along the panel's time series dimension naturally implies approximate sparsity of the most informative moment conditions, motivating the following approach to remove the bias: First, apply LASSO to the cross-section data at each time period to construct most informative (and cross-fitted) instruments, using lagged values of suitable covariates.
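The first step described above, applying LASSO period by period with cross-fitting to build instruments from lagged covariates, can be sketched as follows. This is a minimal illustration on simulated data; the array layout, the `alpha` value, and the two-fold split are assumptions for exposition, not the paper's implementation:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
N, T, p = 200, 5, 30  # cross-section size, time periods, lagged covariates

# Hypothetical panel: Z[t] holds lagged covariates, y[t] the period-t target.
Z = rng.standard_normal((T, N, p))
beta = np.zeros(p)
beta[:3] = 1.0                       # approximately sparse signal
y = Z @ beta + rng.standard_normal((T, N))

instruments = np.zeros((T, N))
for t in range(T):                   # LASSO on each cross-section separately
    kf = KFold(n_splits=2, shuffle=True, random_state=0)
    for train, hold in kf.split(Z[t]):
        model = Lasso(alpha=0.1).fit(Z[t][train], y[t][train])
        # cross-fitted: each observation's instrument is predicted
        # from a model estimated on the other fold
        instruments[t][hold] = model.predict(Z[t][hold])
```

The cross-fitting step keeps the instrument for each observation independent of that observation's own noise, which is what makes the subsequent bias correction valid.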
no code implementations • 2 Dec 2023 • Likai Chen, Georg Keilbar, Liangjun Su, Weining Wang
We find that in the Gaussian approximations to the test statistics, the dependence structures in the data can be safely ignored due to the localized nature of the statistics.
1 code implementation • NeurIPS 2023 • Mingzhen Sun, Weining Wang, Zihan Qin, Jiahui Sun, Sihan Chen, Jing Liu
Specifically, we propose a video auto-encoder comprising a video encoder that encodes videos into global features and a diffusion-based video decoder that decodes the global features and synthesizes video frames in a non-autoregressive manner.
1 code implementation • 17 Apr 2023 • Sihan Chen, Xingjian He, Longteng Guo, Xinxin Zhu, Weining Wang, Jinhui Tang, Jing Liu
Unlike widely studied vision-language pretraining models, VALOR jointly models the relationships among vision, audio, and language in an end-to-end manner.
Ranked #1 on Video Captioning on VATEX (using extra training data)
1 code implementation • 29 Mar 2023 • Jiawei Liu, Weining Wang, Sihan Chen, Xinxin Zhu, Jing Liu
In this work, we concentrate on the rarely investigated problem of text-guided sounding video generation and propose the Sounding Video Generator (SVG), a unified framework for generating realistic videos along with audio signals.
2 code implementations • CVPR 2023 • Mingzhen Sun, Weining Wang, Xinxin Zhu, Jing Liu
Experimental results demonstrate that our method achieves new state-of-the-art performance on five challenging benchmarks for video prediction and unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101.
1 code implementation • journal 2023 • Yepeng Tang, Weining Wang, Yanwu Yang, Chunjie Zhang, Jing Liu
The TCM aggregates the temporal context information and provides features for the IBM and the FPBM.
no code implementations • 23 Aug 2022 • Matias D. Cattaneo, Richard K. Crump, Weining Wang
Beta-sorted portfolios -- portfolios composed of assets with similar covariation with selected risk factors -- are a popular tool in empirical finance for analyzing models of (conditional) expected returns.
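The beta-sorting construction can be sketched on simulated single-factor data. The portfolio count, noise level, and equal weighting below are illustrative assumptions, not choices taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, K = 500, 60, 5  # assets, months, portfolios

# Hypothetical returns with heterogeneous exposure to one risk factor.
factor = rng.standard_normal(T)
beta = rng.uniform(0.0, 2.0, N)
returns = np.outer(factor, beta) + 0.5 * rng.standard_normal((T, N))

# Step 1: estimate each asset's beta by time-series OLS on the factor.
beta_hat = returns.T @ factor / (factor @ factor)

# Step 2: sort assets into K portfolios by estimated beta; equal-weight within.
order = np.argsort(beta_hat)
portfolios = [returns[:, idx].mean(axis=1) for idx in np.array_split(order, K)]
```

The spread in average returns between the top and bottom beta portfolios is then used to assess the pricing of the chosen risk factor.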
no code implementations • 8 May 2022 • Toru Kitagawa, Weining Wang, Mengshan Xu
This paper develops a novel method for policy choice in a dynamic setting where the available data is a multivariate time series.
1 code implementation • 24 Mar 2022 • Qi Li, Weining Wang, Chengzhong Xu, Zhenan Sun
In addition, semantic information is introduced into the semantic-guided fusion module to control the swapped area and model the pose and expression more accurately.
no code implementations • 27 Jan 2022 • Georg Keilbar, Juan M. Rodriguez-Poo, Alexandra Soberon, Weining Wang
We derive its asymptotic properties, showing that the limiting distribution has a discontinuity that depends on the explanatory power of our basis functions, as measured by the variance of the error in the factor loadings.
1 code implementation • 15 Nov 2021 • Xiu Xu, Weining Wang, Yongcheol Shin, Chaowen Zheng
We propose a dynamic network quantile regression model to investigate quantile connectedness using predetermined network information.
no code implementations • 6 Sep 2021 • Xingjian He, Weining Wang, Zhiyong Xu, Hao Wang, Jie Jiang, Jing Liu
Compared with image scene parsing, video scene parsing introduces temporal information, which can effectively improve the consistency and accuracy of prediction.
2 code implementations • 1 Jul 2021 • Jing Liu, Xinxin Zhu, Fei Liu, Longteng Guo, Zijia Zhao, Mingzhen Sun, Weining Wang, Hanqing Lu, Shiyu Zhou, Jiajun Zhang, Jinqiao Wang
In this paper, we propose an Omni-perception Pre-Trainer (OPT) for cross-modal understanding and generation, by jointly modeling visual, text and audio resources.
Ranked #1 on Image Retrieval on Localized Narratives
no code implementations • 29 Jun 2021 • Xingqun Qi, Muyi Sun, Weining Wang, Xiaoxiao Dong, Qi Li, Caifeng Shan
To tackle these challenges, we propose a novel Semantic-Driven Generative Adversarial Network (SDGAN) which embeds global structure-level style injection and local class-level knowledge re-weighting.
no code implementations • 16 May 2021 • Victor Chernozhukov, Chen Huang, Weining Wang
We propose employing a debiased-regularized, high-dimensional generalized method of moments (GMM) framework to perform inference on large-scale spatial panel networks.
1 code implementation • 17 Feb 2021 • Hao Wang, Weining Wang, Jing Liu
Video semantic segmentation requires exploiting the complex temporal relations between frames of the video sequence.
Ranked #1 on Video Semantic Segmentation on Cityscapes val
no code implementations • ICCV 2021 • Fei Liu, Jing Liu, Weining Wang, Hanqing Lu
Specifically, we present a novel graph memory mechanism to perform relational reasoning, and further develop two types of graph memory: a) visual graph memory that leverages visual information of video for relational reasoning; b) semantic graph memory that is specifically designed to explicitly leverage semantic knowledge contained in the classes and attributes of video objects, and perform relational reasoning in the semantic space.
no code implementations • 16 Dec 2020 • Xinxin Zhu, Weining Wang, Longteng Guo, Jing Liu
The whole process involves a visual understanding module and a language generation module, which brings more challenges to the design of deep neural networks than other tasks.
no code implementations • 23 Sep 2020 • Ai Jun Hou, Weining Wang, Cathy Y. H. Chen, Wolfgang Karl Härdle
We show how the proposed pricing mechanism underlines the importance of jumps in cryptocurrency markets.
no code implementations • CVPR 2019 • Weining Wang, Yan Huang, Liang Wang
Current studies on action detection in untrimmed videos are mostly designed for action classes, where an action is described at the word level, such as jumping, tumbling, or swinging.