1 code implementation • 21 Oct 2023 • Yanpeng Zhao, Ivan Titov
We consider a zero-shot transfer learning setting where a model is trained on the source domain and is directly applied to target domains, without any further training.
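A minimal sketch of this protocol, assuming a generic `model` exposing a training loss and a `score` method (both hypothetical names): fit on source-domain batches only, then evaluate on each target domain with gradients disabled.

```python
import torch

def train_then_transfer(model, optimizer, source_loader, target_loaders):
    model.train()
    for batch in source_loader:            # fit on the source domain only
        loss = model(batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    model.eval()
    results = {}
    with torch.no_grad():                  # no further training on targets
        for name, loader in target_loaders.items():
            scores = [model.score(batch) for batch in loader]
            results[name] = sum(scores) / len(scores)
    return results
```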
no code implementations • 30 Apr 2023 • Yanpeng Zhao, Siyu Gao, Yunbo Wang, Xiaokang Yang
The voxel features and global features are complementary and are both leveraged by a compositional NeRF decoder for volume rendering.
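As one way to picture this fusion, here is a hedged sketch (not the paper's implementation) of a decoder that conditions NeRF-style density and color heads on interpolated voxel features concatenated with a global scene code; all layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositionalNeRFDecoder(nn.Module):
    def __init__(self, voxel_dim=32, global_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(voxel_dim + global_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)   # volume density
        self.rgb_head = nn.Linear(hidden, 3)     # emitted color

    def forward(self, xyz, voxel_feat, global_feat):
        # xyz: (N, 3) sample points; voxel_feat: (N, voxel_dim) locally
        # interpolated features; global_feat: (N, global_dim) scene code.
        h = self.mlp(torch.cat([xyz, voxel_feat, global_feat], dim=-1))
        sigma = F.softplus(self.sigma_head(h))   # non-negative density
        rgb = torch.sigmoid(self.rgb_head(h))    # colors in [0, 1]
        return sigma, rgb
```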
no code implementations • CVPR 2022 • Rowan Zellers, Jiasen Lu, Ximing Lu, Youngjae Yu, Yanpeng Zhao, Mohammadreza Salehi, Aditya Kusupati, Jack Hessel, Ali Farhadi, Yejin Choi
Given a video, we replace snippets of text and audio with a MASK token; the model learns by choosing the correct masked-out snippet (a contrastive objective, sketched below).
Ranked #6 on Action Classification on Kinetics-600 (using extra training data)
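The masked-snippet objective reads as a contrastive selection problem. A minimal sketch, assuming the model emits one vector per MASK position and each candidate snippet is encoded independently; this is an in-batch InfoNCE stand-in, not the released training code.

```python
import torch
import torch.nn.functional as F

def masked_snippet_loss(pred, targets, temperature=0.07):
    # pred: (B, D) model outputs at the MASK positions
    # targets: (B, D) encodings of the true masked-out snippets
    pred = F.normalize(pred, dim=-1)
    targets = F.normalize(targets, dim=-1)
    logits = pred @ targets.t() / temperature                # (B, B) similarities
    labels = torch.arange(pred.size(0), device=pred.device)  # diagonal is correct
    return F.cross_entropy(logits, labels)
```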
1 code implementation • NAACL 2022 • Yanpeng Zhao, Jack Hessel, Youngjae Yu, Ximing Lu, Rowan Zellers, Yejin Choi
In a difficult zero-shot setting with no paired audio-text data, our model demonstrates state-of-the-art zero-shot performance on the ESC50 and US8K audio classification tasks, and even surpasses the supervised state of the art for Clotho caption retrieval (with audio queries) by 2.2% R@1.
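Zero-shot audio classification in this setup amounts to nearest-neighbor search in a shared audio-text embedding space. A small illustrative sketch, with `audio_encoder` and `text_encoder` as hypothetical stand-ins for the trained encoders:

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(audio_encoder, text_encoder, audio_clip, class_names):
    with torch.no_grad():
        a = F.normalize(audio_encoder(audio_clip), dim=-1)   # (1, D)
        t = F.normalize(text_encoder(class_names), dim=-1)   # (C, D)
    sims = (a @ t.t()).squeeze(0)      # cosine similarity to each class name
    return class_names[sims.argmax().item()]
```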
1 code implementation • ACL 2021 • Songlin Yang, Yanpeng Zhao, Kewei Tu
Neural lexicalized PCFGs (L-PCFGs) have been shown to be effective in grammar induction.
1 code implementation • NAACL 2021 • Songlin Yang, Yanpeng Zhao, Kewei Tu
In this work, we present a new parameterization of PCFGs based on tensor decomposition, which has at most quadratic computational complexity in the number of symbols and therefore allows us to use a much larger number of symbols.
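The key trick can be sketched with a rank-r CP/Kruskal decomposition of the binary-rule tensor, T[A, B, C] ≈ Σ_r U[A, r] V[r, B] W[r, C], so one inside-algorithm combination costs O(m·r) rather than O(m³). The factor names U, V, W and the sizes below are illustrative, not the paper's exact parameterization.

```python
import torch

m, r = 250, 500                # symbols and decomposition rank (illustrative)
U = torch.softmax(torch.randn(m, r), dim=-1)  # head factor, normalized over r
V = torch.softmax(torch.randn(r, m), dim=-1)  # left-child factor, over symbols
W = torch.softmax(torch.randn(r, m), dim=-1)  # right-child factor, over symbols

def inside_step(beta_left, beta_right):
    # beta_left, beta_right: (m,) inside scores of the two sub-spans.
    s = (V @ beta_left) * (W @ beta_right)    # (r,) work in rank space: O(m*r)
    return U @ s                              # (m,) scores for each parent A
```

With these normalizations, Σ_{B,C} Σ_r U[A,r] V[r,B] W[r,C] = 1 for every parent A, so the implied binary-rule distributions remain valid.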
no code implementations • EACL 2021 • Kewei Tu, Yong Jiang, Wenjuan Han, Yanpeng Zhao
Unsupervised parsing learns a syntactic parser from training sentences without parse tree annotations.
2 code implementations • EACL (AdaptNLP) 2021 • Yanpeng Zhao, Ivan Titov
Compound probabilistic context-free grammars (C-PCFGs) have recently established a new state of the art for unsupervised phrase-structure grammar induction.
1 code implementation • EMNLP 2020 • Yanpeng Zhao, Ivan Titov
In this work, we study visually grounded grammar induction and learn a constituency parser from both unlabeled text and its visual groundings.
1 code implementation • 1 May 2020 • Yanpeng Zhao, Ivan Titov
Nominal roles are not labeled in the training data, and the learning objective instead pushes the labeler to assign roles predictive of the arguments.
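One way to read "roles predictive of the arguments" is an autoencoding objective: the labeler soft-assigns a role, and a decoder must reconstruct the argument from that role alone. The sketch below is a loose paraphrase with hypothetical module names, not the paper's factorization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RoleAutoencoder(nn.Module):
    def __init__(self, arg_dim=128, n_roles=12, vocab=10000):
        super().__init__()
        self.labeler = nn.Linear(arg_dim, n_roles)   # role scores per argument
        self.role_emb = nn.Embedding(n_roles, arg_dim)
        self.decoder = nn.Linear(arg_dim, vocab)     # reconstruct the argument

    def forward(self, arg_feats, arg_ids):
        # arg_feats: (N, arg_dim) argument encodings; arg_ids: (N,) word ids
        q = torch.softmax(self.labeler(arg_feats), dim=-1)  # soft role posterior
        recon = self.decoder(q @ self.role_emb.weight)      # role -> argument
        return F.cross_entropy(recon, arg_ids)              # reconstruction loss
```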
no code implementations • 13 Aug 2018 • Yanpeng Zhao, Wei Bi, Deng Cai, Xiaojiang Liu, Kewei Tu, Shuming Shi
Then, by recombining the content with the target style, we decode a sentence in the target domain.
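A hedged sketch of the recombination step, with placeholder architecture choices (a GRU encoder-decoder and additive style codes) rather than the paper's model:

```python
import torch
import torch.nn as nn

class StyleTransfer(nn.Module):
    def __init__(self, vocab=10000, dim=256, n_styles=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.style_emb = nn.Embedding(n_styles, dim)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src_tokens, target_style):
        # src_tokens: (B, T) source sentence; target_style: (B,) style ids
        _, content = self.encoder(self.embed(src_tokens))  # style-free content
        h0 = content + self.style_emb(target_style)[None]  # recombine with style
        dec, _ = self.decoder(self.embed(src_tokens), h0)  # teacher-forced decode
        return self.out(dec)                               # per-step vocab logits
```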
1 code implementation • ACL 2018 • Yanpeng Zhao, Liwen Zhang, Kewei Tu
We introduce Latent Vector Grammars (LVeGs), a new framework that extends latent variable grammars such that each nonterminal symbol is associated with a continuous vector space representing the set of (infinitely many) subtypes of the nonterminal.
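For intuition, a rule weight over continuous subtypes can be a density on the concatenated subtype vectors. The sketch below uses a single diagonal Gaussian where the paper's Gaussian-mixture LVeGs would use mixtures; all names are illustrative.

```python
import math
import torch

def rule_log_weight(a_vec, b_vec, c_vec, mu, log_var):
    # For a rule A -> B C: a_vec, b_vec, c_vec are (d,) subtype vectors;
    # mu, log_var are (3 * d,) parameters of one diagonal Gaussian component.
    x = torch.cat([a_vec, b_vec, c_vec])     # joint subtype vector
    return -0.5 * (((x - mu) ** 2) / log_var.exp()
                   + log_var + math.log(2 * math.pi)).sum()
```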
1 code implementation • ICCV 2017 • Chen Zhu, Yanpeng Zhao, Shuaiyi Huang, Kewei Tu, Yi Ma
In this paper, we demonstrate the importance of encoding such relations by showing the limited effective receptive field of ResNet on two datasets, and propose modeling visual attention as a multivariate distribution over a grid-structured Conditional Random Field on image regions.
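A minimal mean-field-style sketch of grid-CRF attention: unary logits come from query-image affinities, and a few iterations of neighbor message passing smooth the attention map into a distribution over regions. The kernel, weights, and update form are illustrative, not the paper's exact inference.

```python
import torch
import torch.nn.functional as F

def crf_attention(unary, iters=3, pair_weight=0.5):
    # unary: (H, W) region-wise attention logits from query-image affinity.
    q = unary.clone()
    kernel = torch.tensor([[0., 1., 0.],
                           [1., 0., 1.],
                           [0., 1., 0.]]).view(1, 1, 3, 3)
    for _ in range(iters):
        p = torch.softmax(q.flatten(), dim=0).view_as(q)
        # message passing: each region hears from its 4 grid neighbors
        msg = F.conv2d(p[None, None], kernel, padding=1)[0, 0]
        q = unary + pair_weight * msg
    return torch.softmax(q.flatten(), dim=0).view_as(q)  # attention over regions
```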