no code implementations • 8 Dec 2023 • Jonas Schult, Sam Tsai, Lukas Höllein, Bichen Wu, Jialiang Wang, Chih-Yao Ma, Kunpeng Li, Xiaofang Wang, Felix Wimbauer, Zijian He, Peizhao Zhang, Bastian Leibe, Peter Vajda, Ji Hou
Central to our approach is a user-defined 3D semantic proxy room that outlines a rough room layout based on semantic bounding boxes and a textual description of the overall room style.
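As a rough illustration, such a semantic proxy room could be represented as a set of labeled 3D boxes plus a style prompt. This is a minimal sketch with hypothetical field names, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticBox:
    """An axis-aligned 3D bounding box tagged with a semantic label."""
    label: str     # e.g. "bed", "wardrobe"
    center: tuple  # (x, y, z) in room coordinates
    size: tuple    # (width, height, depth)

@dataclass
class ProxyRoom:
    """User-defined proxy room: rough layout plus a textual style description."""
    style_prompt: str
    boxes: list = field(default_factory=list)

room = ProxyRoom(style_prompt="a cozy Scandinavian bedroom")
room.boxes.append(SemanticBox("bed", (2.0, 0.5, 1.5), (2.0, 1.0, 1.6)))
room.boxes.append(SemanticBox("wardrobe", (0.3, 1.0, 0.5), (1.0, 2.0, 0.6)))
```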
no code implementations • 27 Sep 2023 • Xiaoliang Dai, Ji Hou, Chih-Yao Ma, Sam Tsai, Jialiang Wang, Rui Wang, Peizhao Zhang, Simon Vandenhende, Xiaofang Wang, Abhimanyu Dubey, Matthew Yu, Abhishek Kadian, Filip Radenovic, Dhruv Mahajan, Kunpeng Li, Yue Zhao, Vladan Petrovic, Mitesh Kumar Singh, Simran Motwani, Yi Wen, Yiwen Song, Roshan Sumbaly, Vignesh Ramanathan, Zijian He, Peter Vajda, Devi Parikh
Training text-to-image models with web scale image-text pairs enables the generation of a wide range of visual concepts from text.
no code implementations • 2 May 2022 • Xiaofang Wang, Kris M. Kitani
While progress has been encouraging, we observe an overlooked issue: it is not yet common practice to compare different 3D detectors under the same cost, e.g., inference latency.
no code implementations • 25 Feb 2022 • Haitao Liu, Kai Wu, Yew-Soon Ong, Chao Bian, Xiaomo Jiang, Xiaofang Wang
Multi-task Gaussian process (MTGP) is a well-known non-parametric Bayesian model for learning correlated tasks effectively by transferring knowledge across tasks.
no code implementations • 20 Sep 2021 • Haitao Liu, Jiaqi Ding, Xinyu Xie, Xiaomo Jiang, Yusong Zhao, Xiaofang Wang
Multi-task regression attempts to exploit task similarity to achieve knowledge transfer across related tasks and thereby improve performance.
no code implementations • 3 Jun 2021 • Haitao Liu, Changjun Liu, Xiaomo Jiang, Xudong Chen, Shuhua Yang, Xiaofang Wang
We first investigate the methodological characteristics of the proposed deep probabilistic sequence model on toy cases, and then comprehensively demonstrate its superiority over existing deep probabilistic state-space models (SSMs) through extensive numerical experiments on eight system identification benchmarks spanning various dynamic systems.
no code implementations • 13 May 2021 • Xiaofang Wang, Shengcao Cao, Mengtian Li, Kris M. Kitani
To facilitate the application to gradient-based algorithms, we also propose a differentiable representation for the neighborhood of architectures.
no code implementations • 7 Mar 2021 • Shengcao Cao, Xiaofang Wang, Kris Kitani
Using a sampling-based search algorithm and parallel computing, our method can find an architecture that outperforms DARTS while reducing wall-clock search time by 80%.
no code implementations • ICLR 2022 • Xiaofang Wang, Dan Kondratyuk, Eric Christiansen, Kris M. Kitani, Yair Alon, Elad Eban
Committee-based models (ensembles or cascades) are built by combining existing pre-trained models.
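As a toy illustration of the cascade idea (not the paper's construction), a two-stage cascade runs a cheap model first and defers to an expensive model only when the cheap one is unconfident:

```python
def cascade_predict(x, cheap, expensive, threshold=0.9):
    """Run the cheap model; defer to the expensive one when unconfident."""
    label, confidence = cheap(x)
    if confidence >= threshold:
        return label           # early exit: most inputs stop here
    return expensive(x)[0]     # fallback for hard inputs

# Toy stand-in models returning (label, confidence) pairs.
cheap = lambda x: ("cat", 0.95) if x < 5 else ("cat", 0.55)
expensive = lambda x: ("dog", 0.99)

easy = cascade_predict(3, cheap, expensive)  # cheap model is confident
hard = cascade_predict(7, cheap, expensive)  # deferred to expensive model
```

The average inference cost is then dominated by the cheap model whenever most inputs clear the confidence threshold.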
1 code implementation • 29 Aug 2020 • Haitao Liu, Yew-Soon Ong, Xiaomo Jiang, Xiaofang Wang
For a learning task, a Gaussian process (GP) learns the statistical relationship between inputs and outputs, offering not only the prediction mean but also the associated variability.
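A minimal NumPy sketch of standard GP regression (generic textbook form, not this paper's scalable variant) showing how the posterior yields both a mean and a per-point uncertainty:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """GP regression posterior: predictive mean and standard deviation."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ y_train
    cov = Kss - Ks @ K_inv @ Ks.T
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
mean, std = gp_predict(x, y, np.array([1.0, 5.0]))
# At a training point the predictive std collapses toward zero;
# far from the data it reverts to the prior scale.
```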
no code implementations • ECCV 2020 • Xiaofang Wang, Xuehan Xiong, Maxim Neumann, AJ Piergiovanni, Michael S. Ryoo, Anelia Angelova, Kris M. Kitani, Wei Hua
The discovered attention cells can be seamlessly inserted into existing backbone networks, e.g., I3D or S3D, and improve video classification accuracy by more than 2% on both Kinetics-600 and MiT datasets.
1 code implementation • 18 May 2020 • Haitao Liu, Yew-Soon Ong, Xiaomo Jiang, Xiaofang Wang
Deep kernel learning (DKL) leverages the connection between Gaussian process (GP) and neural networks (NN) to build an end-to-end, hybrid model.
no code implementations • CVPR 2019 • Yuting Liu, Miaojing Shi, Qijun Zhao, Xiaofang Wang
In the end, we propose a curriculum learning strategy that trains the network first on images with relatively accurate and easy pseudo ground truth.
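The easy-first scheduling behind curriculum learning can be sketched as sorting samples by a difficulty score and growing the training pool stage by stage. This is a generic illustration under assumed inputs, not the paper's exact schedule:

```python
def curriculum_stages(samples, difficulty, stages=3):
    """Yield training pools from easiest to hardest.

    `difficulty` maps a sample to a score (lower = easier, e.g. more
    reliable pseudo ground truth); each stage adds the next-harder chunk.
    """
    ordered = sorted(samples, key=difficulty)
    step = max(1, len(ordered) // stages)
    for k in range(1, stages + 1):
        yield ordered[: min(k * step, len(ordered))]  # growing easy-first pool

data = [("img_a", 0.9), ("img_b", 0.1), ("img_c", 0.5)]
pools = list(curriculum_stages(data, difficulty=lambda s: s[1], stages=3))
# The first pool contains only the easiest sample; the last contains all.
```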
2 code implementations • ICLR 2019 • Shengcao Cao, Xiaofang Wang, Kris M. Kitani
We also demonstrate that the learned embedding space can be transferred to new settings for architecture search, such as a larger teacher network or a teacher network in a different architecture family, without any training.
no code implementations • 26 Sep 2018 • Maxime Petit, Amaury Depierre, Xiaofang Wang, Emmanuel Dellandréa, Liming Chen
In simulation, we demonstrate the benefit of transfer learning based on visual similarity, as opposed to amnesic learning (i.e., learning from scratch every time).
no code implementations • 6 Aug 2018 • Xiang Xu, Xiaofang Wang, Kris M. Kitani
We propose to use the concept of the Hamming bound to derive the optimal criteria for learning hash codes with a deep network.
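The Hamming bound itself is a combinatorial quantity; as a minimal illustration of the retrieval side only, the Hamming distance between learned binary codes can be computed with XOR and a popcount:

```python
def hamming(code_a: int, code_b: int) -> int:
    """Hamming distance between two binary hash codes stored as ints."""
    return bin(code_a ^ code_b).count("1")

# 8-bit codes: retrieval ranks database items by Hamming distance to the query.
query = 0b10110100
database = [0b10110101, 0b01001011, 0b10100100]
ranked = sorted(database, key=lambda c: hamming(query, c))
distances = [hamming(query, c) for c in ranked]
```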
no code implementations • 9 Mar 2018 • Xiaofang Wang, Guoqiang Xiang, Xinyue Zhang, Wei Wei
In this paper, a video face replacement framework is proposed that addresses the flickering of the swapped face across the video sequence.
no code implementations • 9 Jan 2018 • Yu-Xing Tang, Josiah Wang, Xiaofang Wang, Boyang Gao, Emmanuel Dellandrea, Robert Gaizauskas, Liming Chen
This is done by modeling the differences between the two on categories with both image-level and bounding box annotations, and transferring this information to convert classifiers to detectors for categories without bounding box annotations.
no code implementations • 28 Dec 2017 • Lingkun Luo, Liming Chen, Shiqiang Hu, Ying Lu, Xiaofang Wang
Domain adaptation (DA) aims to generalize a learning model across training and testing data despite the mismatch of their data distributions.
no code implementations • 24 May 2017 • Lingkun Luo, Xiaofang Wang, Shiqiang Hu, Liming Chen
Domain adaptation (DA) is a form of transfer learning that aims to leverage labeled data in a related source domain to achieve informed knowledge transfer and aid the classification of unlabeled data in a target domain.
no code implementations • 13 Apr 2017 • Lingkun Luo, Xiaofang Wang, Shiqiang Hu, Chao Wang, Yu-Xing Tang, Liming Chen
Most previous research tackles this problem by seeking a shared feature representation between the source and target domains while reducing the mismatch of their data distributions.
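One common way to quantify such a distribution mismatch, shown here purely as an illustration and not necessarily this paper's choice, is the maximum mean discrepancy (MMD) between source and target features:

```python
import numpy as np

def mmd_rbf(source, target, gamma=1.0):
    """Biased squared-MMD estimate between two samples under an RBF kernel."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return (k(source, source).mean()
            + k(target, target).mean()
            - 2 * k(source, target).mean())

rng = np.random.default_rng(0)
same = mmd_rbf(rng.normal(0, 1, (100, 2)), rng.normal(0, 1, (100, 2)))
shifted = mmd_rbf(rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2)))
# Features drawn from the same distribution give a near-zero MMD;
# a mean-shifted target gives a clearly larger discrepancy.
```

Minimizing such a discrepancy over a learned feature map is one standard route to the shared representation described above.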
1 code implementation • 12 Dec 2016 • Xiaofang Wang, Yi Shi, Kris M. Kitani
The current state-of-the-art deep hashing method DPSH (Li et al., 2015), which is based on pairwise labels, performs image feature learning and hash code learning simultaneously by maximizing the likelihood of pairwise similarities.
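A sketch of that pairwise likelihood objective, assuming the common DPSH-style parameterization sigma(0.5 * <u_i, u_j>) for the match probability (simplified here; the toy inputs are made up):

```python
import numpy as np

def pairwise_nll(u, s):
    """Negative log-likelihood of pairwise similarity labels.

    u: (n, d) real-valued hash-layer outputs for n images;
    s[i, j] = 1 if images i and j are similar, else 0.
    """
    theta = 0.5 * (u @ u.T)
    # log sigma(theta) = theta - log(1 + e^theta), stable via logaddexp.
    log_p = theta - np.logaddexp(0.0, theta)   # log P(s_ij = 1)
    log_q = -np.logaddexp(0.0, theta)          # log P(s_ij = 0)
    return -(s * log_p + (1 - s) * log_q).sum()

u = np.array([[1.0, 1.0], [1.0, 0.9], [-1.0, -1.0]])
s = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]])
loss = pairwise_nll(u, s)
# Labels consistent with the code geometry give a lower loss than flipped ones.
```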
no code implementations • 8 Dec 2016 • Xiaofang Wang, Kris M. Kitani, Martial Hebert
Given a query image, a second positive image, and a third negative image dissimilar to the first two, we define a contextualized similarity search criterion.