1 code implementation • MSR (COLING) 2020 • Xiang Yu, Simon Tannert, Ngoc Thang Vu, Jonas Kuhn
We introduce the IMS contribution to the Surface Realization Shared Task 2020.
1 code implementation • UDW (COLING) 2020 • Tillmann Dönicke, Xiang Yu, Jonas Kuhn
The Universal Dependencies treebanks are a still-growing collection of treebanks for a wide range of languages, all annotated with a common inventory of dependency relations.
no code implementations • 26 Jan 2024 • Wenyuan Wang, Kaixin Yan, Xiang Yu
With the help of the duality results in the auxiliary problems and some fixed point arguments, we further derive and verify the optimal portfolio processes in a periodic manner for the original periodic evaluation problems over an infinite horizon.
no code implementations • 14 Jan 2024 • Linlin Zhang, Xiang Yu, Abdulateef Daud, Abdul Rashid Mussah, Yaw Adu-Gyamfi
This study implements a three-stage video analytics framework for extracting high-resolution traffic data such as vehicle counts, speeds, and accelerations from infrastructure-mounted CCTV cameras.
no code implementations • 13 Jan 2024 • Linlin Zhang, Xiang Yu, Armstrong Aboah, Yaw Adu-Gyamfi
Two main challenges are the need for multiple LiDAR systems to obtain complete point cloud information of objects of interest, and the labor-intensive process of annotating 3D bounding boxes for object detection tasks.
no code implementations • 2 Dec 2023 • Salman S. Khan, Xiang Yu, Kaushik Mitra, Manmohan Chandraker, Francesco Pittaluga
OpEnCam encrypts the incoming light before capturing it using the modulating ability of optical masks.
no code implementations • 24 Nov 2023 • Lijun Bo, YiJie Huang, Xiang Yu
This paper studies an infinite horizon optimal tracking portfolio problem using capital injection in incomplete market models.
no code implementations • 21 Nov 2023 • Wenyuan Wang, Kaixin Yan, Xiang Yu
With the help of the results from the auxiliary problem, the value function and the optimal constrained portfolio for the original problem with periodic evaluation can be derived and verified, allowing us to discuss some financial implications under the new performance paradigm.
1 code implementation • 5 Nov 2023 • Jingru Yi, Burak Uzkent, Oana Ignat, Zili Li, Amanmeet Garg, Xiang Yu, Linda Liu
While we demonstrate our data augmentation method with MDETR framework, the proposed approach is applicable to common grounding-based vision and language tasks with other frameworks.
no code implementations • ICCV 2023 • Mateusz Michalkiewicz, Masoud Faraki, Xiang Yu, Manmohan Chandraker, Mahsa Baktashmotlagh
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
no code implementations • ICCV 2023 • Di Liu, Xiang Yu, Meng Ye, Qilong Zhangli, Zhuowei Li, Zhixing Zhang, Dimitris N. Metaxas
Accurate 3D shape abstraction from a single 2D image is a long-standing problem in computer vision and graphics.
no code implementations • 23 Aug 2023 • Ronghang Zhu, Dongliang Guo, Daiqing Qi, Zhixuan Chu, Xiang Yu, Sheng Li
Inspired by concepts in trustworthy AI, we propose the first framework for trustworthy representation learning across domains, which covers four concepts, i.e., robustness, privacy, fairness, and explainability, and gives a comprehensive literature review of this research direction.
no code implementations • 20 Aug 2023 • Sicheng Zhou, Meng Wang, Jindou Jia, Kexin Guo, Xiang Yu, Youmin Zhang, Lei Guo
This paper presents an excitation operator based fault separation architecture for a quadrotor unmanned aerial vehicle (UAV) subject to loss of effectiveness (LoE) faults, actuator aging, and load uncertainty.
no code implementations • 16 Aug 2023 • Lei Guo, Wenshuo Li, Yukai Zhu, Xiang Yu, Zidong Wang
State estimation has long been a fundamental problem in signal processing and control areas.
no code implementations • 28 Jun 2023 • Xiaoli Wei, Xiang Yu
This paper studies q-learning, recently coined by Jia and Zhou (2023) as the continuous-time counterpart of Q-learning, for continuous-time McKean-Vlasov control problems in the setting of entropy-regularized reinforcement learning.
1 code implementation • CVPR 2023 • Zaid Khan, Vijay Kumar BG, Samuel Schulter, Xiang Yu, Yun Fu, Manmohan Chandraker
We introduce SelTDA (Self-Taught Data Augmentation), a strategy for finetuning large VLMs on small-scale VQA datasets.
no code implementations • CVPR 2023 • Jue Wang, Wentao Zhu, Pichao Wang, Xiang Yu, Linda Liu, Mohamed Omar, Raffay Hamid
To address this limitation, we present a novel Selective S4 (i.e., S5) model that employs a lightweight mask generator to adaptively select informative image tokens, resulting in more efficient and accurate modeling of long-term spatiotemporal dependencies in videos.
Ranked #2 on Video Classification on Breakfast
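The selection step described above — score every token, keep only the informative ones, and run the sequence model on the survivors — can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: the linear scorer and top-k rule here are hypothetical stand-ins for the learned mask generator.

```python
import numpy as np

def select_tokens(tokens, scorer_w, k):
    """Keep the k highest-scoring tokens, preserving their original order.

    tokens:   (T, D) array of token embeddings
    scorer_w: (D,) weights of a toy linear "informativeness" scorer
    k:        number of tokens to keep
    """
    scores = tokens @ scorer_w                 # (T,) one score per token
    keep = np.sort(np.argsort(scores)[-k:])    # top-k indices, original order
    return tokens[keep], keep

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 8))              # 16 image tokens, 8-dim each
w = rng.normal(size=8)
selected, idx = select_tokens(tokens, w, k=4)
# The downstream sequence model (S4 in the paper) now processes 4 tokens, not 16.
```

The efficiency gain is direct: a sequence layer whose cost grows with length now sees a k-token sequence instead of a T-token one.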
no code implementations • 15 Feb 2023 • Shuoqing Deng, Xiang Yu, Jiacheng Zhang
When the sufficient condition of the attitude function is violated, we can illustrate by various examples that the characterization of the optimal equilibrium may differ significantly from some existing results for an individual agent.
no code implementations • European Conference on Computer Vision (ECCV) 2022 • Zaid Tasneem, Giovanni Milione, Yi-Hsuan Tsai, Xiang Yu, Ashok Veeraraghavan, Manmohan Chandraker, Francesco Pittaluga
With over a billion sold each year, cameras are becoming ubiquitous and are driving progress in a wide range of applications such as augmented/virtual reality, robotics, surveillance, security, autonomous navigation and many others.
no code implementations • 3 Aug 2022 • Xiang Yu, Zhe Geng, Xiaohua Huang, Qinglu Wang, Daiyin Zhu
In recent years, convolutional neural networks (CNNs) have shown great potential in synthetic aperture radar (SAR) target recognition.
no code implementations • 27 Jun 2022 • Lijun Bo, Shihua Wang, Xiang Yu
This paper studies the equilibrium consumption under external habit formation in a large population of agents.
no code implementations • CVPR 2022 • Dripta S. Raychaudhuri, Yumin Suh, Samuel Schulter, Xiang Yu, Masoud Faraki, Amit K. Roy-Chowdhury, Manmohan Chandraker
In contrast to the existing dynamic multi-task approaches that adjust only the weights within a fixed architecture, our approach affords the flexibility to dynamically control the total computational cost and match the user-preferred task importance better.
1 code implementation • 27 Mar 2022 • Zaid Khan, Vijay Kumar BG, Xiang Yu, Samuel Schulter, Manmohan Chandraker, Yun Fu
Self-supervised vision-language pretraining from pure images and text with a contrastive loss is effective, but ignores fine-grained alignment due to a dual-stream architecture that aligns image and text representations only on a global level.
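The global-level alignment this snippet criticizes is the standard symmetric contrastive (InfoNCE) objective over one pooled embedding per image and one per caption. A minimal numpy sketch, with hypothetical toy embeddings, shows why fine-grained (token- or region-level) structure is invisible to it: only whole-input vectors ever meet in the loss.

```python
import numpy as np

def info_nce(img, txt, temp=0.07):
    """Symmetric InfoNCE over matched (image, text) pairs.

    img, txt: (N, D) L2-normalized *global* embeddings; pair i matches pair i.
    All sub-image / sub-sentence detail has already been pooled away.
    """
    logits = img @ txt.T / temp                    # (N, N) similarity matrix
    labels = np.arange(len(img))
    def xent(l):                                   # row-wise cross-entropy
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()
    return (xent(logits) + xent(logits.T)) / 2     # image->text and text->image

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 16))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = img + 0.01 * rng.normal(size=(4, 16))        # near-matching captions
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
loss = info_nce(img, txt)
```

Because matched pairs are nearly identical here, the loss is small; it would be unchanged by any permutation of detail *within* an image or caption, which is exactly the fine-grained blindness the paper targets.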
no code implementations • CVPR 2022 • Christian Simon, Masoud Faraki, Yi-Hsuan Tsai, Xiang Yu, Samuel Schulter, Yumin Suh, Mehrtash Harandi, Manmohan Chandraker
Humans have the ability to accumulate knowledge of new tasks in varying conditions, but deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
no code implementations • CVPR 2022 • Chang Liu, Xiang Yu, Yi-Hsuan Tsai, Ramin Moslemi, Masoud Faraki, Manmohan Chandraker, Yun Fu
Convolutional Neural Networks have achieved remarkable success in face recognition, in part due to the abundant availability of data.
no code implementations • ICCV 2021 • Donghyun Kim, Yi-Hsuan Tsai, Bingbing Zhuang, Xiang Yu, Stan Sclaroff, Kate Saenko, Manmohan Chandraker
Learning transferable and domain adaptive feature representations from videos is important for video-relevant tasks such as action recognition.
no code implementations • 2 Aug 2021 • Lijun Bo, Shihua Wang, Xiang Yu
This paper studies the n-player game and the mean field game under the CRRA relative performance on terminal wealth, in which the interaction occurs by peer competition.
no code implementations • CVPR 2021 • Masoud Faraki, Xiang Yu, Yi-Hsuan Tsai, Yumin Suh, Manmohan Chandraker
Intuitively, it discriminatively correlates explicit metrics derived from one domain with triplet samples from another domain in a unified loss function to be minimized within a network, which leads to better alignment of the training domains.
no code implementations • 4 Feb 2021 • Dongrui Wu, Jiaxin Xu, Weili Fang, Yi Zhang, Liuqing Yang, Xiaodong Xu, Hanbin Luo, Xiang Yu
Physiological computing uses human physiological data as system inputs in real time.
no code implementations • 26 Jan 2021 • Yoav Alon, Xiang Yu, Huiyu Zhou
Synthetic generation of three-dimensional cell models from histopathological images aims to enhance understanding of cell mutation and cancer progression, which is necessary for clinical assessment and optimal treatment.
no code implementations • COLING 2020 • Tillmann Dönicke, Xiang Yu, Jonas Kuhn
This paper proposes a framework for the expression of typological statements which uses real-valued logics to capture the empirical truth value (truth degree) of a formula on a given data source, e.g., a collection of multilingual treebanks with comparable annotation.
no code implementations • 28 Nov 2020 • Junru Wu, Xiang Yu, Buyu Liu, Zhangyang Wang, Manmohan Chandraker
Face anti-spoofing (FAS) seeks to discriminate genuine faces from fake ones arising from any type of spoofing attack.
no code implementations • 24 Nov 2020 • Xiang Yu, Fuping Chu, Junqi Wu, Bo Huang
Recommendation systems are an important commercial application of machine learning, serving billions of feed views in information streams every day.
no code implementations • 9 Oct 2020 • Yuqing Zhu, Xiang Yu, Yi-Hsuan Tsai, Francesco Pittaluga, Masoud Faraki, Manmohan Chandraker, Yu-Xiang Wang
Differentially Private Federated Learning (DPFL) is an emerging field with many applications.
no code implementations • ECCV 2020 • Aruni RoyChowdhury, Xiang Yu, Kihyuk Sohn, Erik Learned-Miller, Manmohan Chandraker
While deep face recognition has benefited significantly from large-scale labeled data, current research is focused on leveraging unlabeled data to further boost performance, reducing the cost of human annotation.
no code implementations • ACL 2020 • Xiang Yu, Simon Tannert, Ngoc Thang Vu, Jonas Kuhn
We propose a graph-based method to tackle the dependency tree linearization task.
no code implementations • WS 2020 • Xiang Yu, Ngoc Thang Vu, Jonas Kuhn
We present an iterative data augmentation framework, which trains and searches for an optimal ensemble and simultaneously annotates new training data in a self-training style.
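The self-training loop described here — an ensemble predicts on unlabeled data, and predictions with enough agreement are adopted as new "silver" training data — can be sketched abstractly. This is a schematic illustration under assumed interfaces (models as plain functions, hashable predictions), not the paper's system.

```python
def self_train_round(ensemble, labeled, unlabeled, min_agree):
    """One round of ensemble-based self-training.

    ensemble:  list of model functions, each mapping an input to a prediction
    labeled:   list of (input, label) training pairs
    unlabeled: inputs without annotations
    min_agree: how many models must agree before a prediction is adopted
    Returns the enlarged training set and the still-unlabeled remainder.
    """
    new_labeled, remainder = list(labeled), []
    for item in unlabeled:
        preds = [m(item) for m in ensemble]
        best = max(set(preds), key=preds.count)   # majority prediction
        if preds.count(best) >= min_agree:
            new_labeled.append((item, best))      # adopt as silver data
        else:
            remainder.append(item)                # retry in a later round
    return new_labeled, remainder

# Toy ensemble: three "models" labeling a string by its length parity.
m1 = lambda s: len(s) % 2
m2 = lambda s: len(s) % 2
m3 = lambda s: 0
data, rest = self_train_round([m1, m2, m3], [], ["ab", "abc"], min_agree=2)
```

Iterating this round while retraining the ensemble on the growing `data` gives the self-training style the abstract describes.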
no code implementations • 24 Jun 2020 • Lijun Bo, Huafu Liao, Xiang Yu
We first transform the original problem with floor constraints into an unconstrained control problem, albeit under a running maximum cost.
no code implementations • 12 Jun 2020 • Shuoqing Deng, Xun Li, Huyen Pham, Xiang Yu
This paper studies the infinite-horizon optimal consumption with a path-dependent reference under exponential utility.
no code implementations • 9 Jun 2020 • Xiang Yu, Yibin Fu, Hui-Hui Dai
Based on previous work for the static problem, in this paper we first derive one form of dynamic finite-strain shell equations for incompressible hyperelastic materials that involve three shell constitutive relations.
Computational Engineering, Finance, and Science • Biological Physics
no code implementations • CVPR 2020 • Yichun Shi, Xiang Yu, Kihyuk Sohn, Manmohan Chandraker, Anil K. Jain
Recognizing wild faces is extremely hard as they appear with all kinds of variations.
no code implementations • 7 Dec 2019 • Junru Wu, Xiang Yu, Ding Liu, Manmohan Chandraker, Zhangyang Wang
To train and evaluate on more diverse blur severity levels, we propose a Challenging DVD dataset generated from the raw DVD video set by pooling frames with different temporal windows.
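Pooling frames with different temporal windows, as described above, synthesizes motion blur by emulating exposure time: averaging more consecutive sharp frames yields heavier blur. A minimal numpy sketch of that generation step (the specific windowing policy here is an assumption, not the dataset's exact recipe):

```python
import numpy as np

def pool_blur(frames, center, window):
    """Average `window` consecutive frames around `center` to synthesize blur.

    frames: (T, H, W) array of sharp video frames.
    A larger window emulates a longer exposure, i.e. more severe blur.
    """
    half = window // 2
    lo, hi = max(0, center - half), min(len(frames), center + half + 1)
    return frames[lo:hi].mean(axis=0)

# Toy video whose frame t has constant intensity t**2 (nonlinear in t,
# so averaging a window visibly shifts the value).
video = np.stack([np.full((4, 4), t * t, dtype=float) for t in range(7)])
sharp = pool_blur(video, center=3, window=1)   # window 1: the frame itself
blurry = pool_blur(video, center=3, window=5)  # wider window: heavier blur
```

Varying `window` per clip is what produces the range of blur severity levels the abstract mentions.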
no code implementations • WS 2019 • Xiang Yu, Agnieszka Falenska, Marina Haid, Ngoc Thang Vu, Jonas Kuhn
We introduce the IMS contribution to the Surface Realization Shared Task 2019.
no code implementations • WS 2019 • Xiang Yu, Agnieszka Falenska, Ngoc Thang Vu, Jonas Kuhn
We present a dependency tree linearization model with two novel components: (1) a tree-structured encoder based on bidirectional Tree-LSTM that propagates information first bottom-up then top-down, which allows each token to access information from the entire tree; and (2) a linguistically motivated head-first decoder that emphasizes the central role of the head and linearizes the subtree by incrementally attaching the dependents on both sides of the head.
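The head-first traversal order that the decoder produces can be sketched independently of the neural scoring: the head of each subtree is placed centrally, with dependents attached on its left and right. This toy sketch only shows the resulting word order for an already-decided tree; in the paper the attachment decisions themselves are scored using the Tree-LSTM states.

```python
def linearize(tree):
    """Head-first linearization of a dependency subtree.

    tree: (head_word, left_dependents, right_dependents), where each
    dependent is itself such a triple.
    """
    head, left, right = tree
    words = []
    for dep in left:
        words.extend(linearize(dep))   # left dependents precede the head
    words.append(head)
    for dep in right:
        words.extend(linearize(dep))   # right dependents follow it
    return words

# "the cat sat on the mat": "sat" heads "cat" (left) and "on" (right).
sat = ("sat",
      [("cat", [("the", [], [])], [])],
      [("on", [], [("mat", [("the", [], [])], [])])])
order = linearize(sat)
```

The recursion makes the head's central role explicit: every subtree is rebuilt outward from its head, which is the linguistic motivation the abstract cites.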
no code implementations • 20 Sep 2019 • Zhuo Jin, Huafu Liao, Yue Yang, Xiang Yu
This paper studies the optimal dividend for a multi-line insurance group, in which each subsidiary runs a product line and is exposed to some external credit risk.
no code implementations • 3 Sep 2019 • Junbeom Lee, Xiang Yu, Chao Zhou
This paper aims to make a new contribution to the study of lifetime ruin problem by considering investment in two hedge funds with high-watermark fees and drift uncertainty.
no code implementations • WS 2019 • Xiang Yu, Ngoc Thang Vu, Jonas Kuhn
The generalized Dyck language has been used to analyze the ability of Recurrent Neural Networks (RNNs) to learn context-free grammars (CFGs).
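The generalized Dyck language is simply the set of well-nested strings over several bracket types — a canonical context-free pattern, since recognizing it requires an unbounded stack. A reference recognizer is a few lines; RNNs are then probed on whether they can learn to match it:

```python
PAIRS = {"(": ")", "[": "]", "{": "}"}

def is_dyck(s, pairs=PAIRS):
    """Membership test for the generalized Dyck language:
    balanced, well-nested brackets of several types."""
    stack = []
    for ch in s:
        if ch in pairs:
            stack.append(pairs[ch])        # remember the closer we now expect
        elif not stack or stack.pop() != ch:
            return False                   # wrong or unmatched closer
    return not stack                       # everything opened must be closed

assert is_dyck("([]{})")
assert not is_dyck("([)]")                 # crossing brackets are not well-nested
```

The crossing case `([)]` is what separates Dyck membership from mere symbol counting, and is exactly the stack-like behavior such probing studies test RNNs for.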
no code implementations • 24 Jul 2019 • Feng-Ju Chang, Xiang Yu, Ram Nevatia, Manmohan Chandraker
We address the challenging problem of generating facial attributes using a single image in an unconstrained pose.
no code implementations • 4 Jun 2019 • Yu-Jui Huang, Xiang Yu
This allows us to capture much more diverse behavior, depending on an agent's ambiguity attitude, beyond the standard worst-case (or best-case) analysis.
no code implementations • 20 May 2019 • Lijun Bo, Huafu Liao, Xiang Yu
The verification theorem can be concluded with the aid of our BSDE results, which in turn yields the uniqueness of the solution to the BSDE.
no code implementations • ICLR 2019 • Kihyuk Sohn, Wenling Shang, Xiang Yu, Manmohan Chandraker
Unsupervised domain adaptation is a promising avenue to enhance the performance of deep neural networks on a target domain, using labels only from a source domain.
1 code implementation • 8 Nov 2018 • Xiaoshi Zhong, Xiang Yu, Erik Cambria, Jagath C. Rajapakse
Entities have different forms in different linguistic tasks and researchers treat those different forms as different concepts.
no code implementations • 7 Nov 2018 • Agnieszka Falenska, Anders Björkelund, Xiang Yu, Jonas Kuhn
In this paper we show which components of the system were the most responsible for its final performance.
no code implementations • WS 2018 • Xiang Yu, Ngoc Thang Vu, Jonas Kuhn
We present a general approach with reinforcement learning (RL) to approximate dynamic oracles for transition systems where exact dynamic oracles are difficult to derive.
1 code implementation • CONLL 2018 • Matthias Blohm, Glorianna Jagfeld, Ekta Sood, Xiang Yu, Ngoc Thang Vu
We propose a machine reading comprehension model based on the compare-aggregate framework with two-staged attention that achieves state-of-the-art results on the MovieQA question answering dataset.
no code implementations • 23 Mar 2018 • Xi Yin, Xiang Yu, Kihyuk Sohn, Xiaoming Liu, Manmohan Chandraker
In this paper, we propose a center-based feature transfer framework to augment the feature space of under-represented subjects from the regular subjects that have sufficiently diverse samples.
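The center-based transfer idea can be sketched in its simplest form: take each feature's offset from its own (data-rich) class center and re-attach that offset to the under-represented class's center, transplanting the variation cloud onto the new identity. This numpy sketch shows only that geometric core, not the paper's learned transfer network.

```python
import numpy as np

def transfer_features(rich_feats, rich_center, poor_center):
    """Augment a data-poor class with the intra-class variation of a
    data-rich one: same offsets from the center, new center."""
    offsets = rich_feats - rich_center     # the rich class's variation cloud
    return poor_center + offsets           # same cloud around the poor center

rng = np.random.default_rng(0)
rich = rng.normal(size=(100, 8)) + 5.0     # many samples of a regular subject
c_rich = rich.mean(axis=0)
c_poor = np.zeros(8)                       # an under-represented subject
augmented = transfer_features(rich, c_rich, c_poor)
```

By construction the augmented cloud is centered at `c_poor` with the rich class's spread, which is the intended enlargement of the under-represented subject's feature space.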
1 code implementation • CVPR 2019 • Luan Tran, Kihyuk Sohn, Xiang Yu, Xiaoming Liu, Manmohan Chandraker
Recent developments in deep domain adaptation have allowed knowledge transfer from a labeled source domain to an unlabeled target domain at the level of intermediate features or input pixels.
no code implementations • 8 Jan 2018 • Chi Li, M. Zeeshan Zia, Quoc-Huy Tran, Xiang Yu, Gregory D. Hager, Manmohan Chandraker
In this work, we explore an approach for injecting prior domain structure into neural network training by supervising hidden layers of a CNN with intermediate concepts that normally are not observed in practice.
no code implementations • NeurIPS 2017 • Guobin Chen, Wongun Choi, Xiang Yu, Tony Han, Manmohan Chandraker
In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34].
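The two ingredients named here have standard forms: knowledge distillation matches the student's temperature-softened outputs to the teacher's, and hint learning adds an L2 term pulling an intermediate student layer toward a teacher layer. A minimal numpy sketch of such a combined loss (the weighting and temperature values are illustrative, not the paper's):

```python
import numpy as np

def softmax(z, T):
    """Temperature-softened softmax along the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits,
                 student_hint, teacher_hint, T=4.0, alpha=0.5):
    """KL(teacher || student) at temperature T, plus an L2 'hint' term
    matching intermediate features."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
    hint = ((student_hint - teacher_hint) ** 2).mean()
    return alpha * kl + (1 - alpha) * hint

t = np.array([[2.0, 0.5, -1.0]])           # teacher logits for one example
s = np.array([[1.5, 0.7, -0.9]])           # student logits
loss = distill_loss(s, t, np.zeros(4), np.ones(4))
```

The soft targets carry the teacher's inter-class similarity structure, which is what lets a compact student exceed what hard labels alone would teach it.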
no code implementations • ICCV 2017 • Kihyuk Sohn, Sifei Liu, Guangyu Zhong, Xiang Yu, Ming-Hsuan Yang, Manmohan Chandraker
Despite rapid advances in face recognition, there remains a clear gap between the performance of still image-based face recognition and video-based face recognition, due to the vast difference in visual quality between the domains and the difficulty of curating diverse large-scale video datasets.
no code implementations • CONLL 2017 • Anders Björkelund, Agnieszka Falenska, Xiang Yu, Jonas Kuhn
This paper presents the IMS contribution to the CoNLL 2017 Shared Task.
1 code implementation • WS 2017 • Xiang Yu, Agnieszka Faleńska, Ngoc Thang Vu
We present a general-purpose tagger based on convolutional neural networks (CNN), used for both composing word vectors and encoding context information.
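Composing a word vector from characters with a CNN follows a standard recipe: embed each character, slide 1-D convolution filters over the sequence, and max-pool over positions. A minimal numpy sketch of that composition step (random embeddings and filters here are placeholders for learned parameters; the tagger's actual dimensions are not given in this snippet):

```python
import numpy as np

def char_cnn_word_vector(word, char_emb, filters, width=3):
    """Compose an (F,) word vector from a word's characters.

    char_emb: dict mapping char -> (D,) embedding
    filters:  (F, width * D) filter bank for windows of `width` characters
    """
    D = len(next(iter(char_emb.values())))
    pad = [np.zeros(D)] * (width - 1)
    chars = pad + [char_emb[c] for c in word] + pad  # pad so short words fit
    acts = []
    for i in range(len(chars) - width + 1):
        window = np.concatenate(chars[i:i + width])  # (width*D,) receptive field
        acts.append(filters @ window)                # (F,) filter activations
    return np.max(acts, axis=0)                      # max-pool over positions

rng = np.random.default_rng(0)
emb = {c: rng.normal(size=4) for c in "abcdef"}
filt = rng.normal(size=(8, 12))                      # 8 filters, width 3, dim 4
vec = char_cnn_word_vector("cafe", emb, filt)
```

Because the word vector is built from characters, the same machinery handles unseen words and subword regularities, which is the usual motivation for character-level composition in taggers and parsers.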
1 code implementation • ACL 2017 • Xiang Yu, Ngoc Thang Vu
We present a transition-based dependency parser that uses a convolutional neural network to compose word representations from characters.
no code implementations • ICCV 2017 • Xi Yin, Xiang Yu, Kihyuk Sohn, Xiaoming Liu, Manmohan Chandraker
Despite recent advances in face recognition using deep learning, severe accuracy drops are observed for large pose variations in unconstrained environments.
no code implementations • ICCV 2017 • Xi Peng, Xiang Yu, Kihyuk Sohn, Dimitris Metaxas, Manmohan Chandraker
Finally, we propose a new feature reconstruction metric learning to explicitly disentangle identity and pose, by demanding alignment between the feature reconstructions through various combinations of identity and pose features, which is obtained from two images of the same subject.
no code implementations • CVPR 2017 • Chi Li, M. Zeeshan Zia, Quoc-Huy Tran, Xiang Yu, Gregory D. Hager, Manmohan Chandraker
Monocular 3D object parsing is highly desirable in various scenarios including occlusion reasoning and holistic scene interpretation.
no code implementations • 3 May 2016 • Xiang Yu, Feng Zhou, Manmohan Chandraker
We propose a novel cascaded framework, namely deep deformation network (DDN), for localizing landmarks in non-rigid objects.
no code implementations • LREC 2014 • Bo Liu, Jingjing Liu, Xiang Yu, Dimitris Metaxas, Carol Neidle
Essential grammatical information is conveyed in signed languages by clusters of events involving facial expressions and movements of the head and upper body.