no code implementations • ICML 2020 • Tian-Zuo Wang, Xi-Zhu Wu, Sheng-Jun Huang, Zhi-Hua Zhou
In many real tasks, we care about how to make decisions rather than mere predictions about an event, e.g., how to increase the revenue next month rather than merely knowing that it will drop.
no code implementations • 7 May 2024 • Hamed Hemati, Lorenzo Pellegrini, Xiaotian Duan, Zixuan Zhao, Fangfang Xia, Marc Masana, Benedikt Tscheschner, Eduardo Veas, Yuxiang Zheng, Shiji Zhao, Shao-Yuan Li, Sheng-Jun Huang, Vincenzo Lomonaco, Gido M. van de Ven
Continual learning (CL) provides a framework for training models in ever-evolving environments.
no code implementations • 9 Apr 2024 • Ming-Kun Xie, Jia-Hao Xiao, Pei Peng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang
In this paper, we provide a causal inference framework to show that the correlative features caused by the target object and its co-occurring objects can be regarded as a mediator, which has both positive and negative impacts on model predictions.
no code implementations • 23 Feb 2024 • Chen-Chen Zong, Ye-Wen Wang, Kun-Peng Ning, Haibo Ye, Sheng-Jun Huang
In this paper, we attempt to query examples that are both likely from known classes and highly informative, and propose a Bidirectional Uncertainty-based Active Learning (BUAL) framework.
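The snippet does not spell out BUAL's scoring rule; as a rough illustration of the stated goal (likely-known and informative), a toy acquisition score might weight predictive entropy by an estimated known-class probability. Both ingredients (`p_known` and the product form) are assumptions for illustration, not the paper's formula:

```python
import numpy as np

def known_and_informative_score(probs_known_classes, p_known):
    """Toy acquisition score: prefer examples that are informative
    (high predictive entropy over the known classes) AND likely to
    come from a known class. Illustrative only, not the BUAL rule."""
    eps = 1e-12
    entropy = -np.sum(probs_known_classes * np.log(probs_known_classes + eps), axis=1)
    return p_known * entropy  # higher = better candidate to query

# probabilities over 3 known classes for 4 unlabeled examples
probs = np.array([[0.34, 0.33, 0.33],   # very uncertain
                  [0.90, 0.05, 0.05],   # confident
                  [0.50, 0.25, 0.25],
                  [0.40, 0.40, 0.20]])
p_known = np.array([0.9, 0.9, 0.2, 0.8])  # assumed known-class estimates
print(np.argsort(-known_and_informative_score(probs, p_known)))  # query order
```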
no code implementations • 6 Feb 2024 • Jing-Cheng Pang, Heng-Bo Fan, Pengyuan Wang, Jia-Hao Xiao, Nan Tang, Si-Hang Yang, Chengxing Jia, Sheng-Jun Huang, Yang Yu
The rise of large language models (LLMs) has revolutionized the way that we interact with artificial intelligence systems through natural language.
1 code implementation • 13 Jan 2024 • Chen-Chen Zong, Ye-Wen Wang, Ming-Kun Xie, Sheng-Jun Huang
Noisy labels can significantly hinder the generalization performance of deep neural networks (DNNs).
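For context, one widely used countermeasure in the noisy-label literature (not necessarily this paper's mechanism) is the small-loss trick: treat the examples with the lowest loss as probably clean and train only on them. A minimal sketch:

```python
import torch
import torch.nn.functional as F

def small_loss_selection(logits, labels, keep_ratio=0.7):
    """Small-loss trick: keep the `keep_ratio` fraction of examples
    with the lowest cross-entropy loss, treating them as (probably)
    clean, and compute the training loss only on those."""
    losses = F.cross_entropy(logits, labels, reduction="none")
    n_keep = max(1, int(keep_ratio * len(labels)))
    keep_idx = torch.argsort(losses)[:n_keep]
    return F.cross_entropy(logits[keep_idx], labels[keep_idx])
```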
1 code implementation • 31 Aug 2023 • Yuyan Zhou, Dong Liang, Songcan Chen, Sheng-Jun Huang, Shuo Yang, Chongyi Li
In this paper, we propose a solution that improves lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light-source recovery strategy.
1 code implementation • ICCV 2023 • Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang
Existing knowledge distillation methods typically work by imparting the knowledge of output logits or intermediate feature maps from the teacher network to the student network, which is very successful in multi-class single-label learning.
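For reference, the standard multi-class, logit-based distillation baseline the snippet alludes to (Hinton et al.'s formulation, not the paper's multi-label variant) looks like:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Logit-based knowledge distillation: blend a KL term against the
    teacher's temperature-softened distribution with the usual
    hard-label cross-entropy. T^2 rescales the soft-term gradients."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```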
no code implementations • 7 May 2023 • Wenhai Wan, Xinrui Wang, Ming-Kun Xie, Shao-Yuan Li, Sheng-Jun Huang, Songcan Chen
Learning from noisy data has attracted much attention, where most methods focus on closed-set label noise.
1 code implementation • 4 May 2023 • Ming-Kun Xie, Jia-Hao Xiao, Hao-Zhe Liu, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
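As background, the most common pseudo-labeling recipe (confidence-thresholded hard labels, in the spirit of FixMatch; not necessarily this paper's multi-label variant) can be sketched as:

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(logits_unlabeled, threshold=0.95):
    """Confidence-thresholded pseudo-labeling: take the argmax class as
    a hard label, but only for predictions whose confidence exceeds
    `threshold`; low-confidence examples contribute no loss."""
    probs = F.softmax(logits_unlabeled.detach(), dim=1)  # stop-gradient targets
    conf, pseudo = probs.max(dim=1)
    mask = conf >= threshold
    if not mask.any():
        return logits_unlabeled.new_zeros(())
    return F.cross_entropy(logits_unlabeled[mask], pseudo[mask])
```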
no code implementations • 28 Apr 2023 • Ling Li, Dong Liang, Yuanhang Gao, Sheng-Jun Huang, Songcan Chen
In this paper, we propose a new paradigm, i.e., aesthetics-guided low-light image enhancement (ALL-E), which introduces aesthetic preferences to LLE and motivates training in a reinforcement learning framework with an aesthetic reward.
no code implementations • 3 Mar 2023 • Ye Li, Song-Can Chen, Sheng-Jun Huang
Physics-informed neural networks (PINNs) have been shown to be effective at solving forward and inverse differential-equation problems, but they still suffer from training failures when the target functions exhibit high-frequency or multi-scale features.
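For readers unfamiliar with PINNs, here is a minimal sketch of the vanilla recipe on the toy ODE u'(x) = cos(x) with u(0) = 0 (a generic illustration of the residual-loss idea, not this paper's multi-scale remedy):

```python
import torch

# Tiny network approximating u(x).
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def pinn_loss(x):
    """PINN loss = mean squared ODE residual + boundary-condition term."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = du - torch.cos(x)       # physics residual of u'(x) = cos(x)
    bc = net(torch.zeros(1, 1))        # boundary condition u(0) = 0
    return (residual ** 2).mean() + (bc ** 2).mean()

x = torch.rand(64, 1) * 6.28           # collocation points in [0, 2*pi]
loss = pinn_loss(x)
loss.backward()
```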
1 code implementation • 6 Dec 2022 • Dong Liang, Jing-Wei Zhang, Ying-Peng Tang, Sheng-Jun Huang
However, existing active learning methods are mainly designed for class-balanced settings and image-level querying in generic object detection, which makes them less applicable to aerial object detection due to the long-tailed class distribution and dense small objects in aerial scenes.
1 code implementation • 3 Sep 2022 • Chen-Chen Zong, Zheng-Tao Cao, Hong-Tao Guo, Yun Du, Ming-Kun Xie, Shao-Yuan Li, Sheng-Jun Huang
Deep neural networks trained with the standard cross-entropy loss are prone to memorizing noisy labels, which degrades their performance.
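One standard robust-loss alternative to plain cross-entropy is generalized cross entropy (Zhang & Sabuncu, 2018), which interpolates between CE and MAE; it is shown here for context, not as this paper's method:

```python
import torch
import torch.nn.functional as F

def gce_loss(logits, labels, q=0.7):
    """Generalized cross entropy: (1 - p_y^q) / q interpolates between
    cross-entropy (q -> 0) and MAE (q = 1), making it less prone to
    fitting label noise than plain cross-entropy."""
    p_y = F.softmax(logits, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.clamp_min(1e-7) ** q) / q).mean()
```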
no code implementations • 26 Aug 2022 • Bo-Shi Zou, Ming-Kun Xie, Sheng-Jun Huang
In this paper, we propose a novel framework for partial label learning with meta objective guided disambiguation (MoGD), which aims to recover the ground-truth label from the candidate label set by solving a meta objective on a small validation set.
no code implementations • 6 Jul 2022 • Feng Sun, Ming-Kun Xie, Sheng-Jun Huang
In this paper, we study the partial multi-label (PML) image classification problem, where each image is annotated with a candidate label set consisting of multiple relevant labels and other noisy labels.
1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.
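The conventional defense the claim refers to is adversarial training with an $\ell_\infty$ PGD attack of budget $\epsilon$; a minimal sketch of one training step (a generic recipe, with image inputs assumed to lie in [0, 1]):

```python
import torch
import torch.nn.functional as F

def pgd_adv_train_loss(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """One adversarial-training loss: craft an L-infinity PGD
    perturbation of budget `eps` (the conventional defense budget),
    then compute the loss on the perturbed inputs."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return F.cross_entropy(model((x + delta).clamp(0, 1)), y)
```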
1 code implementation • CVPR 2022 • Kun-Peng Ning, Xun Zhao, Yu Li, Sheng-Jun Huang
To tackle this open-set annotation (OSA) problem, we propose a new active learning framework called LfOSA, which boosts the classification performance with an effective sampling strategy to precisely detect examples from known classes for annotation.
no code implementations • NeurIPS 2021 • Ming-Kun Xie, Sheng-Jun Huang
However, the supervised information of pairwise relevance ordering is less informative than exact labels.
no code implementations • 11 Jul 2021 • Ye Shi, Shao-Yuan Li, Sheng-Jun Huang
Traditional supervised learning requires ground truth labels for the training data, whose collection can be difficult in many cases.
no code implementations • 16 May 2021 • Ming-Kun Xie, Sheng-Jun Huang
Class-conditional noise commonly exists in machine learning tasks, where the class label is corrupted with a probability that depends on its ground-truth class.
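A classical remedy for class-conditional noise is forward loss correction with a noise transition matrix T (Patrini et al., 2017), where T[i, j] = P(observed label j | true class i); it is sketched here for context:

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    """Forward correction: push the model's clean-class posterior
    through the transition matrix T before computing cross-entropy
    against the observed (noisy) labels."""
    clean_probs = F.softmax(logits, dim=1)
    noisy_probs = clean_probs @ T  # predicted distribution over noisy labels
    return F.nll_loss(torch.log(noisy_probs.clamp_min(1e-12)), noisy_labels)
```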
no code implementations • 27 Mar 2021 • Kun-Peng Ning, Lue Tao, Songcan Chen, Sheng-Jun Huang
Recently, much research has been devoted to improving model robustness by training with noise perturbations.
no code implementations • 27 Mar 2021 • Kun-Peng Ning, Hu Xu, Kun Zhu, Sheng-Jun Huang
Imitation learning is a primary approach to improving the efficiency of reinforcement learning by exploiting expert demonstrations.
2 code implementations • NeurIPS 2021 • Lue Tao, Lei Feng, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples.
no code implementations • 14 Jun 2020 • Kun-Peng Ning, Sheng-Jun Huang
In this paper, we propose a novel framework to adaptively learn the policy by jointly interacting with the environment and exploiting expert demonstrations.
3 code implementations • 12 Jan 2019 • Ying-Peng Tang, Guo-Xiang Li, Sheng-Jun Huang
Supervised machine learning methods usually require a large set of labeled examples for model training.
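Active learning reduces this labeling cost by querying only the most useful examples; the simplest pool-based strategy, least-confidence uncertainty sampling, is sketched below (a generic illustration, not a specific toolbox API):

```python
import numpy as np

def least_confidence_query(probs, batch_size=10):
    """Pool-based uncertainty sampling: query the unlabeled examples
    whose most-likely class has the lowest predicted probability."""
    return np.argsort(probs.max(axis=1))[:batch_size]  # indices to label
```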
no code implementations • 21 Nov 2018 • Chuanxing Geng, Sheng-Jun Huang, Songcan Chen
A more realistic scenario is open set recognition (OSR), where incomplete knowledge of the world exists at training time and unknown classes can be submitted to the algorithm during testing, requiring classifiers not only to accurately classify the seen classes but also to effectively deal with unseen ones.
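The simplest open-set baseline the OSR literature starts from is thresholding the closed-set softmax confidence; a minimal sketch (a common baseline for context, not a method proposed here):

```python
import numpy as np

def osr_predict(probs, reject_threshold=0.5):
    """Closed-set classifier turned open-set by thresholding: predict
    the argmax known class, but emit -1 ("unknown") when the top
    softmax probability falls below the threshold."""
    pred = probs.argmax(axis=1)
    pred[probs.max(axis=1) < reject_threshold] = -1
    return pred
```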
no code implementations • 15 Feb 2018 • Sheng-Jun Huang, Jia-Wei Zhao, Zhao-Yang Liu
Deep convolutional neural networks have achieved great success in various applications.
no code implementations • 15 Feb 2018 • Sheng-Jun Huang, Miao Xu, Ming-Kun Xie, Masashi Sugiyama, Gang Niu, Songcan Chen
Feature missing is a serious problem in many applications, which may lead to low quality of training data and further significantly degrade the learning performance.
no code implementations • 20 Dec 2016 • Muhammad Yousefnezhad, Sheng-Jun Huang, Daoqiang Zhang
We employ four conditions of the WOC theory, i.e., diversity, independence, decentralization, and aggregation, to guide both the construction of individual clustering results and the final combination for the clustering ensemble.
no code implementations • 8 Oct 2013 • Sheng-Jun Huang, Zhi-Hua Zhou
Although the MIML problem is complicated, MIMLfast is able to achieve excellent performance by exploiting label relations in a shared space and discovering sub-concepts for complicated labels.
no code implementations • NeurIPS 2010 • Sheng-Jun Huang, Rong Jin, Zhi-Hua Zhou
Most active learning approaches select either informative or representative unlabeled instances to query their labels.
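A schematic way to combine the two criteria, with margin-based uncertainty for informativeness and average similarity to the pool for representativeness, is sketched below; the additive score and the margin/similarity choices are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def informative_and_representative(probs, X_pool, beta=1.0, k=5):
    """Toy acquisition score combining both criteria: small prediction
    margin (informative) plus high average cosine similarity to the
    rest of the pool (representative)."""
    sorted_p = np.sort(probs, axis=1)
    informative = 1.0 - (sorted_p[:, -1] - sorted_p[:, -2])  # small margin = uncertain
    Xn = X_pool / (np.linalg.norm(X_pool, axis=1, keepdims=True) + 1e-12)
    representative = (Xn @ Xn.T).mean(axis=1)                # dense region = representative
    return np.argsort(-(informative + beta * representative))[:k]
```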