1 code implementation • COLING 2022 • Zezhong Xu, Peng Ye, Hui Chen, Meng Zhao, Huajun Chen, Wen Zhang
Based on this idea, we propose a transformer-based rule mining approach, Ruleformer.
1 code implementation • 23 Mar 2024 • Hancheng Ye, Chong Yu, Peng Ye, Renqiu Xia, Yansong Tang, Jiwen Lu, Tao Chen, Bo Zhang
Recent Vision Transformer Compression (VTC) works mainly follow a two-stage scheme, where the importance score of each model unit is first evaluated or preset in each submodule, followed by the sparsity score evaluation according to the target sparsity constraint.
no code implementations • 19 Mar 2024 • Zezhong Xu, Peng Ye, Lei Liang, Huajun Chen, Wen Zhang
Answering logical queries on knowledge graphs (KG) poses a significant challenge for machine reasoning.
no code implementations • 11 Mar 2024 • Shengji Tang, Weihao Lin, Hancheng Ye, Peng Ye, Chong Yu, Baopu Li, Tao Chen
To alleviate this issue, we first study and reveal the relative sparsity effect in emerging stimulative training and then propose a structured pruning framework, named STP, based on an enhanced sparsification paradigm which maintains the magnitude of dropped weights and enhances the expressivity of kept weights by self-distillation.
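A hypothetical sketch of the magnitude-maintaining sparsification idea described above: when weights are dropped, their magnitude is redistributed onto the kept weights in the same row, so the row's total magnitude is preserved. The redistribution rule here is illustrative only, not STP's actual formulation.

```python
import math

def sparsify_row(row, keep_mask):
    """Zero out dropped weights and boost kept ones to preserve total magnitude."""
    dropped = sum(abs(w) for w, k in zip(row, keep_mask) if not k)
    n_kept = sum(keep_mask)
    boost = dropped / n_kept if n_kept else 0.0
    return [
        (w + boost if w >= 0 else w - boost) if k else 0.0
        for w, k in zip(row, keep_mask)
    ]

row = [0.5, -0.2, 0.1, -0.4]
mask = [True, False, False, True]
pruned = sparsify_row(row, mask)
# Half the weights are removed, yet the row's total magnitude is unchanged.
assert math.isclose(sum(abs(w) for w in pruned), sum(abs(w) for w in row))
```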
1 code implementation • 5 Mar 2024 • JianJian Cao, Peng Ye, Shengze Li, Chong Yu, Yansong Tang, Jiwen Lu, Tao Chen
To this end, we propose a novel framework named Multimodal Alignment-Guided Dynamic Token Pruning (MADTP) for accelerating various VLTs.
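A toy sketch of alignment-guided token pruning: keep only the visual tokens whose alignment score with the text modality is highest, under a target keep ratio. The scoring and selection rule are illustrative, not MADTP's actual mechanism.

```python
def prune_tokens(token_scores, keep_ratio):
    """Keep the top keep_ratio fraction of tokens by alignment score."""
    k = max(1, int(len(token_scores) * keep_ratio))
    kept = sorted(range(len(token_scores)),
                  key=lambda i: -token_scores[i])[:k]
    return sorted(kept)  # preserve original token order

scores = [0.1, 0.9, 0.4, 0.8, 0.2, 0.7]
print(prune_tokens(scores, 0.5))  # indices of the 3 best-aligned tokens
```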
no code implementations • 6 Feb 2024 • Junchao Gong, Lei Bai, Peng Ye, Wanghan Xu, Na Liu, Jianhua Dai, Xiaokang Yang, Wanli Ouyang
Precipitation nowcasting based on radar data plays a crucial role in extreme weather prediction and has broad implications for disaster management.
1 code implementation • 23 Jan 2024 • Shengze Li, JianJian Cao, Peng Ye, Yuhan Ding, Chongjun Tu, Tao Chen
Recently, foundational models such as CLIP and SAM have shown promising performance for the task of Zero-Shot Anomaly Segmentation (ZSAS).
no code implementations • 25 Dec 2023 • Peng Ye, Yongqi Huang, Chongjun Tu, Minglei Li, Tao Chen, Tong He, Wanli Ouyang
We first validate eight manually-defined partial fine-tuning strategies across various datasets and vision transformer architectures, and find that some partial fine-tuning strategies (e.g., ffn only or attention only) can achieve better performance with fewer tuned parameters than full fine-tuning, and that selecting appropriate layers is critical to partial fine-tuning.
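A minimal sketch of selecting a partial fine-tuning strategy by parameter name, assuming a ViT-style naming scheme; the "attn"/"ffn" substrings and strategy names are illustrative, not the paper's actual code.

```python
def select_tunable(param_names, strategy):
    """Return the subset of parameters to update under a given strategy."""
    patterns = {
        "full": lambda n: True,
        "ffn_only": lambda n: "ffn" in n,
        "attn_only": lambda n: "attn" in n,
    }
    keep = patterns[strategy]
    return [n for n in param_names if keep(n)]

names = [
    "blocks.0.attn.qkv.weight",
    "blocks.0.ffn.fc1.weight",
    "blocks.1.attn.proj.weight",
    "head.weight",
]
print(select_tunable(names, "ffn_only"))  # only the FFN weights are tuned
```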
no code implementations • 25 Dec 2023 • Peng Ye, Chenyu Huang, Mingzhu Shen, Tao Chen, Yongqi Huang, Yuning Zhang, Wanli Ouyang
This work aims to merge various Vision Transformers (ViTs) trained on different tasks (i.e., datasets with different object categories) or domains (i.e., datasets with the same categories but different environments) into one unified model that still performs well on each task or domain.
no code implementations • 22 Dec 2023 • Ruiyuan Ming, Peng Ye, Kuojun Yang, Zhixiang Pan, Li Chen, Xuetao Liu
Meanwhile, the decimation factor of the CIC filter can be adjusted flexibly over a wide range, improving the flexibility of the system configuration.
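A simplified sketch of a first-order CIC decimator with an adjustable decimation factor R (integrator, downsample by R, comb), illustrating how R can be changed without redesigning any filter coefficients; a real CIC would typically cascade several stages.

```python
def cic_decimate(x, R):
    """First-order CIC decimation: integrate, take every R-th sample, comb."""
    acc, integ = 0, []
    for s in x:          # integrator: running sum
        acc += s
        integ.append(acc)
    v = integ[R - 1::R]  # downsample by R
    # comb: y[n] = v[n] - v[n-1], so each output sums R input samples
    return [v[0]] + [v[i] - v[i - 1] for i in range(1, len(v))]

print(cic_decimate([1] * 8, 4))  # [4, 4]
print(cic_decimate([1] * 8, 2))  # [2, 2, 2, 2]
```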
no code implementations • 21 Dec 2023 • Jingdong Zhang, Jiayuan Fan, Peng Ye, Bo Zhang, Hancheng Ye, Baopu Li, Yancheng Cai, Tao Chen
In this work, we propose to learn a comprehensive intermediate feature globally from both task-generic and task-specific features, and we reveal an important fact: this intermediate feature, namely the bridge feature, is a good solution to the above issues.
no code implementations • 21 Dec 2023 • Chongjun Tu, Peng Ye, Weihao Lin, Hancheng Ye, Chong Yu, Tao Chen, Baopu Li, Wanli Ouyang
Improving the efficiency of Neural Architecture Search (NAS) is a challenging but significant task that has received much attention.
no code implementations • 18 Dec 2023 • Kun Chen, Lei Bai, Fenghua Ling, Peng Ye, Tao Chen, Jing-Jia Luo, Hao Chen, Yi Xiao, Kang Chen, Tao Han, Wanli Ouyang
Initial states are typically generated by traditional data assimilation components, which are computationally expensive and time-consuming.
no code implementations • 11 Oct 2023 • Chaoqi Liang, Weiqiang Bai, Lifeng Qiao, Yuchen Ren, Jianle Sun, Peng Ye, Hongliang Yan, Xinzhu Ma, WangMeng Zuo, Wanli Ouyang
To address this research gap, we first conducted a series of exploratory experiments and gained several insightful observations:
1) In the fine-tuning phase of downstream tasks, when using K-mer overlapping tokenization instead of K-mer non-overlapping tokenization, both overlapping and non-overlapping pre-training weights show consistent performance improvement.
2) During the pre-training process, using K-mer overlapping tokenization quickly produces clear K-mer embeddings and reduces the loss to a very low level, while using K-mer non-overlapping tokenization results in less distinct embeddings and a loss that keeps decreasing.
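The contrast between the two tokenization schemes can be sketched in a few lines (k=3 here for illustration): overlapping K-mers slide the window one base at a time, while non-overlapping K-mers advance by k.

```python
def kmer_tokens(seq, k, overlapping=True):
    """Tokenize a DNA sequence into K-mers, overlapping or not."""
    step = 1 if overlapping else k
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, step)]

seq = "ATGCGT"
print(kmer_tokens(seq, 3, overlapping=True))   # ['ATG', 'TGC', 'GCG', 'CGT']
print(kmer_tokens(seq, 3, overlapping=False))  # ['ATG', 'CGT']
```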
1 code implementation • 20 Sep 2023 • Renqiu Xia, Bo Zhang, Haoyang Peng, Hancheng Ye, Xiangchao Yan, Peng Ye, Botian Shi, Yu Qiao, Junchi Yan
Charts are common in literature across different scientific fields, conveying rich information easily accessible to readers.
Ranked #19 on Chart Question Answering on ChartQA (using extra training data)
1 code implementation • 26 Aug 2023 • Shengji Tang, Peng Ye, Baopu Li, Weihao Lin, Tao Chen, Tong He, Chong Yu, Wanli Ouyang
Specifically, we implicitly divide all subnets into hierarchical groups by subnet-in-subnet sampling, aggregate the knowledge of different subnets in each group during training, and exploit upper-level group knowledge to supervise lower-level subnet groups.
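A hypothetical sketch of subnet-in-subnet sampling: each sampled subnet's width is drawn inside the previous (larger) subnet's, implicitly forming hierarchical groups from largest to smallest; the shrink range and rounding are illustrative choices, not the paper's.

```python
import random

def sample_nested_subnets(num_groups, full_width=1.0, seed=0):
    """Sample strictly nested subnet width ratios, largest first."""
    rng = random.Random(seed)
    widths, cur = [], full_width
    for _ in range(num_groups):
        widths.append(cur)
        cur = round(cur * rng.uniform(0.5, 0.9), 3)  # shrink inside previous
    return widths

groups = sample_nested_subnets(4)
# Upper-level (wider) groups can then supervise lower-level (narrower) ones.
assert all(a > b for a, b in zip(groups, groups[1:]))
```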
no code implementations • 18 Aug 2023 • Ruiyuan Ming, Peng Ye, Kuojun Yang, Zhixiang Pan, Chenyang Li, Chuang Huang
Real-time frequency measurement for non-repetitive and statistically rare signals is a challenging problem in the electronic measurement area, placing high demands on the bandwidth, sampling rate, and data processing and transmission capabilities of the measurement system.
1 code implementation • 17 Aug 2023 • Mengyao Li, Liquan Shen, Peng Ye, Guorui Feng, Zheyin Wang
Subsequently, an extreme UWI compression network with reference to the feature dictionary (RFD-ECNet) is proposed, which utilizes feature matching and reference feature variants to significantly reduce redundancy among UWIs.
no code implementations • 11 Aug 2023 • Yongqi Huang, Peng Ye, Xiaoshui Huang, Sheng Li, Tao Chen, Tong He, Wanli Ouyang
As Vision Transformers (ViTs) gradually surpass CNNs in various visual tasks, one may ask: does a training scheme specifically for ViTs exist that can also achieve performance improvement without increasing inference cost?
no code implementations • 15 Jun 2023 • Jingyi Zhou, Jiamu Sheng, Jiayuan Fan, Peng Ye, Tong He, Bin Wang, Tao Chen
Learning effective spectral-spatial features is important for the hyperspectral image (HSI) classification task, but the majority of existing HSI classification methods still suffer from modeling complex spectral-spatial relations and characterizing low-level details and high-level semantics comprehensively.
no code implementations • 4 May 2023 • Peng Ye, Tong He, Shengji Tang, Baopu Li, Tao Chen, Lei Bai, Wanli Ouyang
In this work, we aim to re-investigate the training process of residual networks from a novel social psychology perspective of loafing, and further propose a new training scheme as well as three improved strategies for boosting residual networks beyond their performance limits.
no code implementations • 23 Feb 2023 • Lin Zhan, Jiayuan Fan, Peng Ye, JianJian Cao
To address the above issues, we propose a multi-stage search architecture in order to overcome asymmetric spectral-spatial dimensions and capture significant features.
Hyperspectral Image Classification • Neural Architecture Search
no code implementations • 20 Feb 2023 • Jiamu Sheng, Jiayuan Fan, Peng Ye, JianJian Cao
Despite substantial progress in no-reference image quality assessment (NR-IQA), previous training models often suffer from over-fitting due to the limited scale of used datasets, resulting in model performance bottlenecks.
1 code implementation • 16 Jan 2023 • Peng Ye, Tong He, Baopu Li, Tao Chen, Lei Bai, Wanli Ouyang
To address the robustness problem, we first benchmark different NAS methods under a wide range of proxy data, proxy channels, proxy layers and proxy epochs, since the robustness of NAS under different kinds of proxies has not been explored before.
1 code implementation • ICCV 2023 • Mengyao Li, Liquan Shen, Peng Ye, Guorui Feng, Zheyin Wang
Subsequently, an extreme UWI compression network with reference to the feature dictionary (RFD-ECNet) is proposed, which utilizes feature matching and reference feature variants to significantly reduce redundancy among UWIs.
no code implementations • 13 Oct 2022 • Peng Ye, Zhifeng Jiang, Wei Wang, Bo Li, Baochun Li
To address this problem, we develop a novel feature protection scheme against the reconstruction attack that effectively misleads the search to some pre-specified random values.
1 code implementation • 9 Oct 2022 • Peng Ye, Shengji Tang, Baopu Li, Tao Chen, Wanli Ouyang
In this work, we aim to re-investigate the training process of residual networks from a novel social psychology perspective of loafing, and further propose a new training strategy to strengthen the performance of residual networks.
no code implementations • 19 Sep 2022 • Zezhong Xu, Wen Zhang, Peng Ye, Hui Chen, Huajun Chen
In this work, we propose a Neural and Symbolic Entangled framework (ENeSy) for complex query answering, which enables the neural and symbolic reasoning to enhance each other to alleviate the cascading error and KG incompleteness.
no code implementations • 10 Aug 2022 • Peng Ye, Baopu Li, Tao Chen, Jiayuan Fan, Zhen Mei, Chen Lin, Chongyan Zuo, Qinghua Chi, Wanli Ouyang
In this paper, we intend to search for an optimal network structure that can run in real time for this problem.
1 code implementation • 3 Mar 2022 • Peng Ye, Baopu Li, Yikang Li, Tao Chen, Jiayuan Fan, Wanli Ouyang
Neural Architecture Search (NAS) has attracted increasingly more attention in recent years because of its capability to design deep neural networks automatically.
1 code implementation • 30 Jan 2022 • Bicheng Guo, Tao Chen, Shibo He, Haoyu Liu, Lilin Xu, Peng Ye, Jiming Chen
The NAR explores the quality tiers of the search space globally and classifies each individual into the tier it belongs to according to its global ranking.
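A toy sketch of tier classification by global ranking, assuming quality tiers are defined as equal-sized quantiles of a score; the scores and the quantile rule are illustrative, not NAR's actual tier definition.

```python
def assign_tiers(scores, num_tiers):
    """Map each architecture to a quality tier (0 = best) by global rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tier_size = -(-len(scores) // num_tiers)  # ceiling division
    tiers = [0] * len(scores)
    for rank, idx in enumerate(order):
        tiers[idx] = rank // tier_size
    return tiers

scores = [0.91, 0.75, 0.88, 0.60, 0.95, 0.70]
print(assign_tiers(scores, 3))  # [0, 1, 1, 2, 0, 2]
```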
1 code implementation • CVPR 2022 • Peng Ye, Baopu Li, Yikang Li, Tao Chen, Jiayuan Fan, Wanli Ouyang
Neural Architecture Search (NAS) has attracted increasingly more attention in recent years because of its capability to design deep neural networks automatically.
no code implementations • 3 Jul 2021 • Cyrus Rashtchian, David P. Woodruff, Peng Ye, Hanlin Zhu
Our motivation is to understand the statistical-computational trade-offs in streaming, sketching, and query-based models.
no code implementations • 26 Dec 2020 • Zhi-Feng Zhang, Peng Ye
In this paper, we provide a field-theoretical approach towards a complete list of mutually compatible braiding phases of topological orders in (3+1)D spacetime.
Strongly Correlated Electrons • High Energy Physics - Theory • Mathematical Physics
no code implementations • WS 2012 • Michael Bloodgood, Peng Ye, Paul Rodrigues, David Zajic, David Doermann
We investigate combining methods and show that using random forests is a promising approach.
no code implementations • 29 Oct 2014 • Paul Rodrigues, David Zajic, David Doermann, Michael Bloodgood, Peng Ye
Dictionaries are often developed using tools that save to Extensible Markup Language (XML)-based standards.
no code implementations • CVPR 2014 • Le Kang, Peng Ye, Yi Li, David Doermann
In this work we describe a Convolutional Neural Network (CNN) to accurately predict image quality without a reference image.
no code implementations • CVPR 2014 • Peng Ye, Jayant Kumar, David Doermann
Instead of training on human opinion scores, we propose to train BIQA models on synthetic scores derived from Full-Reference (FR) IQA measures.
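A minimal sketch of the idea above: label distorted images with a Full-Reference measure (here a toy PSNR-like score for signals in [0, 1]) instead of human opinion scores, then train a blind model on the (feature, synthetic score) pairs. All numbers are illustrative.

```python
import math

def fr_score(ref, dist):
    """Toy PSNR-like Full-Reference quality score (higher = better)."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
    return 10 * math.log10(1.0 / (mse + 1e-12))

ref = [0.2, 0.5, 0.8, 0.4]
dists = [[r + 0.01 * s for r in ref] for s in range(1, 4)]  # growing distortion
labels = [fr_score(ref, d) for d in dists]  # synthetic training targets
# Stronger distortion yields a lower synthetic score, as a usable label should.
assert labels[0] > labels[1] > labels[2]
```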
no code implementations • CVPR 2014 • Peng Ye, David Doermann
Subjective tests based on the Mean Opinion Score (MOS) have been widely used in previous studies, but have many known problems such as an ambiguous scale definition and dissimilar interpretations of the scale among subjects.
no code implementations • CVPR 2013 • Peng Ye, Jayant Kumar, Le Kang, David Doermann
Second, the proposed method has the potential to be used in multiple image domains.