Search Results for author: Peng Ye

Found 40 papers, 13 papers with code

Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression

1 code implementation • 23 Mar 2024 • Hancheng Ye, Chong Yu, Peng Ye, Renqiu Xia, Yansong Tang, Jiwen Lu, Tao Chen, Bo Zhang

Recent Vision Transformer Compression (VTC) works mainly follow a two-stage scheme, where the importance score of each model unit is first evaluated or preset in each submodule, followed by the sparsity score evaluation according to the target sparsity constraint.

Dimensionality Reduction
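
For context, a minimal sketch of the two-stage scheme summarized above (importance scoring per submodule, then unit selection under a target sparsity constraint), not the single-stage search proposed in the paper; the L1-magnitude criterion and toy layer size are illustrative assumptions.

```python
import torch

def importance_scores(weight: torch.Tensor) -> torch.Tensor:
    # Stage 1: score each output unit (row) by its L1 weight magnitude.
    # Magnitude scoring is an illustrative choice, not the paper's criterion.
    return weight.abs().sum(dim=1)

def select_units(scores: torch.Tensor, target_sparsity: float) -> torch.Tensor:
    # Stage 2: keep the top (1 - target_sparsity) fraction of units so the
    # submodule meets the global sparsity constraint.
    n_keep = max(1, int(round(scores.numel() * (1.0 - target_sparsity))))
    keep = torch.zeros_like(scores, dtype=torch.bool)
    keep[scores.topk(n_keep).indices] = True
    return keep

w = torch.randn(64, 128)                  # e.g. one linear layer of a ViT block
mask = select_units(importance_scores(w), target_sparsity=0.5)
print(w[mask].shape)                      # 32 of 64 output units survive
```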

Prompt-fused framework for Inductive Logical Query Answering

no code implementations • 19 Mar 2024 • Zezhong Xu, Peng Ye, Lei Liang, Huajun Chen, Wen Zhang

Answering logical queries on knowledge graphs (KG) poses a significant challenge for machine reasoning.

Knowledge Graphs

Enhanced Sparsification via Stimulative Training

no code implementations • 11 Mar 2024 • Shengji Tang, Weihao Lin, Hancheng Ye, Peng Ye, Chong Yu, Baopu Li, Tao Chen

To alleviate this issue, we first study and reveal the relative sparsity effect in emerging stimulative training and then propose a structured pruning framework, named STP, based on an enhanced sparsification paradigm which maintains the magnitude of dropped weights and enhances the expressivity of kept weights by self-distillation.

Knowledge Distillation • Model Compression
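
A minimal sketch of the enhanced sparsification idea described above, assuming a random structured channel mask (dropped channels are only masked in the forward pass, so their weights keep their magnitude) and a standard self-distillation loss; the mask sampling, temperature, and loss weighting are illustrative assumptions, not the STP algorithm.

```python
import torch
import torch.nn.functional as F

def channel_mask(num_channels: int, keep_ratio: float) -> torch.Tensor:
    # Structured mask over output channels; dropped channels are only masked
    # in the forward pass, so their weights keep their magnitude.
    n_keep = max(1, int(num_channels * keep_ratio))
    mask = torch.zeros(num_channels)
    mask[torch.randperm(num_channels)[:n_keep]] = 1.0
    return mask.view(1, -1, 1, 1)            # broadcast over (N, C, H, W)

def self_distillation_loss(full_logits, pruned_logits, labels, T=4.0, alpha=0.5):
    # The pruned sub-network is supervised by the labels and by the full
    # network's softened predictions (self-distillation).
    ce = F.cross_entropy(pruned_logits, labels)
    kd = F.kl_div(
        F.log_softmax(pruned_logits / T, dim=1),
        F.softmax(full_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd

feat = torch.randn(2, 64, 14, 14)
masked = feat * channel_mask(64, keep_ratio=0.5)   # pruned forward pass on features
```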

MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer

1 code implementation • 5 Mar 2024 • JianJian Cao, Peng Ye, Shengze Li, Chong Yu, Yansong Tang, Jiwen Lu, Tao Chen

To this end, we propose a novel framework named Multimodal Alignment-Guided Dynamic Token Pruning (MADTP) for accelerating various VLTs.

CasCast: Skillful High-resolution Precipitation Nowcasting via Cascaded Modelling

no code implementations • 6 Feb 2024 • Junchao Gong, Lei Bai, Peng Ye, Wanghan Xu, Na Liu, Jianhua Dai, Xiaokang Yang, Wanli Ouyang

Precipitation nowcasting based on radar data plays a crucial role in extreme weather prediction and has broad implications for disaster management.

Management

ClipSAM: CLIP and SAM Collaboration for Zero-Shot Anomaly Segmentation

1 code implementation • 23 Jan 2024 • Shengze Li, JianJian Cao, Peng Ye, Yuhan Ding, Chongjun Tu, Tao Chen

Recently, foundational models such as CLIP and SAM have shown promising performance for the task of Zero-Shot Anomaly Segmentation (ZSAS).

Segmentation

Partial Fine-Tuning: A Successor to Full Fine-Tuning for Vision Transformers

no code implementations • 25 Dec 2023 • Peng Ye, Yongqi Huang, Chongjun Tu, Minglei Li, Tao Chen, Tong He, Wanli Ouyang

We first validate eight manually defined partial fine-tuning strategies across a range of datasets and vision transformer architectures, and find that some of them (e.g., FFN-only or attention-only) achieve better performance with fewer tuned parameters than full fine-tuning, and that selecting appropriate layers is critical to partial fine-tuning.
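
A minimal sketch of the FFN-only / attention-only strategies mentioned above, assuming torchvision's ViT parameter naming (`mlp`, `self_attention`, `heads`); adapt the name patterns to other architectures.

```python
import torch.nn as nn
from torchvision.models import vit_b_16

def partial_finetune(model: nn.Module, mode: str = "ffn_only") -> nn.Module:
    # Freeze everything, then unfreeze only the chosen sub-modules.
    patterns = {"ffn_only": ("mlp",), "attention_only": ("self_attention",)}[mode]
    for name, p in model.named_parameters():
        p.requires_grad = any(pat in name for pat in patterns)
    for p in model.heads.parameters():     # the classification head is tuned too
        p.requires_grad = True
    return model

model = partial_finetune(vit_b_16(weights=None), mode="ffn_only")  # load pretrained weights in practice
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable params: {trainable / 1e6:.1f}M")
```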

Merging Vision Transformers from Different Tasks and Domains

no code implementations • 25 Dec 2023 • Peng Ye, Chenyu Huang, Mingzhu Shen, Tao Chen, Yongqi Huang, Yuning Zhang, Wanli Ouyang

This work aims to merge various Vision Transformers (ViTs) trained on different tasks (i.e., datasets with different object categories) or domains (i.e., datasets with the same categories but different environments) into one unified model that still performs well on each task or domain.
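
For illustration, a naive parameter-averaging baseline for merging ViTs that share one architecture; this is only a hypothetical starting point, not the merging method proposed in the paper, and task-specific classification heads would normally be kept separate.

```python
import copy
import torch

def average_merge(models):
    # Element-wise average of parameters from ViTs fine-tuned on different
    # tasks or domains but sharing the same architecture.
    merged = copy.deepcopy(models[0])
    state_dicts = [m.state_dict() for m in models]
    merged.load_state_dict({
        k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
        for k in state_dicts[0]
    })
    return merged
```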

Wideband Sample Rate Converter Using Cascaded Parallel-serial Structure for Synthetic Instrumentation

no code implementations • 22 Dec 2023 • Ruiyuan Ming, Peng Ye, Kuojun Yang, Zhixiang Pan, Li Chen, Xuetao Liu

Meanwhile, the decimation factor of the CIC filter can be adjusted flexibly over a wide range, which improves the flexibility of the system configuration.
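
A minimal floating-point model of a CIC decimator with a programmable decimation factor R, assuming three stages and unit differential delay; the converter in the paper is a fixed-point hardware cascade, so this is only a behavioural sketch.

```python
import numpy as np

def cic_decimate(x: np.ndarray, R: int, N: int = 3, M: int = 1) -> np.ndarray:
    # N cascaded integrators at the input rate, decimation by R, then
    # N cascaded combs (differential delay M) at the output rate.
    y = x.astype(np.float64)
    for _ in range(N):                      # integrator stages
        y = np.cumsum(y)
    y = y[::R]                              # rate reduction by R
    for _ in range(N):                      # comb stages
        y = y - np.concatenate((np.zeros(M), y[:-M]))
    return y / (R * M) ** N                 # remove the CIC DC gain

fs_in = 100e6
t = np.arange(4096) / fs_in
x = np.sin(2 * np.pi * 1e6 * t)
print(cic_decimate(x, R=8).shape)           # decimation factor R is programmable
```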

Rethinking of Feature Interaction for Multi-task Learning on Dense Prediction

no code implementations • 21 Dec 2023 • Jingdong Zhang, Jiayuan Fan, Peng Ye, Bo Zhang, Hancheng Ye, Baopu Li, Yancheng Cai, Tao Chen

In this work, we propose to learn a comprehensive intermediate feature globally from both task-generic and task-specific features, and we reveal that this intermediate feature, namely the bridge feature, is a good solution to the above issues.

Decoder • Multi-Task Learning
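
A hypothetical fusion module illustrating the idea of one shared intermediate (bridge) feature learned from task-generic and task-specific features; the concatenation-plus-1x1-convolution design is an assumption, not the architecture in the paper.

```python
import torch
import torch.nn as nn

class BridgeFeature(nn.Module):
    # Fuses a task-generic feature with per-task features into one shared
    # intermediate feature that every task decoder can read from.
    def __init__(self, channels: int, num_tasks: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * (num_tasks + 1), channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, generic_feat, task_feats):
        return self.fuse(torch.cat([generic_feat, *task_feats], dim=1))

bridge = BridgeFeature(channels=64, num_tasks=2)
g = torch.randn(1, 64, 32, 32)
out = bridge(g, [torch.randn(1, 64, 32, 32) for _ in range(2)])
print(out.shape)   # torch.Size([1, 64, 32, 32])
```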

Efficient Architecture Search via Bi-level Data Pruning

no code implementations • 21 Dec 2023 • Chongjun Tu, Peng Ye, Weihao Lin, Hancheng Ye, Chong Yu, Tao Chen, Baopu Li, Wanli Ouyang

Improving the efficiency of Neural Architecture Search (NAS) is a challenging but significant task that has received much attention.

Neural Architecture Search

Towards an end-to-end artificial intelligence driven global weather forecasting system

no code implementations • 18 Dec 2023 • Kun Chen, Lei Bai, Fenghua Ling, Peng Ye, Tao Chen, Jing-Jia Luo, Hao Chen, Yi Xiao, Kang Chen, Tao Han, Wanli Ouyang

Initial states are typically generated by traditional data assimilation components, which are computationally expensive and time-consuming.

Weather Forecasting

Rethinking the BERT-like Pretraining for DNA Sequences

no code implementations • 11 Oct 2023 • Chaoqi Liang, Weiqiang Bai, Lifeng Qiao, Yuchen Ren, Jianle Sun, Peng Ye, Hongliang Yan, Xinzhu Ma, WangMeng Zuo, Wanli Ouyang

To address this research gap, we first conducted a series of exploratory experiments and gained several insightful observations: 1) In the fine-tuning phase of downstream tasks, when using K-mer overlapping tokenization instead of K-mer non-overlapping tokenization, both overlapping and non-overlapping pretraining weights show consistent performance improvement. 2) During the pre-training process, using K-mer overlapping tokenization quickly produces clear K-mer embeddings and reduces the loss to a very low level, while using K-mer non-overlapping tokenization results in less distinct embeddings and continuously decreases the loss.
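
To make the tokenization comparison concrete, a minimal sketch of K-mer overlapping versus non-overlapping tokenization (k = 6 is a common choice but illustrative here):

```python
def kmer_tokens(seq: str, k: int = 6, overlapping: bool = True):
    # Overlapping: slide one base at a time; non-overlapping: jump k bases.
    step = 1 if overlapping else k
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, step)]

dna = "ATGCGTACGTTAG"
print(kmer_tokens(dna, k=6, overlapping=True))
# ['ATGCGT', 'TGCGTA', 'GCGTAC', 'CGTACG', 'GTACGT', 'TACGTT', 'ACGTTA', 'CGTTAG']
print(kmer_tokens(dna, k=6, overlapping=False))
# ['ATGCGT', 'ACGTTA']
```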

StructChart: Perception, Structuring, Reasoning for Visual Chart Understanding

1 code implementation • 20 Sep 2023 • Renqiu Xia, Bo Zhang, Haoyang Peng, Hancheng Ye, Xiangchao Yan, Peng Ye, Botian Shi, Yu Qiao, Junchi Yan

Charts are common in literature across different scientific fields, conveying rich information easily accessible to readers.

Ranked #19 on Chart Question Answering on ChartQA (using extra training data)

Chart Question Answering • Language Modelling • +2

Boosting Residual Networks with Group Knowledge

1 code implementation • 26 Aug 2023 • Shengji Tang, Peng Ye, Baopu Li, Weihao Lin, Tao Chen, Tong He, Chong Yu, Wanli Ouyang

Specifically, we implicitly divide all subnets into hierarchical groups by subnet-in-subnet sampling, aggregate the knowledge of different subnets in each group during training, and exploit upper-level group knowledge to supervise lower-level subnet groups.

Knowledge Distillation
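
A toy, weight-shared residual stack illustrating subnet sampling and upper-to-lower supervision with a single KL term; the subnet-in-subnet grouping and knowledge aggregation in the paper are more elaborate, and the module sizes and skip sets here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyResNet(nn.Module):
    # Toy residual stack whose forward pass can skip blocks, so different
    # "keep" sets define different subnets of one weight-shared network.
    def __init__(self, dim=32, depth=8, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth)
        )
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x, keep=None):
        for i, blk in enumerate(self.blocks):
            if keep is None or i in keep:
                x = x + blk(x)
        return self.head(x)

net, x = TinyResNet(), torch.randn(4, 32)
upper = set(range(8))                 # upper-level group: the full network
lower = set(range(0, 8, 2))           # lower-level group: a sampled sub-subnet
kd = F.kl_div(
    F.log_softmax(net(x, lower), dim=1),
    F.softmax(net(x, upper).detach(), dim=1),
    reduction="batchmean",
)                                     # upper-level knowledge supervises the lower level
```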

Real-time frequency measurement based on parallel pipeline FFT for time-stretched acquisition system

no code implementations • 18 Aug 2023 • Ruiyuan Ming, Peng Ye, Kuojun Yang, Zhixiang Pan, Chenyang Li, Chuang Huang

Real-time frequency measurement of non-repetitive and statistically rare signals is a challenging problem in the electronic measurement area, placing high demands on the bandwidth, sampling rate, data processing, and transmission capabilities of the measurement system.
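
A software sketch of single-frame FFT peak-picking for frequency estimation, assuming a 1 GS/s block and a Hann window; it illustrates only the measurement principle, not the parallel pipeline FFT hardware described in the paper.

```python
import numpy as np

def peak_frequency(samples: np.ndarray, fs: float) -> float:
    # Estimate the dominant frequency from one windowed FFT frame,
    # as a real-time engine would do on each acquired block.
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    return np.fft.rfftfreq(len(samples), d=1.0 / fs)[np.argmax(spectrum)]

fs = 1e9                              # 1 GS/s acquisition, illustrative
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 123.4e6 * t) + 0.1 * np.random.randn(t.size)
print(f"{peak_frequency(x, fs) / 1e6:.1f} MHz")   # ~123.3 MHz, limited by the ~244 kHz bin width
```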

RFD-ECNet: Extreme Underwater Image Compression with Reference to Feature Dictionary

1 code implementation • 17 Aug 2023 • Mengyao Li, Liquan Shen, Peng Ye, Guorui Feng, Zheyin Wang

Subsequently, an extreme UWI compression network with reference to the feature dictionary (RFD-ECNet) is creatively proposed, which utilizes feature matching and reference feature variants to significantly reduce redundancy among UWIs.

Image Compression • MORPH

Experts Weights Averaging: A New General Training Scheme for Vision Transformers

no code implementations • 11 Aug 2023 • Yongqi Huang, Peng Ye, Xiaoshui Huang, Sheng Li, Tao Chen, Tong He, Wanli Ouyang

As Vision Transformers (ViTs) are gradually surpassing CNNs in various visual tasks, one may ask: does a training scheme exist specifically for ViTs that can also improve performance without increasing inference cost?

When Hyperspectral Image Classification Meets Diffusion Models: An Unsupervised Feature Learning Framework

no code implementations • 15 Jun 2023 • Jingyi Zhou, Jiamu Sheng, Jiayuan Fan, Peng Ye, Tong He, Bin Wang, Tao Chen

Learning effective spectral-spatial features is important for the hyperspectral image (HSI) classification task, but the majority of existing HSI classification methods still struggle to model complex spectral-spatial relations and to characterize low-level details and high-level semantics comprehensively.

Classification • Hyperspectral Image Classification

Stimulative Training++: Go Beyond The Performance Limits of Residual Networks

no code implementations • 4 May 2023 • Peng Ye, Tong He, Shengji Tang, Baopu Li, Tao Chen, Lei Bai, Wanli Ouyang

In this work, we aim to re-investigate the training process of residual networks from a novel social psychology perspective of loafing, and further propose a new training scheme as well as three improved strategies for boosting residual networks beyond their performance limits.

A2S-NAS: Asymmetric Spectral-Spatial Neural Architecture Search For Hyperspectral Image Classification

no code implementations • 23 Feb 2023 • Lin Zhan, Jiayuan Fan, Peng Ye, JianJian Cao

To address the above issues, we propose a multi-stage search architecture in order to overcome asymmetric spectral-spatial dimensions and capture significant features.

Hyperspectral Image Classification • Neural Architecture Search

JNDMix: JND-Based Data Augmentation for No-reference Image Quality Assessment

no code implementations • 20 Feb 2023 • Jiamu Sheng, Jiayuan Fan, Peng Ye, JianJian Cao

Despite substantial progress in no-reference image quality assessment (NR-IQA), previous training models often suffer from over-fitting due to the limited scale of used datasets, resulting in model performance bottlenecks.

Data Augmentation • No-Reference Image Quality Assessment • +1

$β$-DARTS++: Bi-level Regularization for Proxy-robust Differentiable Architecture Search

1 code implementation • 16 Jan 2023 • Peng Ye, Tong He, Baopu Li, Tao Chen, Lei Bai, Wanli Ouyang

To address the robustness problem, we first benchmark different NAS methods under a wide range of proxy data, proxy channels, proxy layers and proxy epochs, since the robustness of NAS under different kinds of proxies has not been explored before.

Neural Architecture Search

RFD-ECNet: Extreme Underwater Image Compression with Reference to Feature Dictionary

1 code implementation • ICCV 2023 • Mengyao Li, Liquan Shen, Peng Ye, Guorui Feng, Zheyin Wang

Subsequently, an extreme UWI compression network with reference to the feature dictionary (RFD-ECNet) is creatively proposed, which utilizes feature matching and reference feature variants to significantly reduce redundancy among UWIs.

Image Compression • MORPH

Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning

no code implementations • 13 Oct 2022 • Peng Ye, Zhifeng Jiang, Wei Wang, Bo Li, Baochun Li

To address this problem, we develop a novel feature protection scheme against the reconstruction attack that effectively misleads the search to some pre-specified random values.

Reconstruction Attack • Vertical Federated Learning

Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing

1 code implementation • 9 Oct 2022 • Peng Ye, Shengji Tang, Baopu Li, Tao Chen, Wanli Ouyang

In this work, we aim to re-investigate the training process of residual networks from a novel social psychology perspective of loafing, and further propose a new training strategy to strengthen the performance of residual networks.

Neural-Symbolic Entangled Framework for Complex Query Answering

no code implementations • 19 Sep 2022 • Zezhong Xu, Wen Zhang, Peng Ye, Hui Chen, Huajun Chen

In this work, we propose a Neural and Symbolic Entangled framework (ENeSy) for complex query answering, which enables the neural and symbolic reasoning to enhance each other to alleviate the cascading error and KG incompleteness.

Complex Query Answering • Link Prediction • +1

$β$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search

1 code implementation • 3 Mar 2022 • Peng Ye, Baopu Li, Yikang Li, Tao Chen, Jiayuan Fan, Wanli Ouyang

Neural Architecture Search (NAS) has attracted increasingly more attention in recent years because of its capability to design deep neural networks automatically.

Neural Architecture Search

$β$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search

1 code implementation • CVPR 2022 • Peng Ye, Baopu Li, Yikang Li, Tao Chen, Jiayuan Fan, Wanli Ouyang

Neural Architecture Search (NAS) has attracted increasingly more attention in recent years because of its capability to design deep neural networks automatically.

Neural Architecture Search

Average-Case Communication Complexity of Statistical Problems

no code implementations • 3 Jul 2021 • Cyrus Rashtchian, David P. Woodruff, Peng Ye, Hanlin Zhu

Our motivation is to understand the statistical-computational trade-offs in streaming, sketching, and query-based models.

Compatible braidings with Hopf links, multi-loop, and Borromean rings in $(3+1)$-dimensional spacetime

no code implementations • 26 Dec 2020 • Zhi-Feng Zhang, Peng Ye

In this paper, we provide a field-theoretical approach towards a complete list of mutually compatible braiding phases of topological orders in (3+1)D spacetime.

Strongly Correlated Electrons • High Energy Physics - Theory • Mathematical Physics

Detecting Structural Irregularity in Electronic Dictionaries Using Language Modeling

no code implementations • 29 Oct 2014 • Paul Rodrigues, David Zajic, David Doermann, Michael Bloodgood, Peng Ye

Dictionaries are often developed using tools that save to Extensible Markup Language (XML)-based standards.

Language Modelling
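
A minimal sketch of detecting structural irregularity with a language model over per-entry XML tag sequences; the bigram model, add-alpha smoothing, and tiny toy corpus are illustrative assumptions, not the exact models used in the paper.

```python
from collections import Counter
from math import log

def tag_bigrams(tag_seq):
    seq = ["<s>"] + tag_seq + ["</s>"]
    return list(zip(seq, seq[1:]))

def train(entries):
    # Bigram counts over per-entry XML tag sequences.
    bi, uni = Counter(), Counter()
    for tags in entries:
        for a, b in tag_bigrams(tags):
            bi[(a, b)] += 1
            uni[a] += 1
    return bi, uni

def avg_logprob(tags, bi, uni, alpha=1.0, vocab=50):
    # Add-alpha smoothed bigram log-probability per transition; unusually
    # low values flag a structurally irregular entry.
    pairs = tag_bigrams(tags)
    return sum(
        log((bi[p] + alpha) / (uni[p[0]] + alpha * vocab)) for p in pairs
    ) / len(pairs)

corpus = [["headword", "pos", "sense"]] * 50 + [["headword", "sense", "pos"]]
bi, uni = train(corpus)
print(avg_logprob(["headword", "pos", "sense"], bi, uni))   # higher: regular
print(avg_logprob(["headword", "sense", "pos"], bi, uni))   # lower: flagged
```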

Active Sampling for Subjective Image Quality Assessment

no code implementations • CVPR 2014 • Peng Ye, David Doermann

Subjective tests based on the Mean Opinion Score (MOS) have been widely used in previous studies, but have many known problems such as an ambiguous scale definition and dissimilar interpretations of the scale among subjects.

Image Quality Assessment

Convolutional Neural Networks for No-Reference Image Quality Assessment

no code implementations • CVPR 2014 • Le Kang, Peng Ye, Yi Li, David Doermann

In this work we describe a Convolutional Neural Network (CNN) to accurately predict image quality without a reference image.

No-Reference Image Quality Assessment

Beyond Human Opinion Scores: Blind Image Quality Assessment based on Synthetic Scores

no code implementations • CVPR 2014 • Peng Ye, Jayant Kumar, David Doermann

Instead of training on human opinion scores, we propose to train BIQA models on synthetic scores derived from Full-Reference (FR) IQA measures.

Blind Image Quality Assessment
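
A minimal sketch of generating synthetic training scores from a full-reference measure; PSNR and Gaussian-noise distortions are illustrative stand-ins for the FR-IQA measures and distortion types used in the paper, and the resulting (image, score) pairs would then be used to train a blind model.

```python
import numpy as np

def psnr(ref: np.ndarray, dist: np.ndarray) -> float:
    # Simple full-reference measure; the paper relies on stronger FR-IQA metrics.
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def synthetic_training_pairs(ref: np.ndarray, noise_levels=(5, 15, 30)):
    # Distort the reference, score each distorted image with the FR measure,
    # and keep only (distorted image, synthetic score) pairs for BIQA training.
    rng = np.random.default_rng(0)
    pairs = []
    for sigma in noise_levels:
        dist = np.clip(ref + rng.normal(0, sigma, ref.shape), 0, 255)
        pairs.append((dist.astype(np.uint8), psnr(ref, dist)))
    return pairs

ref = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)
for img, score in synthetic_training_pairs(ref):
    print(f"synthetic score: {score:.1f} dB")
```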
