no code implementations • 19 Oct 2023 • Dayang Wang, Yongshun Xu, Shuo Han, Zhan Wu, Li Zhou, Bahareh Morovati, Hengyong Yu
Recently, transformer models emerged as a promising avenue to enhance LDCT image quality.
1 code implementation • 15 Aug 2023 • Juntong Fan, Debesh Jha, Tieyong Zeng, Dayang Wang
Polyp segmentation poses a significant challenge in medical imaging due to the flat surface of polyps and their texture similarity to surrounding tissues.
no code implementations • 22 Jul 2023 • Dayang Wang, Srivathsa Pasumarthi, Greg Zaharchuk, Ryan Chamberlain
In this work, we formulate a novel transformer-based (Gformer) iterative modelling approach for synthesizing images with arbitrary contrast enhancement corresponding to different dose levels.
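The entry does not spell out the Gformer architecture, so the sketch below only illustrates the general iterative-refinement idea under stated assumptions: a generic image-to-image `refiner` network (a plain convolution stands in for the transformer here) and a scalar `dose_level` passed as an extra conditioning channel; names and shapes are hypothetical.

```python
import torch
import torch.nn as nn

def iterative_synthesis(refiner, zero_dose_img, dose_level, steps=4):
    """Iteratively refine a synthesized contrast-enhanced image: at each step the
    current estimate plus a dose-level channel is fed back into the network,
    which predicts a residual update."""
    x = zero_dose_img
    dose = torch.full_like(zero_dose_img[:, :1], dose_level)  # dose level broadcast as a channel
    for _ in range(steps):
        x = x + refiner(torch.cat([x, dose], dim=1))          # residual refinement step
    return x

# Stand-in refiner: any 2-in / 1-out image-to-image network (the paper uses a transformer).
refiner = nn.Conv2d(2, 1, kernel_size=3, padding=1)
out = iterative_synthesis(refiner, torch.randn(1, 1, 64, 64), dose_level=0.5)
print(out.shape)  # torch.Size([1, 1, 64, 64])
```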
no code implementations • 10 Oct 2022 • Dayang Wang, Yongshun Xu, Shuo Han, Hengyong Yu
A plethora of transformer models have been developed recently to improve LDCT image quality.
no code implementations • 16 Sep 2022 • Dayang Wang, Boce Zhang, Yongshun Xu, Yaguang Luo, Hengyong Yu
To the best of our knowledge, this is the first deep learning approach to lettuce browning prediction, built on a pretrained Siamese Quadratic Swin (SQ-Swin) transformer with several highlights.
2 code implementations • 28 Feb 2022 • Dayang Wang, Fenglei Fan, Zhan Wu, Rui Liu, Fei Wang, Hengyong Yu
Furthermore, an overlapped inference mechanism is introduced to effectively eliminate the boundary artifacts that are common for encoder-decoder-based denoising models.
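A minimal sketch of what such overlapped inference can look like, assuming a generic `denoise_fn` patch denoiser; the patch size, stride, and simple averaging of overlapping regions are illustrative choices, not the paper's exact mechanism.

```python
import numpy as np

def overlapped_inference(image, denoise_fn, patch=64, stride=32):
    """Tile a 2D image with overlapping patches, denoise each patch, and average
    the overlaps so patch borders do not leave visible seams."""
    h, w = image.shape
    tops = sorted(set(list(range(0, h - patch + 1, stride)) + [h - patch]))
    lefts = sorted(set(list(range(0, w - patch + 1, stride)) + [w - patch]))
    out = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    for top in tops:
        for left in lefts:
            tile = image[top:top + patch, left:left + patch]
            out[top:top + patch, left:left + patch] += denoise_fn(tile)
            weight[top:top + patch, left:left + patch] += 1.0
    return out / weight  # every pixel is covered at least once

# Example with an identity "denoiser" stand-in.
img = np.random.rand(512, 512).astype(np.float32)
denoised = overlapped_inference(img, denoise_fn=lambda x: x)
```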
2 code implementations • 14 Jan 2022 • Dayang Wang, Feng-Lei Fan, Bo-Jian Hou, Hao Zhang, Zhen Jia, Boce Zhou, Rongjie Lai, Hengyong Yu, Fei Wang
A neural network with the widely used ReLU activation has been shown to partition the sample space into many convex polytopes for prediction.
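A toy illustration of that partitioning, using a small, randomly initialized one-hidden-layer ReLU network (not from the paper): inputs that share the same binary activation pattern lie in the same convex polytope, on which the network acts as a single affine map.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)   # 16 ReLU units, 2-D input
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)    # linear output head

def activation_pattern(x):
    """Binary on/off pattern of the ReLU units; one pattern = one convex polytope."""
    return tuple((W1 @ x + b1 > 0).astype(int))

# Sample the input square and count how many distinct polytopes (linear regions) are hit.
points = rng.uniform(-1, 1, size=(20000, 2))
patterns = {activation_pattern(p) for p in points}
print(f"distinct linear regions sampled: {len(patterns)}")
```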
2 code implementations • 8 Jun 2021 • Dayang Wang, Zhan Wu, Hengyong Yu
The model is free of convolution blocks and consists of a symmetric encoder-decoder built solely from transformer blocks.
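A minimal convolution-free sketch in the same spirit, assuming standard PyTorch transformer layers, linear patch embedding/un-embedding, and placeholder sizes; this is an illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class TransformerDenoiser(nn.Module):
    """Symmetric encoder-decoder denoiser with no convolutions: linear patch
    embedding -> transformer encoder -> transformer decoder -> linear un-embedding.
    Positional encoding is omitted for brevity."""
    def __init__(self, patch=8, dim=128, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(patch * patch, dim)           # patchify via linear projection
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.decoder = nn.TransformerEncoder(layer, depth)   # symmetric counterpart
        self.unembed = nn.Linear(dim, patch * patch)

    def forward(self, x):                                    # x: (B, 1, H, W)
        b, _, h, w = x.shape
        p = self.patch
        tokens = x.unfold(2, p, p).unfold(3, p, p).reshape(b, -1, p * p)
        z = self.decoder(self.encoder(self.embed(tokens)))
        out = self.unembed(z).reshape(b, h // p, w // p, p, p)
        out = out.permute(0, 1, 3, 2, 4).reshape(b, 1, h, w)
        return x - out                                       # predict the noise residual

noisy = torch.randn(2, 1, 64, 64)
print(TransformerDenoiser()(noisy).shape)                    # torch.Size([2, 1, 64, 64])
```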
1 code implementation • 22 Nov 2018 • Fenglei Fan, Dayang Wang, Hengtao Guo, Qikui Zhu, Pingkun Yan, Ge Wang, Hengyong Yu
In this paper, we investigate the expressivity and generalizability of a novel sparse shortcut topology.
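The snippet does not define the topology itself, so the following is a hypothetical reading offered only as illustration: each hidden layer adds one direct shortcut to the output head in addition to feeding the next layer, keeping the shortcut pattern sparse while giving every layer a short gradient path.

```python
import torch
import torch.nn as nn

class SparseShortcutNet(nn.Module):
    """Illustrative sparse-shortcut network (an assumption, not the paper's exact
    topology): every hidden layer contributes to the output through one shortcut."""
    def __init__(self, in_dim=32, hidden=64, depth=6, out_dim=10):
        super().__init__()
        dims = [in_dim] + [hidden] * depth
        self.layers = nn.ModuleList([nn.Linear(dims[i], dims[i + 1]) for i in range(depth)])
        self.shortcuts = nn.ModuleList([nn.Linear(hidden, out_dim) for _ in range(depth)])

    def forward(self, x):
        out = 0
        for layer, shortcut in zip(self.layers, self.shortcuts):
            x = torch.relu(layer(x))
            out = out + shortcut(x)   # each layer's features reach the output directly
        return out

print(SparseShortcutNet()(torch.randn(4, 32)).shape)  # torch.Size([4, 10])
```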