no code implementations • 8 May 2024 • Renjie Liu, Yichuan Wang, Xiao Yan, Zhenkun Cai, Minjie Wang, Haitian Jiang, Bo Tang, Jinyang Li
In particular, by conducting graph sampling beforehand, DiskGNN learns which node features model computation will access, and uses this information to pack the target node features contiguously on disk, avoiding read amplification.
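The packing idea can be sketched as follows: given the node IDs that offline sampling determined will be accessed, copy their feature rows into a contiguous array so each disk read returns only useful bytes. This is a minimal illustration with NumPy; the variable names and layout are hypothetical, not DiskGNN's actual on-disk format.

```python
import numpy as np

NUM_NODES, FEAT_DIM = 1000, 16

# Full feature table in original node-ID order (stands in for the on-disk file).
features = np.arange(NUM_NODES * FEAT_DIM, dtype=np.float32).reshape(NUM_NODES, FEAT_DIM)

# Node IDs that graph sampling determined the model will access during training.
access_order = np.array([7, 431, 12, 998, 7, 55])
unique_ids = np.unique(access_order)

# Pack only the needed rows contiguously, and record the old -> new row mapping.
packed = features[unique_ids]
remap = {int(old): new for new, old in enumerate(unique_ids)}

# At training time, reading node 431 now touches one densely packed row,
# rather than a disk block mostly filled with features of unrelated nodes.
row = packed[remap[431]]
assert np.array_equal(row, features[431])
```

The win comes from alignment between access pattern and storage layout: without packing, fetching scattered rows pulls in whole disk blocks of unneeded neighbors (read amplification); after packing, sequential reads over `packed` serve the sampled workload directly.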
no code implementations • 19 Oct 2023 • Haitian Jiang, Renjie Liu, Xiao Yan, Zhenkun Cai, Minjie Wang, David Wipf
Among the many variants of graph neural network (GNN) architectures capable of modeling data with cross-instance relations, an important subclass involves layers designed such that the forward pass iteratively reduces a graph-regularized energy function of interest.
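Concretely, a layer in this subclass can be viewed as one gradient step on a graph-regularized energy such as E(Y) = ||Y − X||²_F + λ·tr(YᵀLY), where X holds the input features and L is the graph Laplacian. The sketch below is illustrative (the specific energy, step size, and graph are assumptions, not the paper's exact formulation), but it shows the defining property: stacking layers monotonically reduces the energy.

```python
import numpy as np

def laplacian(adj):
    # Unnormalized graph Laplacian L = D - A.
    return np.diag(adj.sum(axis=1)) - adj

def energy(Y, X, L, lam):
    # E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y)
    return np.sum((Y - X) ** 2) + lam * np.trace(Y.T @ L @ Y)

def forward_layer(Y, X, L, lam, step):
    # One GNN layer = one gradient-descent step on E:
    # dE/dY = 2(Y - X) + 2*lam*L @ Y
    grad = 2.0 * (Y - X) + 2.0 * lam * (L @ Y)
    return Y - step * grad

# Tiny path graph on 3 nodes, 4-dimensional features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L = laplacian(adj)
X = np.random.default_rng(0).normal(size=(3, 4))
lam, step = 0.5, 0.1

Y = X.copy()
energies = [energy(Y, X, L, lam)]
for _ in range(10):  # ten "layers" of the forward pass
    Y = forward_layer(Y, X, L, lam, step)
    energies.append(energy(Y, X, L, lam))

# For a small enough step, each layer of the forward pass lowers the energy.
assert all(e1 <= e0 + 1e-9 for e0, e1 in zip(energies, energies[1:]))
```

The attraction of this view is interpretability: the architecture's depth corresponds to iterations of an optimizer, so the fixed point the network approaches is characterized by the energy rather than by opaque layer weights.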
1 code implementation • NeurIPS 2023 • Qitian Wu, Wentao Zhao, Chenxiao Yang, Hengrui Zhang, Fan Nie, Haitian Jiang, Yatao Bian, Junchi Yan
Learning representations on large-sized graphs is a long-standing challenge due to the inter-dependent nature of the massive number of data points involved.
no code implementations • 24 Apr 2023 • Haitian Jiang, Dongliang Xiong, Xiaowen Jiang, Li Ding, Liang Chen, Kai Huang
In this paper, we propose a fast and structure-aware halftoning method via a data-driven approach.
no code implementations • 18 Jan 2023 • Kezhao Huang, Haitian Jiang, Minjie Wang, Guangxuan Xiao, David Wipf, Xiang Song, Quan Gan, Zengfeng Huang, Jidong Zhai, Zheng Zhang
A key performance bottleneck when training graph neural network (GNN) models on large, real-world graphs is loading node features onto a GPU.
no code implementations • ICCV 2023 • Liyuan Ma, Tingwei Gao, Haitian Jiang, Haibin Shen, Kejie Huang
To leverage the advantages of both attention and flow simultaneously, we propose Wavelet-aware Image-based Pose Transfer (WaveIPT) to fuse the attention and flow in the wavelet domain.
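A toy illustration of wavelet-domain fusion: decompose two candidate outputs with a single-level 2-D Haar transform, take the low-frequency (LL) sub-band from one source and the high-frequency (LH/HL/HH) sub-bands from the other, then invert. The transform here is a minimal hand-rolled Haar; WaveIPT's actual fusion rule between flow and attention branches is learned and more elaborate, so treat this only as a sketch of the mechanism.

```python
import numpy as np

def haar2d(img):
    # Single-level 2-D Haar transform on an even-sized image:
    # returns low-frequency (LL) and high-frequency (LH, HL, HH) sub-bands.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a + b - c - d) / 4
    hl = (a - b + c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh

def inv_haar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll + lh - hl - hh
    img[1::2, 0::2] = ll - lh + hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

rng = np.random.default_rng(0)
flow_out = rng.normal(size=(8, 8))   # hypothetical flow-branch output
attn_out = rng.normal(size=(8, 8))   # hypothetical attention-branch output

# Fuse: coarse structure from the flow branch, fine detail from attention.
ll_f, _, _, _ = haar2d(flow_out)
_, lh_a, hl_a, hh_a = haar2d(attn_out)
fused = inv_haar2d(ll_f, lh_a, hl_a, hh_a)
```

Working in the wavelet domain makes the split explicit: each branch contributes the frequency band where it is strongest, instead of blending whole images pixel-wise.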
no code implementations • 23 Jul 2022 • Haitian Jiang, Dongliang Xiong, Xiaowen Jiang, Aiguo Yin, Li Ding, Kai Huang
Deep neural networks have recently succeeded in digital halftoning using vanilla convolutional layers with high parallelism.