1 code implementation • 2 Apr 2024 • Minhyuk Seo, Hyunseo Koh, Wonje Jeung, Minjae Lee, San Kim, Hankook Lee, Sungjun Cho, Sungik Choi, Hyunwoo Kim, Jonghyun Choi
Online continual learning suffers from an underfitted solution due to insufficient training for prompt model updates (e.g., single-epoch training).
no code implementations • 8 Sep 2023 • Sungjun Cho, Seunghyuk Cho, Sungwoo Park, Hankook Lee, Honglak Lee, Moontae Lee
Real-world graphs naturally exhibit hierarchical or cyclical structures that are unfit for the typical Euclidean space.
no code implementations • 8 Sep 2023 • Sungjun Cho, Dae-Woong Jeong, Sung Moon Ko, Jinwoo Kim, Sehui Han, Seunghoon Hong, Honglak Lee, Moontae Lee
Pretraining molecular representations from large unlabeled data is essential for molecular property prediction due to the high cost of obtaining ground-truth labels.
no code implementations • 2nd Annual Topology, Algebra, and Geometry in Machine Learning Workshop 2023 • Sungjun Cho, Seunghyuk Cho, Sungwoo Park, Hankook Lee, Honglak Lee, Moontae Lee
Real-world graphs naturally exhibit hierarchical or cyclical structures that are unfit for the typical Euclidean space.
no code implementations • 27 Jan 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee
Since the recent advent of regulations for data protection (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch.
1 code implementation • 27 Oct 2022 • Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong
The forward and backward costs are thus linear in the number of edges, which each attention head can also choose flexibly based on the input.
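The linear-in-edges cost can be illustrated with a minimal sketch of attention restricted to a fixed edge set. This is a simplified illustration of the general idea, not the paper's exact mechanism; `edge_attention` is a hypothetical helper, and the per-edge scoring and per-node softmax are stand-ins for a learned attention head.

```python
import numpy as np
from collections import defaultdict

def edge_attention(x, edges):
    # x: (n, d) node features; edges: iterable of (src, dst) pairs.
    # Scores are computed per edge only, so the cost is O(|E| * d)
    # rather than the dense O(n^2 * d) of full self-attention.
    n, d = x.shape
    incoming = defaultdict(list)
    for s, t in edges:
        incoming[t].append(s)
    out = np.zeros_like(x)
    for dst in range(n):
        srcs = incoming[dst]
        if not srcs:
            out[dst] = x[dst]  # node with no incoming edges: pass through
            continue
        scores = np.array([x[s] @ x[dst] for s in srcs]) / np.sqrt(d)
        w = np.exp(scores - scores.max())  # stable softmax over incoming edges
        w /= w.sum()
        out[dst] = sum(wi * x[s] for wi, s in zip(w, srcs))
    return out
```

Because both the scoring loop and the aggregation touch each edge exactly once, sparsifying the edge set directly sparsifies the computation.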
no code implementations • 26 Sep 2022 • Hyunjae Lee, Gihyeon Lee, Junhwan Kim, Sungjun Cho, Dohyun Kim, Donggeun Yoo
However, it often results in selecting a sub-optimal configuration, as training with the high-performing configuration typically converges slowly in the early phase.
no code implementations • 7 Sep 2022 • Sung Moon Ko, Sungjun Cho, Dae-Woong Jeong, Sehui Han, Moontae Lee, Honglak Lee
Conventional methods ask users to specify an appropriate number of clusters as a hyperparameter, then assume that all input graphs share the same number of clusters.
1 code implementation • 22 Aug 2022 • Jinwoo Kim, Saeyoon Oh, Sungjun Cho, Seunghoon Hong
Many problems in computer vision and machine learning can be cast as learning on hypergraphs that represent higher-order relations.
1 code implementation • 6 Jul 2022 • Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.
Ranked #15 on Graph Regression on PCQM4Mv2-LSC
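One way a standard Transformer can consume a graph without graph-specific modifications is to flatten it into a plain token set: one token per node and one per edge, each augmented with orthonormal node identifiers that encode connectivity. The sketch below illustrates only this tokenization step; `graph_to_tokens` is a hypothetical helper, and details such as type embeddings and batching are omitted.

```python
import numpy as np

def graph_to_tokens(node_feats, edges, edge_feats, id_dim):
    # node_feats: (n, f); edges: list of (src, dst); edge_feats: (|E|, f).
    # Each node gets an orthonormal identifier; a node token carries its
    # own identifier twice, an edge token carries its endpoints' identifiers.
    n, _ = node_feats.shape
    assert id_dim >= n, "need at least one identifier dimension per node"
    # QR of a square Gaussian matrix gives an orthogonal matrix; its rows
    # serve as orthonormal node identifiers.
    q, _ = np.linalg.qr(np.random.randn(id_dim, id_dim))
    node_id = q[:n]
    node_tokens = np.concatenate([node_feats, node_id, node_id], axis=1)
    src = [s for s, _ in edges]
    dst = [d for _, d in edges]
    edge_tokens = np.concatenate([edge_feats, node_id[src], node_id[dst]], axis=1)
    # The stacked (n + |E|, f + 2 * id_dim) token set can then be fed to an
    # off-the-shelf Transformer encoder with no structural modifications.
    return np.concatenate([node_tokens, edge_tokens], axis=0)
```

Inner products between identifier slots let attention recover which tokens share an endpoint, which is what makes a vanilla Transformer viable on graph input.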
no code implementations • CVPR 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Sunwon Hong, Moontae Lee, Taesup Moon
The main reason their method is ineffective is that it does not fully address the data imbalance issue, especially when computing the gradients for learning the affine transformation parameters of BN.
no code implementations • 12 Nov 2021 • Moontae Lee, Sungjun Cho, Kun Dong, David Mimno, David Bindel
Across many data domains, co-occurrence statistics about the joint appearance of objects are powerfully informative.
no code implementations • IJCNLP 2019 • Moontae Lee, Sungjun Cho, David Bindel, David Mimno
Despite great scalability on large data and their ability to understand correlations between topics, spectral topic models have not been widely used due to the absence of reliability in real data and lack of practical implementations.