no code implementations • 7 Dec 2023 • Jongmin Yu, Hyeontaek Oh, Jinhong Yang
By adding explicit adversarial learning on data samples, ADDM can learn the semantic characteristics of the data more robustly during training, achieving sampling performance similar to DDPM's with far fewer sampling steps.
1 code implementation • 28 Oct 2021 • Jongmin Yu, Hyeontaek Oh, Minkyung Kim, Junsik Kim
In this paper, we propose the Normality-Calibrated Autoencoder (NCAE), which can boost anomaly detection performance on contaminated datasets without any prior information or explicit abnormal samples in the training phase.
1 code implementation • 14 Sep 2021 • Jongmin Yu, Junsik Kim, Minkyung Kim, Hyeontaek Oh
However, this achievement requires large-scale and well-annotated datasets.
1 code implementation • 16 Jun 2021 • Jongmin Yu, Hyeontaek Oh
The proposed GSMLP and SMLC boost the performance of unsupervised person Re-ID without any pre-labelled dataset.
1 code implementation • 3 Mar 2021 • Jongmin Yu, Hyeontaek Oh
The results of DPLM are applied to a dictionary-based triplet loss (DTL) to improve the discriminative power of the learnt features and to progressively refine the quality of DPLM's results.
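The dictionary-based triplet loss builds on the standard triplet objective; the snippet above does not describe how DTL selects pairs from the dictionary, so the sketch below shows only the generic triplet loss it extends, with hypothetical array-valued inputs:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss (a sketch, not the paper's DTL):
    pull the anchor toward the positive, push it from the negative
    until their squared-distance gap exceeds the margin."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # anchor-negative distance
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())
```

When the negative is already farther than the positive by more than the margin, the loss is zero, so only hard or semi-hard triplets contribute gradient.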
no code implementations • 20 Oct 2019 • Jongmin Yu, Hyeontaek Oh
To this end, we propose an evaluation metric for weight separability, based on the semi-orthogonality of a matrix and the Frobenius distance, together with a feed-backward reconstruction loss that explicitly encourages weight separability between the column vectors of the weight matrix.
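A semi-orthogonal matrix satisfies W^T W = I, so one natural way to read the proposed metric is as the Frobenius distance between W^T W and the identity; the function name and exact normalisation below are assumptions, not the paper's definition:

```python
import numpy as np

def weight_separability(W: np.ndarray) -> float:
    """Hypothetical sketch of a weight-separability metric: how far the
    column vectors of W are from being semi-orthogonal, measured as the
    Frobenius distance between the Gram matrix W^T W and the identity."""
    gram = W.T @ W                       # pairwise inner products of columns
    k = gram.shape[0]
    return float(np.linalg.norm(gram - np.eye(k), ord="fro"))
```

Orthonormal columns give a score of 0; correlated or unnormalised columns push the score up, so minimising it encourages separable weights.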