Search Results for author: Semin Kim

Found 3 papers, 1 paper with code

Chameleon: A Data-Efficient Generalist for Dense Visual Prediction in the Wild

no code implementations • 29 Apr 2024 • Donggyun Kim, Seongwoong Cho, Semin Kim, Chong Luo, Seunghoon Hong

In this study, we explore a universal model that can flexibly adapt to unseen dense label structures with a few examples, enabling it to serve as a data-efficient vision generalist in diverse real-world scenarios.

Meta-Learning
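
The abstract above describes adapting to unseen dense label structures from only a few labeled examples. As a rough illustration only (not the authors' Chameleon implementation), the sketch below shows one hypothetical way a few-shot dense-prediction episode could be handled: query pixels are labeled by soft-matching their features against support-set pixels. The function name, feature dimensions, and label channels are all invented for this example.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of few-shot dense prediction (not the paper's code):
# transfer labels from a small support set to query pixels via feature matching.
def match_and_label(support_feat, support_lbl, query_feat):
    """support_feat: (N, C, H, W), support_lbl: (N, K, H, W), query_feat: (M, C, H, W)."""
    N, C, H, W = support_feat.shape
    K = support_lbl.shape[1]
    s_feat = support_feat.permute(0, 2, 3, 1).reshape(-1, C)   # (N*H*W, C)
    s_lbl = support_lbl.permute(0, 2, 3, 1).reshape(-1, K)     # (N*H*W, K)
    q_feat = query_feat.permute(0, 2, 3, 1).reshape(-1, C)     # (M*H*W, C)
    attn = F.softmax(q_feat @ s_feat.T / C ** 0.5, dim=-1)     # pixel-to-pixel similarity
    q_lbl = attn @ s_lbl                                       # label transfer
    return q_lbl.reshape(query_feat.shape[0], H, W, K).permute(0, 3, 1, 2)

# Toy episode: 5 support examples of a new task with a 7-channel label structure.
preds = match_and_label(torch.randn(5, 16, 32, 32),
                        torch.randn(5, 7, 32, 32),
                        torch.randn(2, 16, 32, 32))
print(preds.shape)  # torch.Size([2, 7, 32, 32])
```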

Utilizing Neural Transducers for Two-Stage Text-to-Speech via Semantic Token Prediction

no code implementations • 3 Jan 2024 • Minchan Kim, Myeonghun Jeong, Byoung Jin Choi, Semin Kim, Joun Yeop Lee, Nam Soo Kim

We also delve into the inference speed and prosody control capabilities of our approach, highlighting the potential of neural transducers in TTS frameworks.

Towards End-to-End Generative Modeling of Long Videos with Memory-Efficient Bidirectional Transformers

1 code implementation • CVPR 2023 • Jaehoon Yoo, Semin Kim, Doyup Lee, Chiheon Kim, Seunghoon Hong

However, transformers are prevented from directly learning long-term dependencies in videos by the quadratic complexity of self-attention, and they inherently suffer from slow inference and error propagation due to the autoregressive process.

Video Generation
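
The abstract above points to the quadratic cost of self-attention as the obstacle to modeling long videos. A back-of-the-envelope sketch (illustrative only; token counts and byte sizes are assumptions, not figures from the paper) makes the scaling concrete:

```python
# Why full self-attention over video tokens becomes prohibitive:
# the attention score matrix grows quadratically with sequence length.
def attention_scores_memory(num_frames, tokens_per_frame=256, bytes_per_score=4):
    """Memory for one head's score matrix over a flattened video sequence, in bytes."""
    seq_len = num_frames * tokens_per_frame
    return seq_len ** 2 * bytes_per_score

for frames in (16, 64, 256):
    gib = attention_scores_memory(frames) / 2 ** 30
    print(f"{frames:4d} frames -> {gib:8.2f} GiB per attention map")
# Quadrupling the clip length multiplies the attention cost by 16, which is why
# long-video transformers need memory-efficient alternatives to dense attention.
```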
