Search Results for author: Muhammad Adi Nugroho

Found 6 papers, 1 paper with code

Flow-Assisted Motion Learning Network for Weakly-Supervised Group Activity Recognition

no code implementations • 28 May 2024 • Muhammad Adi Nugroho, Sangmin Woo, Sumin Lee, Jinyoung Park, Yooseung Wang, Donguk Kim, Changick Kim

The first pathway of the relation module, the actor-centric path, initially captures the temporal dynamics of individual actors and then constructs inter-actor relationships.
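
As a rough illustration of this two-stage idea (not the authors' code), the PyTorch sketch below models each actor's features over time with a GRU and then relates actors to one another with self-attention; the module name, feature sizes, and the GRU/attention choices are all assumptions.

```python
import torch
import torch.nn as nn

class ActorCentricPath(nn.Module):
    """Hypothetical sketch: per-actor temporal modeling, then inter-actor relations."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # Temporal dynamics of each actor, modeled independently over time.
        self.temporal = nn.GRU(dim, dim, batch_first=True)
        # Inter-actor relationships via self-attention across actors.
        self.relation = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (batch, actors, time, dim) actor feature tracks
        b, n, t, d = x.shape
        h, _ = self.temporal(x.reshape(b * n, t, d))  # per-actor temporal encoding
        actor = h[:, -1].reshape(b, n, d)             # one summary vector per actor
        out, _ = self.relation(actor, actor, actor)   # relations across actors
        return out                                    # (batch, actors, dim)

feats = torch.randn(2, 5, 16, 256)      # 2 clips, 5 actors, 16 frames
print(ActorCentricPath()(feats).shape)  # torch.Size([2, 5, 256])
```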

Modality Mixer Exploiting Complementary Information for Multi-modal Action Recognition

no code implementations • 21 Nov 2023 • Sumin Lee, Sangmin Woo, Muhammad Adi Nugroho, Changick Kim

CFEM incorporates separate learnable query embeddings for each modality, which guide CFEM to extract complementary information and global action content from the other modalities.

Action Recognition
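
The snippet above only names the mechanism, so here is a minimal PyTorch sketch of the learnable-query idea: a set of modality-specific query vectors cross-attends over another modality's features to pull out complementary content. The class name, query count, and dimensions are illustrative assumptions, not CFEM's actual design.

```python
import torch
import torch.nn as nn

class ComplementaryExtractor(nn.Module):
    """Hypothetical sketch: modality-specific learnable queries attend over
    the *other* modality's features to extract complementary information."""
    def __init__(self, dim=256, heads=4, n_queries=4):
        super().__init__()
        # One set of learnable queries per modality (this module is one such set).
        self.queries = nn.Parameter(torch.randn(n_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, other_modality):
        # other_modality: (batch, tokens, dim) features from the other modality
        b = other_modality.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        out, _ = self.cross_attn(q, other_modality, other_modality)
        return out  # (batch, n_queries, dim) complementary features

rgb_from_depth = ComplementaryExtractor()  # e.g. RGB branch querying depth features
depth_feats = torch.randn(2, 49, 256)
print(rgb_from_depth(depth_feats).shape)   # torch.Size([2, 4, 256])
```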

Audio-Visual Glance Network for Efficient Video Recognition

no code implementations • ICCV 2023 • Muhammad Adi Nugroho, Sangmin Woo, Sumin Lee, Changick Kim

To address this issue, we propose Audio-Visual Glance Network (AVGN), which leverages the commonly available audio and visual modalities to efficiently process the spatio-temporally important parts of a video.

Video Recognition • Video Understanding
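
A toy sketch of the glance-style selection described above, assuming (beyond what the snippet states) that cheap per-frame audio and coarse visual features are fused to score frame saliency, and that only the top-k frames are passed on to an expensive backbone. All names and sizes here are hypothetical.

```python
import torch
import torch.nn as nn

class GlanceSelector(nn.Module):
    """Hypothetical sketch: score each frame from cheap audio and coarse visual
    features, then keep only the top-k frames for heavy processing."""
    def __init__(self, dim=128, k=4):
        super().__init__()
        self.k = k
        self.score = nn.Linear(2 * dim, 1)  # fuse audio + coarse visual per frame

    def forward(self, audio, visual_coarse, frames):
        # audio, visual_coarse: (batch, time, dim); frames: (batch, time, C, H, W)
        s = self.score(torch.cat([audio, visual_coarse], dim=-1)).squeeze(-1)
        idx = s.topk(self.k, dim=1).indices               # most salient frames
        batch = torch.arange(frames.size(0)).unsqueeze(1)
        return frames[batch, idx]                         # only these reach the heavy model

sel = GlanceSelector()
a, v = torch.randn(2, 16, 128), torch.randn(2, 16, 128)
f = torch.randn(2, 16, 3, 224, 224)
print(sel(a, v, f).shape)  # torch.Size([2, 4, 3, 224, 224])
```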

Modality Mixer for Multi-modal Action Recognition

no code implementations • 24 Aug 2022 • Sumin Lee, Sangmin Woo, Yeonju Park, Muhammad Adi Nugroho, Changick Kim

In multi-modal action recognition, it is important to consider not only the complementary nature of different modalities but also global action content.

Action Recognition

Test-time Adaptation for Real Image Denoising via Meta-transfer Learning

no code implementations • 5 Jul 2022 • Agus Gunawan, Muhammad Adi Nugroho, Se Jin Park

We explore a different direction, proposing to improve real image denoising performance through a better learning strategy that enables test-time adaptation of the multi-task network.

Auxiliary Learning • Image Denoising • +2
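
As a generic sketch of test-time adaptation for denoising (not the paper's meta-transfer recipe), the code below briefly fine-tunes a copy of the network on a self-supervised consistency objective built from the single noisy input; the auxiliary loss, noise level, and hyperparameters are all assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def test_time_adapt(model, noisy, steps=5, lr=1e-4):
    """Hypothetical sketch: adapt a copy of the denoiser to one test image
    using a self-supervised proxy loss, then denoise with the adapted copy."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        # Assumed auxiliary task: re-noise the model's own output and ask the
        # model to map it back, giving a consistency signal without clean data.
        with torch.no_grad():
            pseudo_clean = adapted(noisy)
        renoised = pseudo_clean + 0.05 * torch.randn_like(pseudo_clean)
        loss = F.l1_loss(adapted(renoised), pseudo_clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return adapted(noisy)

net = torch.nn.Conv2d(3, 3, 3, padding=1)            # stand-in denoiser
out = test_time_adapt(net, torch.rand(1, 3, 64, 64))  # adapted prediction
print(out.shape)  # torch.Size([1, 3, 64, 64])
```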
