1 code implementation • NAACL (ACL) 2022 • Seung Byum Seo, Hyoungwook Nam, Payam Delgosha
While there have been advances in Natural Language Processing (NLP), their success has mainly been achieved by applying self-attention mechanisms to single or multiple modalities.
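The self-attention mechanism referenced here can be summarized as scaled dot-product attention, where each position in a sequence attends to every other position. A minimal NumPy sketch (weight names and dimensions are illustrative, not taken from the paper):

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (n, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (n, n) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # each position mixes all values

rng = np.random.default_rng(0)
n, d = 4, 8
x = rng.normal(size=(n, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```

The output keeps the input's shape: each row is a convex combination of the value vectors, weighted by query-key similarity.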
no code implementations • 18 Feb 2023 • Hyoungwook Nam, Seung Byum Seo
We propose a novel perspective of the attention mechanism by reinventing it as a memory architecture for neural networks, namely Neural Attention Memory (NAM).
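One way to see attention as a memory architecture is the linear-attention view, where key-value pairs are written into a matrix as outer products and read back with a query. The sketch below illustrates that general idea only; the class name and details are hypothetical and not NAM's actual design:

```python
import numpy as np

class OuterProductMemory:
    """Toy associative memory: write key-value pairs as outer products,
    read with a matrix-vector product against a query vector."""
    def __init__(self, d_key, d_val):
        self.M = np.zeros((d_val, d_key))

    def write(self, key, value):
        self.M += np.outer(value, key)   # accumulate the association value * key^T

    def read(self, query):
        return self.M @ query            # recovers value when query matches a stored key

mem = OuterProductMemory(d_key=3, d_val=2)
k = np.array([1.0, 0.0, 0.0])            # orthonormal keys give exact recall
v = np.array([5.0, -2.0])
mem.write(k, v)
print(mem.read(k))  # [ 5. -2.]
```

With orthonormal keys, reads recover stored values exactly; overlapping keys produce interference, which is one motivation for treating attention reads and writes as explicit memory operations.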
no code implementations • 3 Feb 2023 • Hyoungwook Nam, Raghavendra Pradyumna Pothukuchi, Bo Li, Nam Sung Kim, Josep Torrellas
To address this problem, this paper explores using Adversarial Machine Learning (AML) methods as a defense at the computer architecture layer to obfuscate side channels.
1 code implementation • 13 Jan 2021 • Segwang Kim, Hyoungwook Nam, Joonyoung Kim, Kyomin Jung
Logical reasoning tasks over symbols, such as learning arithmetic operations and evaluating computer programs, remain challenging for deep learning.
1 code implementation • 18 Jun 2020 • Hyoungwook Nam, Seung Byum Seo, Vikram Sharma Mailthody, Noor Michael, Lan Li
The model inductively generalizes on a variety of algorithmic tasks where state-of-the-art Transformer models fail to do so.
no code implementations • 19 May 2018 • Hyoungwook Nam, Segwang Kim, Kyomin Jung
We define the complexity and difficulty of a number sequence prediction task with the structure of the smallest automaton that can generate the sequence.
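To make this measure concrete: under it, a periodic sequence is "simple" because a small finite-state machine can generate it. A hedged sketch (the generator below is a hypothetical illustration, not the paper's formal definition):

```python
def automaton_sequence(transitions, outputs, start, n):
    """Generate n terms from a finite-state machine:
    transitions[state] -> next state, outputs[state] -> emitted symbol."""
    seq, state = [], start
    for _ in range(n):
        seq.append(outputs[state])
        state = transitions[state]
    return seq

# A 3-state cycle suffices to generate 0, 1, 2, 0, 1, 2, ... so this sequence
# is simpler, under the smallest-automaton measure, than one requiring
# unbounded memory such as a counter.
print(automaton_sequence({0: 1, 1: 2, 2: 0}, {0: 0, 1: 1, 2: 2}, 0, 7))
# [0, 1, 2, 0, 1, 2, 0]
```

Sequences that no finite-state machine can generate (e.g., those needing a counter or a stack) then sit higher in the difficulty hierarchy.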