no code implementations • 1 May 2024 • Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
This paper investigates the effectiveness of self-supervised pre-trained transformers compared to supervised pre-trained transformers and conventional neural networks (ConvNets) for detecting various types of deepfakes.
no code implementations • 13 Feb 2024 • AprilPyone MaungMaung, Huy H. Nguyen, Hitoshi Kiya, Isao Echizen
To this end, we utilize an existing approach of personalizing large-scale text-to-image diffusion models with available discovered spurious images and propose a new spurious feature similarity loss based on neural features of an adversarially robust model.
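As a rough illustration, such a loss could be computed as cosine similarity between the features of generated images and known spurious images under a robust encoder; the sketch below is illustrative only, with all names (`robust_encoder`, etc.) assumed rather than taken from the paper.

```python
# Minimal sketch (illustrative, not the authors' implementation): a
# spurious-feature similarity loss that pulls generated images toward the
# neural features of known spurious images, using a feature extractor
# taken from an adversarially robust model.
import torch.nn.functional as F

def spurious_feature_similarity_loss(robust_encoder, generated, spurious):
    """Cosine-similarity loss between robust features of generated images
    and spurious reference images. Both inputs are batches (N, C, H, W)."""
    f_gen = robust_encoder(generated)          # features of generated images
    f_spu = robust_encoder(spurious).detach()  # fixed target features
    # Maximizing similarity = minimizing (1 - cosine similarity)
    return (1.0 - F.cosine_similarity(f_gen.flatten(1), f_spu.flatten(1))).mean()
```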
no code implementations • 29 Jan 2024 • Fatma Shalabi, Huy H. Nguyen, Hichem Felouat, Ching-Chun Chang, Isao Echizen
Misinformation has become a major challenge in the era of increasing digital information, requiring the development of effective detection methods.
no code implementations • 22 Jan 2024 • Fatma Shalabi, Hichem Felouat, Huy H. Nguyen, Isao Echizen
In this paper, we investigate the ability of LVLMs to detect multimodal OOC and show that these models cannot achieve high accuracy on OOC detection tasks without fine-tuning.
no code implementations • 16 Jan 2024 • Zhicheng Dou, Yuchen Guo, Ching-Chun Chang, Huy H. Nguyen, Isao Echizen
In this paper, we present a comprehensive analysis of the impact of prompts on the text generated by LLMs and highlight the potential lack of robustness in one of the current state-of-the-art GPT detectors.
1 code implementation • 12 Jan 2024 • Folco Bertini Baldassini, Huy H. Nguyen, Ching-Chun Chang, Isao Echizen
A new approach to linguistic watermarking of language models is presented in which information is imperceptibly inserted into the output text while preserving its readability and original meaning.
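For intuition only, a textbook way to hide bits in text is meaning-preserving lexical substitution; the toy sketch below illustrates that general idea and is not the paper's actual embedding method.

```python
# Illustrative sketch of the general idea behind linguistic watermarking:
# hide bits in text via meaning-preserving lexical choices. This is a
# textbook synonym-substitution scheme, not the paper's method.
SYNONYMS = {"big": "large", "quick": "fast", "begin": "start"}  # toy table

def embed_bits(text: str, bits: list[int]) -> str:
    out, i = [], 0
    for word in text.split():
        if word in SYNONYMS and i < len(bits):
            # bit 0 keeps the original word, bit 1 swaps in its synonym
            out.append(SYNONYMS[word] if bits[i] else word)
            i += 1
        else:
            out.append(word)
    return " ".join(out)

print(embed_bits("the quick fox will begin", [1, 0]))
# -> "the fast fox will begin"
```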
no code implementations • 11 Jan 2024 • Barry Shichen Hu, Siyun Liang, Johannes Paetzold, Huy H. Nguyen, Isao Echizen, Jiapeng Tang
To avoid these limitations, we first unify the design choices in previous works and then propose a simplified Transformer-based model to extract richer and more robust geometric features for the surface normal estimation task.
no code implementations • 13 Dec 2023 • Yuyang Sun, Huy H. Nguyen, Chun-Shien Lu, Zhiyong Zhang, Lu Sun, Isao Echizen
The growing diversity of digital face manipulation techniques has led to an urgent need for a universal and robust detection technology to mitigate the risks posed by malicious forgeries.
no code implementations • 2 Oct 2023 • Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
In this paper, we challenge the conventional belief that supervised ImageNet-trained models have strong generalizability and are suitable for use as feature extractors in deepfake detection.
no code implementations • 27 Sep 2023 • Lukas Strack, Futa Waseda, Huy H. Nguyen, Yinqiang Zheng, Isao Echizen
To address this problem, we are the first to investigate defense strategies against adversarial patch attacks on infrared detection, especially human detection.
no code implementations • 7 Dec 2022 • Yuyang Sun, Zhiyong Zhang, Isao Echizen, Huy H. Nguyen, Changzhen Qiu, Lu Sun
We introduce a method for detecting manipulated videos that is based on the trajectory of the facial region displacement.
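The core signal can be pictured as the per-frame displacement of the detected face region; the sketch below assumes a placeholder `detect_face` function and omits the downstream classifier.

```python
# Sketch of the core signal: the trajectory of facial-region displacement
# across frames. `detect_face` stands in for any face detector returning
# a bounding box (x, y, w, h); the classifier on top is omitted.
import numpy as np

def face_displacement_trajectory(frames, detect_face):
    centers = []
    for frame in frames:
        x, y, w, h = detect_face(frame)
        centers.append((x + w / 2.0, y + h / 2.0))
    centers = np.asarray(centers)    # (T, 2) face center per frame
    return np.diff(centers, axis=0)  # (T-1, 2) frame-to-frame displacement
```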
no code implementations • 18 Oct 2022 • Huy H. Nguyen, Trung-Nghia Le, Junichi Yamagishi, Isao Echizen
The results raise the alarm about the robustness of such systems and suggest that master vein attacks should be treated as a serious security threat.
no code implementations • 28 Jun 2022 • Trung-Nghia Le, Ta Gu, Huy H. Nguyen, Isao Echizen
We have investigated a new application of adversarial examples, namely location privacy protection against landmark recognition systems.
no code implementations • 29 Dec 2021 • Futa Waseda, Sosuke Nishikawa, Trung-Nghia Le, Huy H. Nguyen, Isao Echizen
Deep neural networks are vulnerable to adversarial examples (AEs), which have adversarial transferability: AEs generated for the source model can mislead another (target) model's predictions.
no code implementations • 25 Nov 2021 • Khanh-Duy Nguyen, Huy H. Nguyen, Trung-Nghia Le, Junichi Yamagishi, Isao Echizen
However, there is still a lack of comprehensive research on both methodologies and datasets.
no code implementations • 8 Sep 2021 • Huy H. Nguyen, Sébastien Marcel, Junichi Yamagishi, Isao Echizen
Previous work has proven the existence of master faces, i.e., faces that match multiple enrolled templates in face recognition systems, and their existence expands the capabilities of presentation attacks.
no code implementations • ICCV 2021 • Trung-Nghia Le, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
To promote these new tasks, we have created OpenForensics, the first large-scale, highly challenging dataset designed with rich face-wise annotations explicitly for face forgery detection and segmentation.
1 code implementation • 17 Apr 2021 • Marc Treu, Trung-Nghia Le, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
It generates adversarial textures learned from fashion style images and then overlays them on the clothing regions in the original image to make all persons in the image invisible to person segmentation networks.
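The overlay step itself reduces to masked compositing; the minimal sketch below assumes a precomputed clothing mask and an already-optimized texture, both of which the paper learns rather than assumes.

```python
# Illustrative overlay step: paste an adversarial texture onto the clothing
# region given a binary segmentation mask. The texture itself would be
# optimized against the target person-segmentation network (omitted here).
def overlay_texture(image, texture, clothing_mask):
    """image, texture: (3, H, W) tensors in [0, 1];
    clothing_mask: (1, H, W) binary tensor marking clothing pixels."""
    return image * (1 - clothing_mask) + texture * clothing_mask
```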
no code implementations • EMNLP (NLP+CSS) 2020 • Saurabh Gupta, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
Recent advancements in natural language generation have raised serious concerns.
no code implementations • 15 Jun 2020 • Huy H. Nguyen, Junichi Yamagishi, Isao Echizen, Sébastien Marcel
In this work, we demonstrated that wolf (generic) faces, which we call "master faces," can also compromise face recognition systems and that the master face concept can be generalized in some cases.
no code implementations • 11 Dec 2019 • Huy H. Nguyen, Minoru Kuribayashi, Junichi Yamagishi, Isao Echizen
Deep neural networks (DNNs) have achieved excellent performance on several tasks and have been widely applied in both academia and industry.
no code implementations • 2 Nov 2019 • Rong Huang, Fuming Fang, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
The rapid development of deep learning techniques has created new challenges in identifying the origin of digital images because generative adversarial networks and variational autoencoders can create plausible digital images whose contents are not present in natural scenes.
no code implementations • 2 Nov 2019 • Rong Huang, Fuming Fang, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
We experimentally demonstrated the existence of individual adversarial perturbations (IAPs) and universal adversarial perturbations (UAPs) that can lead a well-performed FFM to misbehave.
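To make the IAP/UAP distinction concrete, the sketch below shows a textbook FGSM-style individual perturbation (one delta per input); a universal perturbation would instead aggregate a single delta across many inputs. This is standard FGSM, not the paper's specific attack.

```python
# Minimal FGSM-style sketch of an *individual* adversarial perturbation:
# a separate delta is computed for each input from its own gradient.
import torch

def individual_perturbation(model, x, y, loss_fn, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return eps * x.grad.sign()  # added to x to induce misbehavior
```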
2 code implementations • 28 Oct 2019 • Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
In this paper, we introduce a capsule network that can detect various kinds of attacks, from presentation attacks using printed images and replayed videos to attacks using fake videos created using deep learning.
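For background, the building block of such a network is the capsule layer with dynamic routing (Sabour et al., 2017); the sketch below shows that textbook routing step only, not the authors' Capsule-Forensics architecture.

```python
# Textbook dynamic-routing step between capsule layers, shown only to
# illustrate the building block. u_hat holds prediction vectors with
# shape (N, out_caps, in_caps, out_dim).
import torch
import torch.nn.functional as F

def squash(s, dim=-1):
    n2 = (s ** 2).sum(dim, keepdim=True)
    return n2 / (1 + n2) * s / (n2.sqrt() + 1e-8)

def route(u_hat, iterations=3):
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)    # routing logits
    for _ in range(iterations):
        c = F.softmax(b, dim=1)                               # coupling coefficients
        v = squash((c.unsqueeze(-1) * u_hat).sum(2))          # output capsules
        b = b + (u_hat * v.unsqueeze(2)).sum(-1)              # agreement update
    return v
```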
no code implementations • 22 Jul 2019 • David Ifeoluwa Adelani, Haotian Mai, Fuming Fang, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
Advanced neural language models (NLMs) are widely used in sequence generation tasks because they are able to produce fluent and meaningful sentences.
1 code implementation • 17 Jun 2019 • Huy H. Nguyen, Fuming Fang, Junichi Yamagishi, Isao Echizen
The output of one branch of the decoder is used for segmenting the manipulated regions while that of the other branch is used for reconstructing the input, which helps improve overall performance.
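Structurally, this is a shared encoder feeding two decoder branches; the sketch below uses placeholder layer sizes and is not the paper's exact architecture.

```python
# Sketch of the Y-shaped idea: a shared encoder with two decoder branches,
# one predicting a manipulation mask, the other reconstructing the input.
# Layer sizes are placeholders, not the paper's architecture.
import torch.nn as nn

class TwoBranchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_branch = nn.Sequential(   # manipulated-region mask
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.rec_branch = nn.Sequential(   # input reconstruction
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.seg_branch(z), self.rec_branch(z)
```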
no code implementations • PACLIC 2018 • Hoang-Quoc Nguyen-Son, Ngoc-Dung T. Tieu, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
We have developed a method for extracting the coherence features from a paragraph by matching similar words in its sentences.
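One plausible reading of this matching step, sketched below with an assumed `embed` word-vector lookup: score each pair of adjacent sentences by the best embedding match for each word, then average.

```python
# Toy sketch of coherence features from word matching across adjacent
# sentences: for each word, find its most similar word in the next
# sentence via embedding cosine similarity. `embed` is a placeholder
# word-vector lookup; the paper's exact features may differ.
import numpy as np

def coherence_scores(sentences, embed):
    scores = []
    for s1, s2 in zip(sentences, sentences[1:]):
        v1 = np.array([embed(w) for w in s1.split()])
        v2 = np.array([embed(w) for w in s2.split()])
        sim = v1 @ v2.T / (
            np.linalg.norm(v1, axis=1)[:, None] * np.linalg.norm(v2, axis=1)[None, :]
        )
        scores.append(sim.max(axis=1).mean())  # best match per word, averaged
    return scores
```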
3 code implementations • 26 Oct 2018 • Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
Recent advances in media generation techniques have made it easier for attackers to create forged images and videos.
no code implementations • 12 Apr 2018 • Huy H. Nguyen, Ngoc-Dung T. Tieu, Hoang-Quoc Nguyen-Son, Junichi Yamagishi, Isao Echizen
Making computer-generated (CG) images more difficult to detect is an interesting problem in computer graphics and security.