no code implementations • 23 Jan 2024 • Chandrakanth Gudavalli, Erik Rosten, Lakshmanan Nataraj, Shivkumar Chandrasekaran, B. S. Manjunath
Content creation and image editing can benefit from flexible user controls.
1 code implementation • 4 Nov 2022 • Tajuddin Manhar Mohammed, Lakshmanan Nataraj, Satish Chikkagoudar, Shivkumar Chandrasekaran, B. S. Manjunath
The number of malware samples is constantly on the rise.
no code implementations • 8 Nov 2021 • Tajuddin Manhar Mohammed, Lakshmanan Nataraj, Satish Chikkagoudar, Shivkumar Chandrasekaran, B. S. Manjunath
Malicious PDF documents present a serious threat to security organizations, which require modern threat-intelligence platforms to effectively analyze and characterize the identity and behavior of PDF malware.
no code implementations • 8 Nov 2021 • Lakshmanan Nataraj, Tajuddin Manhar Mohammed, Tejaswi Nanjundaswamy, Satish Chikkagoudar, Shivkumar Chandrasekaran, B. S. Manjunath
In this paper, we propose a novel and orthogonal malware detection (OMD) approach to identify malware using a combination of audio descriptors, image similarity descriptors and other static/statistical features.
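The snippet above does not spell out which audio descriptors are used. As a toy illustration of the general idea only (treat the raw binary as a 1-D signal and extract simple hand-picked statistics; the function name and the particular features here are illustrative, not the paper's descriptors):

```python
def byte_signal_features(data: bytes) -> dict:
    """Toy audio-style descriptors for a binary, treated as a 1-D signal."""
    # Center each byte at zero, as if it were an audio sample.
    sig = [b - 128 for b in data]
    n = len(sig)
    mean = sum(sig) / n
    # Average power of the signal.
    energy = sum(s * s for s in sig) / n
    # Count sign changes between consecutive samples (zero crossings).
    zero_crossings = sum(1 for a, b in zip(sig, sig[1:]) if a * b < 0)
    return {"mean": mean, "energy": energy, "zero_crossings": zero_crossings}
```

In a pipeline like the one described, feature vectors of this kind would be concatenated with image-similarity and static/statistical features before classification.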
no code implementations • 4 Sep 2021 • Lakshmanan Nataraj, Chandrakanth Gudavalli, Tajuddin Manhar Mohammed, Shivkumar Chandrasekaran, B. S. Manjunath
In this paper, we propose a two-step method to detect and localize seam carved images.
no code implementations • 28 Aug 2021 • Chandrakanth Gudavalli, Erik Rosten, Lakshmanan Nataraj, Shivkumar Chandrasekaran, B. S. Manjunath
Seam carving is a popular technique for content-aware image retargeting.
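Seam carving itself (the operation whose traces the paper detects, not the detection method) has a classic dynamic-programming formulation: compute a per-pixel energy such as gradient magnitude, then remove the vertical seam of minimal cumulative energy. A minimal pure-Python sketch on a grayscale image stored as a list of rows:

```python
def energy(img):
    """Simple gradient-magnitude energy for a grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    e = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            dy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
            e[y][x] = abs(dx) + abs(dy)
    return e

def remove_one_seam(img):
    """Remove the minimal-energy vertical seam via dynamic programming."""
    e = energy(img)
    h, w = len(img), len(img[0])
    cost = [row[:] for row in e]
    # Cumulative cost: each pixel extends the cheapest of its 3 upper neighbors.
    for y in range(1, h):
        for x in range(w):
            cost[y][x] += min(cost[y - 1][max(x - 1, 0):min(x + 2, w)])
    # Backtrack the seam from the cheapest pixel in the bottom row.
    x = min(range(w), key=lambda i: cost[h - 1][i])
    out = []
    for y in range(h - 1, -1, -1):
        out.append(img[y][:x] + img[y][x + 1:])
        if y > 0:
            lo, hi = max(x - 1, 0), min(x + 1, w - 1)
            x = min(range(lo, hi + 1), key=lambda i: cost[y - 1][i])
    out.reverse()
    return out
```

Repeated seam removal retargets the image while preserving high-energy content; detection methods look for the statistical traces this deletion leaves behind.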
no code implementations • 12 Apr 2021 • Lakshmanan Nataraj, Michael Goebel, Tajuddin Manhar Mohammed, Shivkumar Chandrasekaran, B. S. Manjunath
While most detection methods in the literature focus on detecting a particular type of manipulation, it is challenging to identify doctored images that involve a host of manipulations.
1 code implementation • 19 Mar 2021 • Michael Goebel, Jason Bunk, Srinjoy Chattopadhyay, Lakshmanan Nataraj, Shivkumar Chandrasekaran, B. S. Manjunath
Machine Learning (ML) algorithms are susceptible to adversarial attacks and deception both during training and deployment.
1 code implementation • 26 Jan 2021 • Tajuddin Manhar Mohammed, Lakshmanan Nataraj, Satish Chikkagoudar, Shivkumar Chandrasekaran, B. S. Manjunath
Motivated by the visual similarity of these images across different malware families, we compare our deep neural network models against standard image features such as GIST descriptors to evaluate performance.
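The byte-plot representation underlying this line of work maps each byte of a binary to a grayscale pixel and wraps the byte stream into rows of fixed width. A minimal sketch (the width choice and zero-padding of the last row are illustrative):

```python
def bytes_to_image(data: bytes, width: int = 16):
    """Wrap a byte stream into a fixed-width grayscale image (list of rows)."""
    # Each byte value 0-255 becomes one grayscale pixel.
    rows = [list(data[i:i + width]) for i in range(0, len(data), width)]
    # Zero-pad the final row so every row has the same width.
    if rows and len(rows[-1]) < width:
        rows[-1] += [0] * (width - len(rows[-1]))
    return rows
```

Images produced this way tend to look similar within a malware family, which is what makes both texture features like GIST and convolutional networks effective on them.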
no code implementations • 20 Jul 2020 • Michael Goebel, Lakshmanan Nataraj, Tejaswi Nanjundaswamy, Tajuddin Manhar Mohammed, Shivkumar Chandrasekaran, B. S. Manjunath
Recent advances in Generative Adversarial Networks (GANs) have led to the creation of realistic-looking digital images that pose a major challenge to their detection by humans or computers.
no code implementations • 15 Mar 2019 • Lakshmanan Nataraj, Tajuddin Manhar Mohammed, Shivkumar Chandrasekaran, Arjuna Flenner, Jawadul H. Bappy, Amit K. Roy-Chowdhury, B. S. Manjunath
The advent of Generative Adversarial Networks (GANs) has brought about completely novel ways of transforming and manipulating pixels in digital images.
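Pixel co-occurrence matrices are one classical input representation for GAN-image detectors of this kind: they tabulate how often pairs of intensity values occur at adjacent pixels, a statistic that GAN upsampling tends to disturb. A minimal sketch of computing one (horizontal pairs only; this is illustrative and not necessarily the exact pipeline of the paper):

```python
def cooccurrence(img, levels: int = 256):
    """Horizontal pixel-pair co-occurrence matrix for a grayscale image."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        # Count each horizontally adjacent (left, right) intensity pair.
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m
```

In a detector, matrices like this (often one per color channel) would be fed to a classifier rather than inspected directly.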
1 code implementation • 6 Mar 2019 • Jawadul H. Bappy, Cody Simons, Lakshmanan Nataraj, B. S. Manjunath, Amit K. Roy-Chowdhury
This paper proposes a high-confidence manipulation localization architecture that utilizes resampling features, Long Short-Term Memory (LSTM) cells, and an encoder-decoder network to segment out manipulated regions from non-manipulated ones.
no code implementations • 1 Mar 2018 • Arjuna Flenner, Lawrence Peterson, Jason Bunk, Tajuddin Manhar Mohammed, Lakshmanan Nataraj, B. S. Manjunath
A deep learning classifier is then used to generate a heatmap that indicates if the image block has been resampled.
no code implementations • 9 Feb 2018 • Tajuddin Manhar Mohammed, Jason Bunk, Lakshmanan Nataraj, Jawadul H. Bappy, Arjuna Flenner, B. S. Manjunath, Shivkumar Chandrasekaran, Amit K. Roy-Chowdhury, Lawrence Peterson
Realistic image forgeries involve a combination of splicing, resampling, cloning, region removal and other methods.
no code implementations • ICCV 2017 • Jawadul H. Bappy, Amit K. Roy-Chowdhury, Jason Bunk, Lakshmanan Nataraj, B. S. Manjunath
In order to formulate the framework, we employ a hybrid CNN-LSTM model to capture discriminative features between manipulated and non-manipulated regions.
1 code implementation • 3 Jul 2017 • Jason Bunk, Jawadul H. Bappy, Tajuddin Manhar Mohammed, Lakshmanan Nataraj, Arjuna Flenner, B. S. Manjunath, Shivkumar Chandrasekaran, Amit K. Roy-Chowdhury, Lawrence Peterson
In this paper, we propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning.