Constrained Convolutional Neural Networks: A New Approach Towards General Purpose Image Manipulation Detection

Identifying the authenticity and processing history of an image is an important task in multimedia forensics. By analyzing traces left by different image manipulations, researchers have been able to develop several algorithms capable of detecting targeted editing operations. While this approach has led to the development of several successful forensic algorithms, an important problem remains: creating forensic detectors for different image manipulations is a difficult and time-consuming process. Furthermore, forensic analysts need ‘general purpose’ forensic algorithms capable of detecting multiple different image manipulations. In this paper, we address both of these problems by proposing a new general purpose forensic approach using convolutional neural networks (CNNs). While CNNs are capable of learning classification features directly from data, in their existing form they tend to learn features representative of an image’s content. To overcome this issue, we have developed a new type of CNN layer, called a constrained convolutional layer, that is able to jointly suppress an image’s content and adaptively learn manipulation detection features. Through a series of experiments, we show that our proposed constrained CNN is able to learn manipulation detection features directly from data. Our experimental results demonstrate that our CNN can detect multiple different editing operations with up to 99.97% accuracy and outperform the existing state-of-the-art general purpose manipulation detector. Furthermore, our constrained CNN can still accurately detect image manipulations in realistic scenarios where there is a source camera model mismatch between the training and testing data.
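The abstract's key technical idea is a convolutional layer whose filters are constrained so that they suppress image content and respond instead to manipulation traces. Below is a minimal PyTorch sketch of one way such a constraint can be enforced, assuming a prediction-error style rule in which each filter's centre weight is fixed to -1 and its remaining weights are renormalized to sum to 1 after every optimizer step; the class and method names (`ConstrainedConv2d`, `project`) and the specific projection rule are illustrative assumptions, not necessarily the authors' exact formulation.

```python
# Sketch of a constrained convolutional layer (assumed projection rule:
# centre weight = -1, remaining weights sum to 1, re-applied after each
# optimizer step so the filters act as content-suppressing residual filters).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConstrainedConv2d(nn.Module):
    def __init__(self, in_channels=1, out_channels=3, kernel_size=5):
        super().__init__()
        self.kernel_size = kernel_size
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.01
        )

    @torch.no_grad()
    def project(self):
        """Project each filter onto the constraint set."""
        c = self.kernel_size // 2
        w = self.weight
        w[:, :, c, c] = 0.0                           # ignore the centre tap
        w /= w.sum(dim=(2, 3), keepdim=True)          # surrounding weights sum to 1
        w[:, :, c, c] = -1.0                          # centre weight fixed to -1

    def forward(self, x):
        # Output is a prediction-error (residual) map with image content suppressed.
        return F.conv2d(x, self.weight, padding=self.kernel_size // 2)


# Usage sketch: during training, call layer.project() after optimizer.step()
# so the learned filters stay on the constraint set.
layer = ConstrainedConv2d()
layer.project()
residual = layer(torch.randn(1, 1, 64, 64))
```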
