Digital image forensics is a relatively new discipline that aims to authenticate images by recovering their inherent information. It is categorized into two classes, passive forensics and active forensics, according to the type of information retrieved. Both approaches are needed, depending on the purpose and characteristics of the application. In this dissertation, we propose two methods based on deep neural networks, one for passive forensics and the other for active forensics.
Median filtering is used as an anti-forensic technique to erase the processing history of image manipulations such as JPEG compression and resampling. Various methods have therefore been proposed to detect median-filtered images, and several anti-forensic methods have been devised to counter them in turn. However, restoring a median-filtered image is a typical ill-posed problem, so it remains difficult to reconstruct an image visually close to the original. It is even harder to make the restored image match the statistical characteristics of the raw image, as the anti-forensic purpose requires. To solve this problem, we present a median filtering anti-forensic method based on deep convolutional neural networks (CNNs), which can effectively remove the traces left in median-filtered images. We adopt the framework of generative adversarial networks (GANs) to generate images that follow the underlying statistics of unaltered images, significantly enhancing forensic undetectability. Through extensive experiments, we demonstrate that our method successfully deceives existing median filtering forensic techniques.
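The GAN framing above couples two objectives: the restored image should stay close to the original, and the discriminator should be unable to tell it from an unaltered image. A minimal sketch of such a combined generator objective is below; the function names and the weight `lam` are illustrative assumptions, not the dissertation's actual architecture or hyperparameters, and images are flattened to plain pixel lists for simplicity.

```python
import math

def pixel_loss(restored, original):
    """Mean squared error between restored and original pixel values:
    penalizes visible deviation from the pre-filtering image."""
    return sum((r - o) ** 2 for r, o in zip(restored, original)) / len(original)

def adversarial_loss(disc_score):
    """Generator's adversarial term, -log D(G(x)); small when the
    discriminator scores the restored image as 'unaltered' (near 1)."""
    return -math.log(disc_score)

def generator_loss(restored, original, disc_score, lam=0.01):
    """Combined objective (illustrative weighting): stay visually close
    to the original while pushing the output toward the statistics of
    raw images as judged by the discriminator."""
    return pixel_loss(restored, original) + lam * adversarial_loss(disc_score)
```

Minimizing only the pixel term would reproduce the blurry average of plausible originals; the adversarial term is what drives the output distribution toward that of unaltered images, which is the source of the forensic undetectability claimed above.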
Digital image watermarking is an active forensic technique for protecting copyrighted works: it covertly embeds a noise-tolerant signal (e.g., copyright ownership) into an image. Although the field has grown steadily over the past two decades, there is still much room for improvement. Most previous watermarking algorithms were devised heuristically from knowledge of specific individual attacks; they are therefore suboptimal and cannot guarantee robustness against a variety of attacks of varying strength. Designing a new method every time an attack changes is inefficient and does not always guarantee effectiveness. To address this issue, we introduce an attack-adaptive watermarking method based on deep neural networks. Because some common attacks are non-differentiable, the networks cannot be trained end-to-end through the attacks themselves. We therefore define an attack simulator as a differentiable layer that covers a variety of attacks: differentiable ones, e.g., gamma correction, Gaussian noise, and blurring, as well as differentiable approximations of typically non-differentiable ones, e.g., JPEG compression and affine transform. Experimental results show that our technique outperforms the compared methods in robustness by a large margin.
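The obstacle named above is that an operation like JPEG's coefficient rounding has zero gradient almost everywhere, so nothing can backpropagate through it. One common workaround from the literature (not necessarily the simulator's exact construction, which the abstract does not specify) is to replace hard rounding with a smooth surrogate such as the cubic approximation round(x) ≈ x + (round(x) − x)³, whose forward value tracks true rounding while its derivative stays nonzero between integers. A minimal sketch:

```python
def soft_round(x):
    """Differentiable stand-in for round(x). Treating round(x) as
    piecewise constant, the derivative is 1 - 3*(round(x) - x)**2,
    which is nonzero almost everywhere, so gradients can flow."""
    return x + (round(x) - x) ** 3

def simulate_jpeg_quantization(coeff, q_step):
    """Quantize one DCT coefficient with step q_step using soft
    rounding, mimicking JPEG's lossy step inside a trainable layer.
    (Illustrative function name; the simulator's API is an assumption.)"""
    return q_step * soft_round(coeff / q_step)
```

For example, a coefficient of 13.0 with quantization step 10.0 maps to 12.73 rather than the hard-quantized 10.0, but as training converges the surrogate's output approaches the true quantized value while still providing a usable gradient to the embedding network.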