
Adversarial Learning for Constrained Image Splicing Detection and Localization Based on Atrous Convolution

Yaqi Liu, Xiaobin Zhu, Xianfeng Zhao, Yun Cao

Constrained image splicing detection and localization (CISDL), which examines two suspect input images and determines whether one contains regions pasted from the other, is a newly proposed and challenging task in image forensics. In this paper, we propose a novel adversarial learning framework to learn a deep matching network for CISDL. Our framework consists of three main building blocks. First, a deep matching network based on atrous convolution (DMAC) generates two high-quality candidate masks that indicate the suspected regions of the two input images. In DMAC, atrous convolution is adopted to extract features with rich spatial information, a correlation layer based on a skip architecture is proposed to capture hierarchical features, and atrous spatial pyramid pooling is constructed to localize tampered regions at multiple scales. Second, a detection network is designed to rectify inconsistencies between the two corresponding candidate masks. Finally, a discriminative network drives the DMAC network to produce masks that are hard to distinguish from ground-truth ones. The detection network and the discriminative network collaboratively supervise the training of DMAC in an adversarial manner. In addition, a sliding window-based matching strategy is investigated for high-resolution image matching. Extensive experiments conducted on five groups of datasets demonstrate the effectiveness of the proposed framework and the superior performance of DMAC.
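To make the matching pipeline concrete, the sketch below illustrates the core ideas named in the abstract: a dilated (atrous) convolution feature extractor, a correlation layer that densely compares features of the two input images, and an ASPP-style head that predicts a candidate mask at multiple dilation rates. This is a minimal sketch in PyTorch under stated assumptions, not the authors' implementation: the backbone depth, channel sizes, dilation rates, and class names are illustrative, and the detection and discriminative networks used for adversarial supervision are omitted.

```python
# Minimal sketch of the DMAC-style matching ideas (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AtrousFeatureExtractor(nn.Module):
    """Small dilated-convolution backbone (not the paper's exact architecture)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


def correlation_map(feat_a, feat_b):
    """Dense correlation between every location of feat_a and every location of feat_b.

    feat_a, feat_b: (B, C, H, W) -> (B, H*W, H, W); channel k holds the cosine
    similarity of each position in image A with the k-th position of image B.
    """
    b, c, h, w = feat_a.shape
    fa = F.normalize(feat_a.view(b, c, h * w), dim=1)   # (B, C, HW)
    fb = F.normalize(feat_b.view(b, c, h * w), dim=1)   # (B, C, HW)
    corr = torch.bmm(fb.transpose(1, 2), fa)            # (B, HW_b, HW_a)
    return corr.view(b, h * w, h, w)                    # correlation volume for image A


class ASPPHead(nn.Module):
    """Atrous spatial pyramid pooling over the correlation volume to predict a soft mask."""
    def __init__(self, in_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, 32, 3, padding=r, dilation=r) for r in rates]
        )
        self.classifier = nn.Conv2d(32 * len(rates), 1, 1)

    def forward(self, x):
        x = torch.cat([F.relu(b(x)) for b in self.branches], dim=1)
        return torch.sigmoid(self.classifier(x))        # candidate mask in [0, 1]


if __name__ == "__main__":
    extractor = AtrousFeatureExtractor()
    img_a, img_b = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
    fa, fb = extractor(img_a), extractor(img_b)
    corr = correlation_map(fa, fb)
    mask_a = ASPPHead(in_ch=corr.shape[1])(corr)        # candidate mask for image A
    print(mask_a.shape)                                 # torch.Size([1, 1, 64, 64])
```

In the full framework described above, masks such as `mask_a` would additionally be checked for cross-image consistency by the detection network and pushed toward ground-truth appearance by the discriminative network during adversarial training.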