Pansharpening aims to fuse a multispectral (MS) image with an associated panchromatic (PAN) image, producing a composite image with the spectral resolution of the former and the spatial resolution of the latter. Traditional pansharpening methods can be ascribed to a unified detail-injection context, which views the injected MS details as the integration of PAN details and bandwise injection gains. In this paper, we design a new detail-injection-based convolutional neural network (DiCNN) framework for pansharpening, with the MS details formulated directly in an end-to-end manner: the first detail-injection-based CNN (DiCNN1) mines MS details through the PAN image and the MS image, and the second one (DiCNN2) utilizes only the PAN image. The main advantage of the proposed DiCNNs is that they provide explicit physical interpretations and can converge quickly while attaining high pansharpening quality. Furthermore, the effectiveness of the proposed approaches is also analyzed from a theoretical point of view. Our methods are evaluated via experiments on real MS image datasets, achieving excellent performance when compared to other state-of-the-art methods.
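To make the "unified detail injection context" concrete, the sketch below implements the classical scheme the paper generalizes: the fused image is the upsampled MS image plus PAN-derived spatial details scaled by bandwise injection gains. This is a minimal illustration, not the authors' DiCNN; the low-resolution PAN approximation (mean of MS bands) and all function and variable names are assumptions for demonstration.

```python
import numpy as np

def detail_injection_pansharpen(ms_up, pan, gains):
    """Classical detail-injection pansharpening (illustrative sketch).

    ms_up : (H, W, B) multispectral image upsampled to PAN resolution
    pan   : (H, W) panchromatic image
    gains : (B,) bandwise injection gains
    """
    # Approximate the low-resolution PAN component as the mean of the MS
    # bands (an assumption; real methods use a sensor-matched low-pass model).
    pan_low = ms_up.mean(axis=2)
    details = pan - pan_low                       # spatial details from PAN
    # Inject the same spatial details into every band, scaled per band.
    return ms_up + details[..., None] * gains[None, None, :]

# Tiny synthetic example
rng = np.random.default_rng(0)
ms_up = rng.random((8, 8, 4))   # 4-band MS, upsampled
pan = rng.random((8, 8))        # PAN at the same resolution
gains = np.ones(4)              # unit injection gains
fused = detail_injection_pansharpen(ms_up, pan, gains)
print(fused.shape)  # (8, 8, 4)
```

The DiCNNs replace the hand-crafted gain-times-details term with details learned end-to-end by a CNN, while keeping the same additive, physically interpretable structure.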
Lin He
Yizhou Rao
Jiawei Zhu
Bo Li