Infrared Image and Visible Image Fusion Algorithm Based on Unsupervised Deep Learning

Journal of Nanjing Normal University (Engineering and Technology Edition) [ISSN:1006-6977/CN:61-1281/TN]

Issue:
No. 1, 2023
Page:
1-9
Research Field:
Computer Science and Technology
Publishing date:

Info

Title:
Infrared Image and Visible Image Fusion Algorithm Based on Unsupervised Deep Learning
Author(s):
Zhang Yusu(1,2), Wu Xiaojun(1,2), Li Hui(1,2), Xu Tianyang(1,2)
(1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; 2. Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China)
Keywords:
image fusion; visible image; infrared image; unsupervised learning; convolutional neural network
CLC Number:
TP391.4
DOI:
10.3969/j.issn.1672-1292.2023.01.001
Abstract:
Infrared and visible images capture complementary information about the same scene. Most existing deep learning-based fusion methods extract features from the two source images through independent extraction networks, which loses the deep feature relationships between the source images. To solve this problem, a new infrared and visible image fusion algorithm based on unsupervised deep learning is proposed. Specifically, the algorithm adopts a different encoding approach for each modality according to its characteristics, and uses the information of one modality to supplement that of the other. The extracted features are then fused, and the fused image is finally reconstructed from the fused features. The algorithm establishes an interaction between the feature extraction paths of the two modalities, which not only pre-fuses gradient and intensity information but also enriches the information available to subsequent processing. A loss function is designed to guide the model to preserve the detailed texture of the visible image and the intensity distribution of the infrared image. The proposed algorithm is compared with a variety of fusion algorithms on a public dataset. The experimental results show that it achieves good visual effects and also improves on existing state-of-the-art algorithms in objective evaluation.
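
As a rough illustration of the loss design described in the abstract, the following is a minimal PyTorch-style sketch, not the paper's exact formulation: an L1 intensity term pulls the fused image toward the infrared image's intensity distribution, while an L1 term on Sobel gradients pulls its texture toward the visible image. The Sobel approximation of the gradient, the weight lam, and the single-channel input shape are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def sobel_gradient(img):
    # img: (B, 1, H, W) grayscale tensor; Sobel filters approximate
    # the horizontal and vertical image gradients.
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return gx.abs() + gy.abs()

def fusion_loss(fused, ir, vis, lam=10.0):
    # Intensity term: keep the fused image close to the infrared
    # image's pixel intensities.
    loss_int = F.l1_loss(fused, ir)
    # Gradient term: keep the fused image's gradients close to the
    # visible image's, preserving detailed texture.
    loss_grad = F.l1_loss(sobel_gradient(fused), sobel_gradient(vis))
    return loss_int + lam * loss_grad

In training, fused would be the decoder's reconstruction from the fused features, and ir and vis the two registered source images scaled to [0, 1]. Minimizing such a loss requires no ground-truth fused image, which is what makes the training unsupervised.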

Memo

Memo:
-
Last Update: 2023-03-15