Zhang Yusu,Wu Xiaojun,Li Hui,et al. Infrared Image and Visible Image Fusion Algorithm Based on Unsupervised Deep Learning[J]. Journal of Nanjing Normal University(Engineering and Technology),2023,23(1):1-9. [doi:10.3969/j.issn.1672-1292.2023.01.001]

Infrared Image and Visible Image Fusion Algorithm Based on Unsupervised Deep Learning

Journal of Nanjing Normal University(Engineering and Technology) [ISSN:1672-1292]

Volume: 23
Issue: 2023, No. 1
Pages: 1-9
Section: Computer Science and Technology
Publication Date: 2023-03-15

Article Info

Title:
Infrared Image and Visible Image Fusion Algorithm Based on Unsupervised Deep Learning
Article ID:
1672-1292(2023)01-0001-09
Author(s):
Zhang Yusu^{1,2}, Wu Xiaojun^{1,2}, Li Hui^{1,2}, Xu Tianyang^{1,2}
(1.School of Artificial Intelligence and Computer Science,Jiangnan University,Wuxi 214122,China; 2.Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence,Jiangnan University,Wuxi 214122,China)
Keywords:
image fusion; visible image; infrared image; unsupervised learning; convolutional neural network
CLC Number:
TP391.4
DOI:
10.3969/j.issn.1672-1292.2023.01.001
Document Code:
A
Abstract:
Infrared and visible images capture complementary scene information. Most existing deep learning-based fusion methods extract the features of the two source images through independent extraction networks, which loses the deep feature relationships between the source images. To address this problem, a new infrared and visible image fusion algorithm based on unsupervised deep learning is proposed. Specifically, the proposed algorithm adopts different encoding approaches to extract image features according to the characteristics of each modality, and uses the information of one modality to supplement that of the other. The extracted features are then fused, and finally the fused image is reconstructed from the fused features. The algorithm establishes an interaction between the feature extraction paths of the two modalities, which not only pre-fuses gradient and intensity information but also enriches the information available to subsequent processing. A loss function is designed to guide the model to preserve the detailed texture of the visible image and retain the intensity distribution of the infrared image. The proposed algorithm is compared with a variety of fusion algorithms on a public dataset. The experimental results show that the proposed algorithm achieves good visual quality and also improves on existing strong algorithms in terms of objective evaluation metrics.
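The abstract specifies the loss only at a high level (preserve the visible image's detail texture, retain the infrared image's intensity distribution). As a rough illustration only, not the authors' implementation, the sketch below shows one common way such an unsupervised objective can be written; the choice of PyTorch, Sobel gradients, L1 reductions, and the weights alpha and beta are all assumptions not stated in the paper:

```python
# Hypothetical sketch of an unsupervised fusion loss of the kind the
# abstract describes: a texture term tied to the visible image and an
# intensity term tied to the infrared image. Not the authors' code.
import torch
import torch.nn.functional as F

def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """Approximate gradient magnitude of a (N, 1, H, W) batch with Sobel kernels."""
    kx = torch.tensor([[-1.0, 0.0, 1.0],
                       [-2.0, 0.0, 2.0],
                       [-1.0, 0.0, 1.0]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel-y is the transpose of Sobel-x
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return gx.abs() + gy.abs()

def fusion_loss(fused, visible, infrared, alpha=1.0, beta=1.0):
    """Texture term keeps the fused gradients close to the visible image;
    intensity term keeps the fused pixel values close to the infrared image."""
    texture = F.l1_loss(sobel_gradient(fused), sobel_gradient(visible))
    intensity = F.l1_loss(fused, infrared)
    return alpha * texture + beta * intensity

# Usage with a hypothetical fusion network `model` on grayscale batches:
# loss = fusion_loss(model(visible, infrared), visible, infrared)
```

An SSIM-based term (see reference [22]) is often used in place of, or alongside, the plain gradient term; the weights alpha and beta control the trade-off between texture fidelity to the visible image and intensity fidelity to the infrared image.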

References:

[1]LI S T,KANG X D,FANG L Y,et al. Pixel-level image fusion:a survey of the state of the art[J]. Information Fusion,2017,33:100-112.
[2]WANG Q J,ZHU B,LI C H. Research and application of image mosaic method based on feature points[J]. Journal of Nanjing Normal University(Engineering and Technology),2016,16(3):48-53.
[3]ZHANG H,XU H,TIAN X,et al. Image fusion meets deep learning:a survey and perspective[J]. Information Fusion,2021,76:323-336.
[4]TANG C,LING Y S,YANG H,et al. Decision-level fusion tracking of infrared and visible images based on deep learning[J]. Laser & Optoelectronics Progress,2019,56(7):209-216.
[5]MA J Y,MA Y,LI C. Infrared and visible image fusion methods and applications:a survey[J]. Information Fusion,2019,45:153-178.
[6]YOU Z T,WU F M,ZHAO M,et al. Plant leaf image classification in complex backgrounds incorporating a high-order information enhancement module[J]. Journal of Nanjing Normal University(Engineering and Technology),2022,22(3):45-52.
[7]LI Y,ZHU W Y,YUAN F,et al. Wavelet image fusion algorithm based on morphology and average gradient[J]. Journal of Nanjing Normal University(Engineering and Technology),2013,13(4):76-81.
[8]CHEN J,LI X J,LUO L B,et al. Infrared and visible image fusion based on target-enhanced multiscale transform decomposition[J]. Information Sciences,2020,508:64-78.
[9]LI H,WU X J,KITTLER J. MDLatLRR:a novel decomposition method for infrared and visible image fusion[J]. IEEE Transactions on Image Processing,2020,29:4733-4746.
[10]LIU Y,CHEN X,WARD R K,et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters,2016,23(12):1882-1886.
[11]ZHANG Q,LIU Y,BLUM R S,et al. Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images:a review[J]. Information Fusion,2018,40:57-75.
[12]LI H,LIU L,HUANG W,et al. An improved fusion algorithm for infrared and visible images based on multi-scale transform[J]. Infrared Physics & Technology,2016,74:28-37.
[13]BAVIRISETTI D P,XIAO G,LIU G. Multi-sensor image fusion based on fourth order partial differential equations[C]//Proceedings of the 20th International Conference on Information Fusion(Fusion). Xi'an,China:IEEE,2017.
[14]NAIDU A R,BHAVANA D,REVANTH P,et al. Fusion of visible and infrared images via saliency detection using two-scale image decomposition[J]. International Journal of Speech Technology,2020,23(4):815-824.
[15]MA J L,ZHOU Z Q,WANG B,et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology,2017,82:8-17.
[16]HUANG G,LIU Z,VAN DER MAATEN L,et al. Densely connected convolutional networks[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). Honolulu,USA:IEEE,2017.
[17]LIU Y,CHEN X,PENG H,et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion,2017,36:191-207.
[18]LI H,WU X J. DenseFuse:a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing,2018,28(5):2614-2623.
[19]MA J Y,YU W,LIANG P W,et al. FusionGAN:a generative adversarial network for infrared and visible image fusion[J]. Information Fusion,2019,48:11-26.
[20]HE K M,ZHANG X Y,REN S Q,et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). Las Vegas,USA:IEEE,2016.
[21]ZHAO H S,SHI J P,QI X J,et al. Pyramid scene parsing network[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). Honolulu,USA:IEEE,2017.
[22]WANG Z,BOVIK A C,SHEIKH H R,et al. Image quality assessment:from error visibility to structural similarity[J]. IEEE Transactions on Image Processing,2004,13(4):600-612.
[23]TOET A. TNO image fusion dataset,2014[EB/OL]. [2021-02-20]. https://figshare.com/articles/TNO_Image_Fusion_Dataset/1008029.
[24]MA J Y,CHEN C,LI C,et al. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion,2016,31:100-109.
[25]ZHOU Z Q,WANG B,LI S,et al. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters[J]. Information Fusion,2016,30:15-26.
[26]ZHANG H,MA J Y. SDNet:a versatile squeeze-and-decomposition network for real-time image fusion[J]. International Journal of Computer Vision,2021,129(10):2761-2785.
[27]ZHANG H,XU H,XIAO Y,et al. Rethinking the image fusion:a fast unified image fusion network based on proportional maintenance of gradient and intensity[J]. Proceedings of the AAAI Conference on Artificial Intelligence,2020,34(7):12797-12804.
[28]ZHAO Z X,XU S,ZHANG J S,et al. Efficient and model-based infrared and visible image fusion via algorithm unrolling[J]. IEEE Transactions on Circuits and Systems for Video Technology,2021,32(3):1186-1196.
[29]LI H,WU X J,KITTLER J. RFN-Nest:an end-to-end residual fusion network for infrared and visible images[J]. Information Fusion,2021,73:72-86.
[30]TANG L F,YUAN J T,ZHANG H,et al. PIAFusion:a progressive infrared and visible image fusion network based on illumination aware[J]. Information Fusion,2022,83/84:79-92.
[31]XU H,MA J Y,JIANG J J,et al. U2Fusion:a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2022,44(1):502-518.
[32]RAO Y J. In-fibre Bragg grating sensors[J]. Measurement Science and Technology,1997,8(4):355-375.
[33]HUYNH-THU Q,GHANBARI M. Scope of validity of PSNR in image/video quality assessment[J]. Electronics Letters,2008,44(13):800-801.
[34]SARA U,AKTER M,UDDIN M S. Image quality assessment through FSIM,SSIM,MSE and PSNR—a comparative study[J]. Journal of Computer and Communications,2019,7(3):8-18.
[35]LARSON E C,CHANDLER D M. Most apparent distortion:full-reference image quality assessment and the role of strategy[J]. Journal of Electronic Imaging,2010,19(1):011006.
[36]ASLANTAS V,BENDES E. A new image quality metric for image fusion:the sum of the correlations of differences[J]. AEU—International Journal of Electronics and Communications,2015,69(12):1890-1896.
[37]ROBERTS J W,VAN AARDT J A,AHMED F B. Assessment of image fusion procedures using entropy,image quality,and multispectral classification[J]. Journal of Applied Remote Sensing,2008,2(1):023522.


Memo:
Received: 2022-09-15.
Funding: National Natural Science Foundation of China (62020106012, U1836218, 62106089); 111 Project of the Ministry of Education (B12018).
Corresponding author: Wu Xiaojun, Ph.D., professor, doctoral supervisor. Research interests: pattern recognition, computational intelligence, computer vision, and information fusion. E-mail: wu_xiaojun@jiangnan.edu.cn
Last Update: 2023-03-15