[1]黄戌霞,林淑彬.迁移学习融合双HOG特征的目标跟踪[J].南京师范大学学报(工程技术版),2022,22(04):29-35.[doi:10.3969/j.issn.1672-1292.2022.04.004]
 Huang Xuxia,Lin Shubin.Object Tracking Based on Transfer Learning Fusion of Dual HOG Feature[J].Journal of Nanjing Normal University(Engineering and Technology),2022,22(04):29-35.[doi:10.3969/j.issn.1672-1292.2022.04.004]

迁移学习融合双HOG特征的目标跟踪

南京师范大学学报(工程技术版)[ISSN:1006-6977/CN:61-1281/TN]

卷/Volume:
22
期数/Issue:
2022(04)
页码/Pages:
29-35
栏目/Section:
计算机科学与技术(Computer Science and Technology)
出版日期/Publication Date:
2022-12-15

文章信息/Info

Title:
Object Tracking Based on Transfer Learning Fusion of Dual HOG Feature
文章编号/Article ID:
1672-1292(2022)04-0029-07
作者:
黄戌霞1, 林淑彬2
(1.宁德职业技术学院信息技术与工程学院,福建 宁德 355000)
(2.闽南师范大学计算机学院,福建 漳州 363000)
Author(s):
Huang Xuxia1, Lin Shubin2
(1.College of Information Technology and Engineering,Ningde Vocational and Technical College,Ningde 355000,China)
(2.College of Computer Science,Minnan Normal University,Zhangzhou 363000,China)
关键词:
计算机视觉; 目标跟踪; VGG网络; HOG特征; 迁移学习
Keywords:
computer vision; object tracking; VGG network; HOG features; transfer learning
分类号/CLC Number:
TP391.4
DOI:
10.3969/j.issn.1672-1292.2022.04.004
文献标志码/Document Code:
A
摘要:
目标跟踪是计算机视觉的关键技术之一,应用于模式识别、自动控制等领域. 深度学习的跟踪算法具有良好的性能,但在快速运动情况下,低层HOG特征易受影响,跟踪性能较弱. 提出一种结合线下训练深度特征的鲁棒跟踪方法. 通过线下训练VGG模型,线上构造双HOG特征并进行最优选择,将线下训练提取的特征迁移到线上,与最优HOG特征响应融合. 首先,线下逐层训练VGG网络,卷积层负责提取卷积特征. 然后,在线提取当前帧目标区域的HOG特征,并分解为HOG1和HOG2,对其进行滤波处理,选择最优特征. 最后,融合卷积特征响应和HOG最优特征响应得到特征响应图,预测目标的新位置. 在OTB-2013、OTB-2015基准数据集上与其他6个算法对比. 结果表明,该方法在处理快速运动、背景混乱、形变等跟踪方面具有良好的性能.
Abstract:
Object tracking is one of the key technologies in computer vision, with applications in image processing, pattern recognition, automatic control and other fields. Deep-learning-based tracking algorithms perform well; however, under fast motion the low-level HOG feature is easily disturbed and tracking performance degrades. This paper proposes a robust tracking method that incorporates offline-trained deep features: a VGG model is trained offline, dual HOG features are constructed online with the better one selected, and the features learned offline are transferred online and fused with the optimal HOG feature response. First, the VGG network is trained layer by layer offline, with the convolutional layers extracting convolutional features. Then, the HOG feature of the target region in the current frame is extracted online and decomposed into HOG1 and HOG2; both are filtered and the optimal one is selected. Finally, the convolutional feature response and the optimal HOG feature response are fused into a response map, from which the new target position is predicted. The method is compared with six other algorithms on the OTB-2013 and OTB-2015 benchmark datasets. Experimental results show that it performs well under fast motion, background clutter, deformation and other challenging tracking conditions.
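The fusion step summarized in the abstract (keep the stronger of the two HOG responses, combine it with the convolutional-feature response, and take the peak of the fused map as the new position) can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the peak-to-sidelobe ratio used to compare the two HOG responses, the fixed weight `w`, and all function names are assumptions.

```python
import numpy as np

def peak_to_sidelobe_ratio(resp):
    """Confidence of a correlation response map: how far the peak
    stands above the mean of the remaining (sidelobe) values."""
    flat = resp.ravel()
    peak_idx = flat.argmax()
    side = np.delete(flat, peak_idx)
    return (flat[peak_idx] - side.mean()) / (side.std() + 1e-12)

def fuse_responses(conv_resp, hog1_resp, hog2_resp, w=0.6):
    """Keep the more confident of the two HOG response maps, fuse it
    with the convolutional response by a weighted sum, and return the
    (row, col) of the fused peak as the predicted target position."""
    if peak_to_sidelobe_ratio(hog1_resp) >= peak_to_sidelobe_ratio(hog2_resp):
        best_hog = hog1_resp
    else:
        best_hog = hog2_resp
    fused = w * conv_resp + (1.0 - w) * best_hog
    return np.unravel_index(fused.argmax(), fused.shape)
```

In an actual tracker the three response maps would come from correlation filters applied to the deep and HOG feature channels; the weight of 0.6 toward the deep-feature response here is an arbitrary illustrative choice.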

参考文献/References:

[1]YIN H P,CHEN B,CHAI Y,et al. A survey of vision-based object detection and tracking[J]. Acta Automatica Sinica,2016,42(10):1466-1489. (in Chinese)
[2]LIU Y,LI M M,ZHENG Q B,et al. Overview of video object tracking algorithms[J]. Journal of Frontiers of Computer Science and Technology,2022,16(7):1504-1515. (in Chinese)
[3]ZOU Q,ZHANG Z,LI Q Q,et al. DeepCrack:learning hierarchical convolutional features for crack detection[J]. IEEE Transactions on Image Processing,2018,28(3):1498-1512.
[4]LUKEŽIČ A,VOJÍŘ T,ČEHOVIN ZAJC L,et al. Discriminative correlation filter with channel and spatial reliability[J]. International Journal of Computer Vision,2018,126(7):671-688.
[5]BOLME D S,BEVERIDGE J R,DRAPER B A,et al. Visual object tracking using adaptive correlation filters[C]//IEEE Conference on Computer Vision and Pattern Recognition. San Francisco,CA,USA,2010:2544-2550.
[6]LI D D. Research on visual tracking methods based on correlation filters and convolutional neural networks[D]. Changsha:National University of Defense Technology,2018. (in Chinese)
[7]WEI C,ZHANG K,LIU Q. Robust visual tracking via patch based kernel correlation filters with adaptive multiple feature ensemble[J]. Neurocomputing,2016,214:607-617.
[8]DANELLJAN M,HÄGER G,KHAN F S,et al. Learning spatially regularized correlation filters for visual tracking[C]//IEEE International Conference on Computer Vision. Santiago,Chile,2015:4310-4318.
[9]DANELLJAN M,HÄGER G,KHAN F S,et al. Discriminative scale space tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2016,39(8):1561-1575.
[10]DANELLJAN M,HÄGER G,KHAN F S,et al. Adaptive decontamination of the training set:a unified formulation for discriminative visual tracking[C]//IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas,NV,USA,2016:1430-1438.
[11]BERTINETTO L,VALMADRE J,GOLODETZ S,et al. Staple:complementary learners for real-time tracking[C]//IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas,NV,USA,2016:1401-1409.
[12]WANG Q,GAO J,XING J,et al. DCFNet:Discriminant correlation filters network for visual tracking[J]. arXiv Preprint arXiv:1704.04057,2017.
[13]ADAM A,RIVLIN E,SHIMSHONI I. Robust fragments-based tracking using the integral histogram[C]//IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York,NY,USA,2006,1:798-805.
[14]CHANG L,FENG D P,WU H F,et al. Multi-cue adaptive correlation filters for visual tracking[C]//6th International Conference on Digital Home. Guangzhou,China,2016:89-94.
[15]SIMONYAN K,ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv Preprint arXiv:1409.1556,2014.
[16]MA C,HUANG J B,YANG X,et al. Robust visual tracking via hierarchical convolutional features[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2018,41(11):2709-2723.
[17]NAM H,HAN B. Learning multi-domain convolutional neural networks for visual tracking[C]//IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas,Nevada,USA,2016:4293-4302.
[18]MA C,HUANG J B,YANG X,et al. Hierarchical convolutional features for visual tracking[C]//IEEE International Conference on Computer Vision. Santiago,Chile,2015:3074-3082.
[19]LIN S B,DING F F,YANG W Y. Object tracking with fused dual HOG features and color features[J]. Journal of Jiangsu University of Science and Technology(Natural Science Edition),2020,34(4):64-70. (in Chinese)
[20]YOU Z T,WU F M,ZHAO M,et al. Classification of plant leaf images with complex backgrounds using a high-order information enhancement module[J]. Journal of Nanjing Normal University(Engineering and Technology),2022,22(3):45-52. (in Chinese)
[21]WU Y,LIM J,YANG M H. Online object tracking:a benchmark[C]//IEEE Conference on Computer Vision and Pattern Recognition. Portland,OR,USA,2013:2411-2418.
[22]WU Y,LIM J,YANG M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2015,37(9):1834-1848.
[23]NIE Y,CHEN C M,LIU G H. Improved feature detector based on VGG16[J]. Information and Control,2021,50(4):483-489. (in Chinese)
[24]SONG J H,SUN X N,LIU X Y,et al. Siamese object tracking algorithm fusing HOG features and attention model[J/OL]. Control and Decision,2021:1-9[2022-10-27]. https://doi.org/10.13195/j.kzyjc.2021.1235. (in Chinese)


备注/Memo

收稿日期/Received: 2022-03-16.
基金项目/Fund: Education and Scientific Research Project for Young and Middle-aged Teachers of Fujian Province (JAT220752).
通讯作者/Corresponding author: Lin Shubin, M.S., laboratory technician; research interests: computer vision and pattern recognition. E-mail: greenkure@163.com
更新日期/Last Update: 2022-12-15