
Object Tracking Based on Transfer Learning Fusion of Dual HOG Feature

Journal of Nanjing Normal University (Engineering and Technology Edition) [ISSN:1006-6977/CN:61-1281/TN]

Issue:
No. 4, 2022
Page:
29-35
Research Field:
Computer Science and Technology

Info

Title:
Object Tracking Based on Transfer Learning Fusion of Dual HOG Feature
Author(s):
Huang Xuxia1, Lin Shubin2
(1. College of Information Technology and Engineering, Ningde Vocational and Technology College, Ningde 355000, China)
(2. College of Computer Science, Minnan Normal University, Zhangzhou 363000, China)
Keywords:
computer vision; object tracking; VGG network; HOG features; transfer learning
CLC number:
TP391.4
DOI:
10.3969/j.issn.1672-1292.2022.04.004
Abstract:
Object tracking is one of the key technologies in computer vision and is applied in image processing, pattern recognition, automatic control and other fields. Deep-learning tracking algorithms perform well; however, under fast motion the low-level HOG feature is easily disturbed and tracking performance degrades. This paper proposes a robust tracking method based on deep features from offline training: a VGG model is trained offline, dual HOG features are constructed online with an optimal selection between them, and the features transferred from offline training to the online stage are fused with the optimal HOG feature response. First, the VGG network is trained layer by layer offline, with the convolution layers extracting convolutional features. Then, the HOG feature of the object area in the current frame is extracted and decomposed into HOG1 and HOG2, and the optimal one of the two filtered HOG features is selected. Finally, the feature response map is computed by fusing the convolutional feature response with the HOG feature response, and its peak gives the predicted new object position. In comparisons with six other algorithms on the OTB-2013 and OTB-2015 benchmark datasets, experimental results show that the method performs well under fast motion, background clutter, deformation and other tracking challenges.
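The abstract describes a concrete per-frame pipeline: compute a correlation-type response from the offline-trained VGG convolutional features, compute responses for the two HOG features and keep the better one, then fuse the two response maps and take the peak as the new position. The following is a minimal single-channel sketch of that fusion step, assuming a MOSSE-style closed-form correlation filter; the function names (train_filter, track_step), the peak-value selection rule and the 0.5 fusion weight are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_label(shape, sigma=2.0):
    """Desired filter response: a 2-D Gaussian peaked at the patch centre."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def train_filter(feat, label, lam=1e-3):
    """Closed-form single-channel correlation filter (MOSSE-style),
    learned in the Fourier domain; lam is the regularization term."""
    F = np.fft.fft2(feat)
    G = np.fft.fft2(label)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def response(H, feat):
    """Correlation response of a learned filter H on a new feature patch."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(feat)))

def track_step(conv_feat, hog1, hog2, filters, w=0.5):
    """One frame: pick the stronger of the two HOG responses (peak value is
    a simple stand-in for the paper's optimal-selection rule), fuse it with
    the VGG convolutional response, and return the displacement of the fused
    peak from the patch centre, i.e. the predicted object motion."""
    r_conv = response(filters["conv"], conv_feat)
    r1 = response(filters["hog1"], hog1)
    r2 = response(filters["hog2"], hog2)
    r_hog = r1 if r1.max() >= r2.max() else r2   # "optimal selection" proxy
    r = w * r_conv + (1.0 - w) * r_hog           # fused response map
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    return dy - (r.shape[0] - 1) / 2.0, dx - (r.shape[1] - 1) / 2.0
```

Each filter in this sketch would be trained with gaussian_label on the feature patch of the previous frame; in the paper the convolutional features come from the layer-wise offline-trained VGG network and the two HOG features from decomposing the object-region HOG, neither of which is reproduced here.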

References:

[1] YIN H P, CHEN B, CHAI Y, et al. A survey of vision-based object detection and tracking[J]. Acta Automatica Sinica, 2016, 42(10): 1466-1489. (in Chinese)
[2] LIU Y, LI M M, ZHENG Q B, et al. Survey of video object tracking algorithms[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(7): 1504-1515. (in Chinese)
[3] ZOU Q, ZHANG Z, LI Q Q, et al. DeepCrack: learning hierarchical convolutional features for crack detection[J]. IEEE Transactions on Image Processing, 2019, 28(3): 1498-1512.
[4] LUKEŽIČ A, VOJÍŘ T, ČEHOVIN ZAJC L, et al. Discriminative correlation filter with channel and spatial reliability[J]. International Journal of Computer Vision, 2018, 126(7): 671-688.
[5] BOLME D S, BEVERIDGE J R, DRAPER B A, et al. Visual object tracking using adaptive correlation filters[C]//IEEE Conference on Computer Vision and Pattern Recognition. San Francisco, CA, USA, 2010: 2544-2550.
[6] LI D D. Research on visual tracking methods based on correlation filters and convolutional neural networks[D]. Changsha: National University of Defense Technology, 2018. (in Chinese)
[7] WEI C, ZHANG K, LIU Q. Robust visual tracking via patch based kernel correlation filters with adaptive multiple feature ensemble[J]. Neurocomputing, 2016, 214: 607-617.
[8] DANELLJAN M, HÄGER G, KHAN F S, et al. Learning spatially regularized correlation filters for visual tracking[C]//IEEE International Conference on Computer Vision. Santiago, Chile, 2015: 4310-4318.
[9] DANELLJAN M, HÄGER G, KHAN F S, et al. Discriminative scale space tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(8): 1561-1575.
[10] DANELLJAN M, HÄGER G, KHAN F S, et al. Adaptive decontamination of the training set: a unified formulation for discriminative visual tracking[C]//IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA, 2016: 1430-1438.
[11] BERTINETTO L, VALMADRE J, GOLODETZ S, et al. Staple: complementary learners for real-time tracking[C]//IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA, 2016: 1401-1409.
[12] WANG Q, GAO J, XING J, et al. DCFNet: discriminant correlation filters network for visual tracking[J]. arXiv preprint arXiv:1704.04057, 2017.
[13] ADAM A, RIVLIN E, SHIMSHONI I. Robust fragments-based tracking using the integral histogram[C]//IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York, NY, USA, 2006, 1: 798-805.
[14] CHANG L, FENG D P, WU H F, et al. Multi-cue adaptive correlation filters for visual tracking[C]//6th International Conference on Digital Home. Guangzhou, China, 2016: 89-94.
[15] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[16] MA C, HUANG J B, YANG X, et al. Robust visual tracking via hierarchical convolutional features[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(11): 2709-2723.
[17] NAM H, HAN B. Learning multi-domain convolutional neural networks for visual tracking[C]//IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA, 2016: 4293-4302.
[18] MA C, HUANG J B, YANG X, et al. Hierarchical convolutional features for visual tracking[C]//IEEE International Conference on Computer Vision. Santiago, Chile, 2015: 3074-3082.
[19] LIN S B, DING F F, YANG W Y. Object tracking fusing dual HOG features and color features[J]. Journal of Jiangsu University of Science and Technology (Natural Science Edition), 2020, 34(4): 64-70. (in Chinese)
[20] YOU Z T, WU F M, ZHAO M, et al. Plant leaf image classification in complex backgrounds fusing a high-order information enhancement module[J]. Journal of Nanjing Normal University (Engineering and Technology Edition), 2022, 22(3): 45-52. (in Chinese)
[21] WU Y, LIM J, YANG M H. Online object tracking: a benchmark[C]//IEEE Conference on Computer Vision and Pattern Recognition. Portland, OR, USA, 2013: 2411-2418.
[22] WU Y, LIM J, YANG M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834-1848.
[23] NIE Y, CHEN C M, LIU G H. An improved feature detector based on VGG16[J]. Information and Control, 2021, 50(4): 483-489. (in Chinese)
[24] SONG J H, SUN X N, LIU X Y, et al. Siamese object tracking algorithm fusing HOG features and attention model[J/OL]. Control and Decision, 2021: 1-9 [2022-10-27]. https://doi.org/10.13195/j.kzyjc.2021.1235. (in Chinese)

Last Update: 2022-12-15