
Collaborative Detection System for Distraction Behavior Based on Lightweight Network and Embedded Platform

Journal of Nanjing Normal University (Engineering and Technology Edition) [ISSN:1006-6977/CN:61-1281/TN]

Issue:
2023, No. 1
Page:
25-32
Research Field:
Computer Science and Technology
Publishing date:

Info

Title:
Collaborative Detection System for Distraction Behavior Based on Lightweight Network and Embedded Platform
Author(s):
Li Shaofan1,2, Gao Shangbing1,2, Zhang Yingying1,2, Huang Xiang1, Yang Suqiang1, Guo Xiaoyu1
(1. Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huai'an 223001, China; 2. Laboratory for Internet of Things and Mobile Internet Technology of Jiangsu Province, Huaiyin Institute of Technology, Huai'an 223001, China)
Keywords:
collaborative detection; human-object interaction; lightweight network; intelligent transportation; deep learning
PACS:
TP391
DOI:
10.3969/j.issn.1672-1292.2023.01.004
Abstract:
Distracted driving is a major cause of traffic accidents. To address the limited range of distraction behaviors covered by existing detection methods and their poor detection efficiency, a collaborative detection system for distraction behavior based on a lightweight network and an embedded platform is proposed. First, a lightweight object detection network, YOLO-Ghost, is built by combining the Ghost module with a channel attention mechanism: a CSPGBottleneck is designed to construct the GhostDarknet backbone, and a multi-feature fusion module, SE-FPN, with a multi-scale attention mechanism is designed for feature fusion. The more comprehensive CIoU (complete IoU) function is adopted as the loss function. YOLO-Ghost identifies and locates local features, and an APJ (anchor position judge) module is proposed to determine manual distraction behavior. Second, MobileNetV3 and YOLO-Ghost perform facial key-point regression and gaze estimation. Finally, the detected multimodal information is fused to jointly determine the driver's current driving state. Experimental results show that YOLO-Ghost achieves higher accuracy and speed than other mainstream methods. When deployed to an embedded device, the algorithm achieves real-time detection at 20 FPS on the NVIDIA Jetson TX1, and its accuracy and real-time performance meet the detection requirements.
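
The Ghost module referenced above is the building block of GhostNet[17]. As a rough illustration of why it is cheap, the sketch below shows a standard Ghost module in PyTorch: an ordinary convolution produces a small set of intrinsic feature maps, and inexpensive depthwise convolutions derive the remaining "ghost" maps, which are concatenated with the intrinsic ones. The class name GhostModule, the argument names and the default ratio of 2 are illustrative assumptions, not details taken from this paper.

import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Standard Ghost module: primary conv + cheap depthwise 'ghost' features."""
    def __init__(self, in_ch, out_ch, kernel_size=1, ratio=2, dw_size=3, stride=1):
        super().__init__()
        self.out_ch = out_ch
        init_ch = math.ceil(out_ch / ratio)   # intrinsic channels from the primary conv
        new_ch = init_ch * (ratio - 1)        # ghost channels from cheap operations

        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size, stride, kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        # Depthwise convolution: one cheap filter group per intrinsic channel.
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_ch, new_ch, dw_size, 1, dw_size // 2, groups=init_ch, bias=False),
            nn.BatchNorm2d(new_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary_conv(x)
        out = torch.cat([y, self.cheap_operation(y)], dim=1)
        return out[:, :self.out_ch, :, :]     # trim any rounding surplus

For example, GhostModule(64, 128)(torch.randn(1, 64, 32, 32)) yields a tensor of shape (1, 128, 32, 32) while spending a full convolution on only half of the output channels.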

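The CIoU loss mentioned above augments the plain IoU term with a penalty on the distance between box centers (normalised by the diagonal of the smallest enclosing box) and a term that measures aspect-ratio consistency, so that even non-overlapping boxes receive a useful regression gradient. Below is a minimal PyTorch sketch of the standard CIoU loss; the function name ciou_loss, the (x1, y1, x2, y2) box layout and the epsilon value are illustrative assumptions, not details from the paper.

import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """Complete-IoU loss for axis-aligned boxes given as (..., 4) tensors in (x1, y1, x2, y2) format."""
    # Intersection area
    x1 = torch.max(pred[..., 0], target[..., 0])
    y1 = torch.max(pred[..., 1], target[..., 1])
    x2 = torch.min(pred[..., 2], target[..., 2])
    y2 = torch.min(pred[..., 3], target[..., 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    # Union area and IoU
    w1, h1 = pred[..., 2] - pred[..., 0], pred[..., 3] - pred[..., 1]
    w2, h2 = target[..., 2] - target[..., 0], target[..., 3] - target[..., 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Center-distance penalty, normalised by the enclosing-box diagonal
    cw = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
    ch = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((pred[..., 0] + pred[..., 2]) - (target[..., 0] + target[..., 2])) ** 2 / 4 + \
           ((pred[..., 1] + pred[..., 3]) - (target[..., 1] + target[..., 3])) ** 2 / 4

    # Aspect-ratio consistency term and its trade-off weight
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v
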
References:

[1]WANG F,HU R L,JIN Y. Human action recognition based on 3D-CBAM attention mechanism[J]. Journal of Nanjing Normal University(Engineering and Technology Edition),2021,21(1):49-56.
[2]DONAHUE J,HENDRICKS L A,GUADARRAMA S,et al. Long-term recurrent convolutional networks for visual recognition and description[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). Boston,USA:IEEE,2015.
[3]TRAN D,BOURDEV L,FERGUS R,et al. Learning spatiotemporal features with 3d convolutional networks[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision(ICCV). Santiago,Chile:IEEE,2015.
[4]GUO G D,LAI A. A survey on still image based human action recognition[J]. Pattern Recognition,2014,47(10):3343-3361.
[5]YAN C,COENEN F,ZHANG B L. Driving posture recognition by joint application of motion history image and pyramid histogram of oriented gradients[J]. International Journal of Vehicular Technology,2014,846:719413.
[6]SHARMA G,JURIE F,SCHMID C. Discriminative spatial saliency for image classification[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). Providence,USA:IEEE,2012.
[7]KOESDWIADY A,BEDAWI S M,OU C,et al. End-to-end deep learning for driver distraction recognition[C]//Proceedings of the 2017 International Conference on Image Analysis and Recognition. Montreal,Canada:Springer,2017.
[8]HU Y C,LU M Q,LU X B. Driving behaviour recognition from still images by using multi-stream fusion CNN[J]. Machine Vision and Applications,2019,30(5):851-865.
[9]OU C J,OUALI C,KARRAY F. Transfer learning based strategy for improving driver distraction recognition[C]//Proceedings of the 2018 International Conference on Image Analysis and Recognition. Póvoa de Varzim,Portugal:Springer,2018.
[10]BAHETI B,GAJRE S,TALBAR S. Detection of distracted driver using convolutional neural network[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops(CVPRW). Salt Lake City,USA:IEEE,2018.
[11]LE T H N,ZHENG Y,ZHU C C,et al. Multiple scale faster-RCNN approach to driver's cell-phone usage and hands on steering wheel detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops(CVPRW). Las Vegas,USA:IEEE,2016.
[12]LIANG Q J,LIU H,LU F. Research on moving object classification and detection algorithm based on improved YOLOv3[J]. Journal of Nanjing Normal University(Engineering and Technology Edition),2021,21(4):27-32.
[13]LIN T Y,DOLLÁR P,GIRSHICK R,et al. Feature pyramid networks for object detection[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). Honolulu,USA:IEEE,2017.
[14]LIU S,QI L,QIN H F,et al. Path aggregation network for instance segmentation[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City,USA:IEEE,2018.
[15]YUN S D,HAN D Y,CHUN S H,et al. CutMix:regularization strategy to train strong classifiers with localizable features[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision(ICCV). Seoul,Korea:IEEE,2019.
[16]REZATOFIGHI H,TSOI N,GWAK J Y,et al. Generalized intersection over union:a metric and a loss for bounding box regression[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR). Long Beach,USA:IEEE,2019.
[17]HAN K,WANG Y H,TIAN Q,et al. GhostNet:more features from cheap operations[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR). Seattle,USA:IEEE,2020.
[18]HU J,SHEN L,ALBANIE S,et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2018,42(8):2011-2023.
[19]YIN Y Y,GAO J Q,LI Y. Research on feature fusion method for printed pattern retrieval[J]. Journal of Nanjing Normal University(Natural Science Edition),2022,45(2):118-125.
[20]REN S Q,HE K M,GIRSHICK R,et al. Faster R-CNN:towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2017,39(6):1137-1149.
[21]LIU W,ANGUELOV D,ERHAN D,et al. SSD:single shot multibox detector[C]//Proceedings of the 14th European Conference on Computer Vision. Amsterdam,The Netherlands:Springer,2016.
[22]ZHOU X Y,WANG D Q,KRÄHENBÜHL P. Objects as points[EB/OL]. arXiv Preprint arXiv:1904.07850v2,2019.
[23]LIN T Y,GOYAL P,GIRSHICK R,et al. Focal loss for dense object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2020,42(2):318-327.
[24]JOCHER G. YOLOv5[EB/OL]. [2020-08-09]. https://github.com/ultralytics/yolov5.

Last Update: 2023-03-15