参考文献/References:
[1]LIU W,ANGUELOV D,ERHAN D,et al. Ssd:single shot multibox detector[C]//European Conference on Computer Vision. Berlin:Springer,2016:21-37.
[2]PAVAN K A V,GABRIEL J,ZHU J,et al. An improved one millisecond mobile backbone[J/OL]. arXiv Preprint arXiv:2206.04040,2022.
[3]GIRSHICK R,DONAHUE J,DARRELL T,et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus,Ohio,USA:IEEE,2014:580-587.
[4]HE K,ZHANG X,REN S,et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2015,37(9):1904-1916.
[5]GIRSHICK R. Fast R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision. Santiago,Chile:IEEE,2015:1440-1448.
[6]REN S Q,HE K M,GIRSHICK R,et al. Faster R-CNN:Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2017,39(6):1137-1149.
[7]REDMON J,DIVVALA S,GIRSHICK R,et al. You only look once:unified,real-time object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas,Nevada,USA:IEEE,2016:779-788.
[8]WANG C Y,BOCHKOVSKIY A,LIAO H Y M. YOLOv7:Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver,Canada:IEEE,2023:7464-7475.
[9]IANDOLA F N,HAN S,MOSKEWICZ M W,et al. Squeezenet:Alexnet-level accuracy with 50x fewer parameters and<0.5 MB model size[J/OL]. arXiv Preprint arXiv:1602.07360,2016.
[10]HOWARD A G,ZHU M,CHEN B,et al. Mobilenets:efficient convolutional neural networks for mobile vision applications[J/OL]. arXiv Preprint arXiv:1704.04861,2017.
[11]SANDLER M,HOWARD A,ZHU M,et al. MobileNetV2:inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City,Utah,USA:IEEE,2018:4510-4520.
[12]HOWARD A,SANDLER M,CHU G,et al. Searching for mobilenetv3[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Long Beach,California,USA:IEEE,2019:1314-1324.
[13]SZEGEDY C,VANHOUCKE V,IOFFE S,et al. Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas,Nevada,USA:IEEE,2016:2818-2826.
[14]ZHANG X,ZHOU X,LIN M,et al. Shufflenet:An extremely efficient convolutional neural network for mobile devices[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City,Utah,USA:IEEE,2018:6848-6856.
[15]CAI H,ZHU L,HAN S. Proxylessnas:Direct neural architecture search on target task and hardware[J/OL]. arXiv Preprint arXiv:1812.00332,2018.
[16]CAI H,GAN C,WANG T,et al. Once-for-all:train one network and specialize it for efficient deployment[J/OL]. arXiv Preprint arXiv:1908.09791,2019.
[17]DING X,ZHANG X,MA N,et al. Repvgg:making vgg-style convnets great again[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Virtual:IEEE,2021:13733-13742.
[18]FRANKLE J,CARBIN M. The lottery ticket hypothesis:finding sparse,trainable neural networks[J/OL]. arXiv Preprint arXiv:1803.03635,2018.
[19]LECUN Y,DENKER J,SOLLA S. Optimal brain damage[J]. Advances in Neural Information Processing Systems,1990,2(279):598-605.
[20]HAN S,POOL J,TRAN J,et al. Learning both weights and connections for efficient neural network[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal,Canada:NIPS,2015:1135-1143.
[21]LIU Z,LI J,SHEN Z,et al. Learning efficient convolutional networks through network slimming[C]//Proceedings of the IEEE International Conference on Computer Vision. Venice,Italy:IEEE,2017:2736-2744.
[22]MOLCHANOV P,TYREE S,KARRAS T,et al. Pruning convolutional neural networks for resource efficient inference[J/OL]. arXiv Preprint arXiv:1611.06440,2017.