Zhou Shaokang, Song Xiaoning, Yu Dongjun. Coarse-to-fine method of pedestrian detection based on region proposal[J]. Journal of Nanjing University of Science and Technology, 2020, 44(03): 272-277. [doi:10.14177/j.cnki.32-1397n.2020.44.03.003]

Coarse-to-fine method of pedestrian detection based on region proposal

Journal of Nanjing University of Science and Technology (Natural Science Edition) [ISSN:1005-9830/CN:32-1397/N]

Volume: 44
Issue: 2020(03)
Pages: 272-277
Section:
Publication date: 2020-06-30

Article Information

Title: Coarse-to-fine method of pedestrian detection based on region proposal
Article number: 1005-9830(2020)03-0272-06
Author(s): Zhou Shaokang 1, Song Xiaoning 1, Yu Dongjun 2
1. School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China; 2. School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
Keywords: region proposal network; pedestrian detection; locally decorrelated channel features; k-means algorithm; convolutional network; cascade classifier; log-average miss rate
CLC number: TP183
DOI: 10.14177/j.cnki.32-1397n.2020.44.03.003
Abstract:
To address missed detections in pedestrian detection, a method that combines a traditional detection approach with a region proposal network is proposed. The locally decorrelated channel features (LDCF) method is first used for coarse detection, and the windows it misses on the training set are collected. The k-means algorithm is then applied to cluster these missed ground-truth boxes to obtain suitable scales and aspect ratios, and a region proposal network (RPN) is trained for those scales and aspect ratios, which raises the recall of the coarse detection stage. An improved color self-similarity feature and a simplified convolutional network structure are used to describe the candidate windows more accurately; the features extracted by this improved deep network are used to train a cascade classifier that makes the fine decision on the coarse candidate windows. On the TUD-Brussels and Caltech pedestrian detection datasets, the method achieves log-average miss rates of 46% and 9%, respectively.
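The anchor-design step in the abstract can be sketched in code. The following Python snippet is not the authors' implementation; the helper name, the [x1, y1, x2, y2] box format, the choice of k, the (log-width, log-height) clustering space, and the use of scikit-learn's KMeans are illustrative assumptions. It clusters the boxes missed by the coarse detector and turns the cluster centers into anchor scales and aspect ratios for the RPN.

```python
import numpy as np
from sklearn.cluster import KMeans

def anchors_from_missed_boxes(missed_boxes, k=6, seed=0):
    """Derive RPN anchor scales and aspect ratios from boxes missed by the
    coarse LDCF detector (hypothetical helper; boxes are [x1, y1, x2, y2])."""
    boxes = np.asarray(missed_boxes, dtype=np.float64)
    widths = boxes[:, 2] - boxes[:, 0]
    heights = boxes[:, 3] - boxes[:, 1]

    # Cluster in (log width, log height) space so that large and small
    # pedestrians contribute comparably to the cluster centers.
    features = np.stack([np.log(widths), np.log(heights)], axis=1)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(features)

    centers = np.exp(km.cluster_centers_)             # back to pixel units
    scales = np.sqrt(centers[:, 0] * centers[:, 1])   # anchor side length
    ratios = centers[:, 1] / centers[:, 0]            # height / width
    return scales, ratios

if __name__ == "__main__":
    # Toy data: tall, pedestrian-like boxes of varying size.
    rng = np.random.default_rng(0)
    w = rng.uniform(20, 80, size=200)
    h = w * rng.uniform(2.0, 2.8, size=200)
    boxes = np.stack([np.zeros_like(w), np.zeros_like(h), w, h], axis=1)
    scales, ratios = anchors_from_missed_boxes(boxes, k=3)
    print("anchor scales:", np.round(scales, 1))
    print("aspect ratios (h/w):", np.round(ratios, 2))
```

Under these assumptions, the resulting scales and ratios would replace the default anchor settings when training the RPN, so that its proposals concentrate on the sizes and shapes the coarse detector tends to miss.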


Memo
Received: 2019-05-18; revised: 2020-01-06
Funding: National Key Research and Development Program of China (2017YFC1601800); National Natural Science Foundation of China (61876072); China Postdoctoral Science Foundation (2018T110441); Natural Science Foundation of Jiangsu Province (BK20161135); Six Talent Peaks Project of Jiangsu Province (XYDXX-012)
About the authors: Zhou Shaokang (1993-), male, master's student; research interests: artificial intelligence and pattern recognition; E-mail: skangzhou@qq.com. Corresponding author: Song Xiaoning (1975-), male, Ph.D., professor and doctoral supervisor; research interests: artificial intelligence and pattern recognition; E-mail: x.song@jiangnan.edu.cn.
Citation format: Zhou Shaokang, Song Xiaoning, Yu Dongjun. Coarse-to-fine method of pedestrian detection based on region proposal[J]. Journal of Nanjing University of Science and Technology, 2020, 44(3): 272-277.
Submission website: http://zrxuebao.njust.edu.cn
Last Update: 2020-06-30