
最优距离聚类和特征融合表达的关键帧提取

《南京理工大学学报》(自然科学版) / Journal of Nanjing University of Science and Technology (Natural Science Edition) [ISSN: 1005-9830 / CN: 32-1397/N]

Volume:
42
Issue:
2018(04)
Pages:
416
Publication date:
2018-08-30

文章信息/Info

Title:
Key frame extraction based on optimal distance clustering and feature fusion expression
Article number:
1005-9830(2018)04-0416-08
作者:
孙云云1; 江朝晖1,2; 单桂朋1; 刘海秋1; 饶元1,2
1.安徽农业大学 信息与计算机学院,安徽 合肥 230036; 2.农业部农业物联网技术集成与应用重点实验室,安徽 合肥 230036
Author(s):
Sun Yunyun1; Jiang Zhaohui1,2; Shan Guipeng1; Liu Haiqiu1; Rao Yuan1,2
1.School of Information and Computer Science,Anhui Agricultural University,Hefei 230036,China; 2.Key Laboratory of Technology Integration and Application in Agricultural Internet of Things,Ministry of Agriculture,P.R.China,Hefei 230036,China
关键词:
监测视频; 关键帧提取; 最优距离阈值; 无监督聚类; 特征融合
Keywords:
monitoring video; key frame extraction; optimal distance threshold; unsupervised clustering; feature fusion
CLC number:
TP391
DOI:
10.14177/j.cnki.32-1397n.2018.42.04.005
摘要:
为了提高视频关键帧提取的质量和效率,提出一种基于最优距离聚类和特征融合表达的视频关键帧提取算法。在视频帧间差异性分析基础上,寻找并确定最优帧间距离阈值,采用无监督聚类算法对帧间距离进行聚类,获得类别数目最优的类图像集; 计算图像的颜色复杂度和信息熵并融合,按照类中图像特征值“平均”的思想提取类代表帧,组成视频关键帧。对4个监测视频进行实验,结果显示:该算法提取关键帧的平均保真度为96.72%、平均压缩率为96.42%,运行时间也较短,与两种典型的基于聚类的关键帧提取方法相比,在相同的压缩率情况下,算法保真度大幅度提高,而运行时间较小或相当。该算法解决了无监督聚类对阈值的依赖性问题,兼顾了视频中运动目标变化和环境异常两种情况,具有良好的性能和适应性。
Abstract:
To extract key frames from monitoring video accurately and efficiently, a key frame extraction algorithm based on optimal distance threshold clustering and feature fusion expression is proposed. To obtain a set of frame clusters with an optimal number of classes, the differences between video frames are analyzed and an optimal inter-frame distance threshold is determined, which is then used for unsupervised clustering of the inter-frame distances. To extract the representative frame of each cluster, the color complexity and information entropy of each frame are calculated and fused, and the representative frame is selected according to the "cluster average" of the fused feature values; the representative frames of all clusters form the key frame set. Experiments on four monitoring videos show that the algorithm achieves an average fidelity of 96.72% and an average compression ratio of 96.42%, with a short running time. Compared with two typical clustering-based key frame extraction methods at the same compression ratio, the fidelity of the proposed algorithm is greatly improved while the running time is smaller or comparable. The algorithm removes the dependence of unsupervised clustering on a preset threshold and takes both moving-target changes and environmental anomalies into account, showing good performance and adaptability.
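
The Python sketch below only illustrates the pipeline described in the abstract; it is not the authors' implementation. The inter-frame distance, the "optimal" threshold search (replaced here by a simple mean-plus-one-standard-deviation rule), and the color complexity measure (occupancy of a coarse HSV histogram) are simplified stand-ins and are assumptions; only the overall structure, i.e. threshold-based unsupervised clustering of inter-frame distances followed by "cluster average" selection of a representative frame from fused color complexity and entropy features, follows the description above.

import cv2
import numpy as np

def frame_distance(a, b):
    # Inter-frame distance: mean absolute grayscale difference (assumption).
    ga = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gb = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY).astype(np.float32)
    return float(np.mean(np.abs(ga - gb)))

def gray_entropy(frame):
    # Information entropy of the grayscale histogram.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def color_complexity(frame, bins=16):
    # Stand-in for the paper's color complexity: fraction of occupied
    # bins in a coarsely quantized HSV (hue-saturation) histogram.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return float(np.count_nonzero(hist) / hist.size)

def extract_key_frames(frames):
    # Returns the indices of the selected key frames.
    if len(frames) < 2:
        return list(range(len(frames)))
    # 1. Distances between consecutive frames.
    dists = [frame_distance(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    # 2. Distance threshold; a simple stand-in for the paper's optimal threshold search.
    threshold = float(np.mean(dists) + np.std(dists))
    # 3. Unsupervised clustering: open a new cluster whenever the distance
    #    to the previous frame exceeds the threshold.
    clusters, current = [], [0]
    for i, d in enumerate(dists):
        if d > threshold:
            clusters.append(current)
            current = []
        current.append(i + 1)
    clusters.append(current)
    # 4. Fuse color complexity and entropy, then pick the frame closest to
    #    the cluster's average feature vector ("cluster average" idea).
    key_frames = []
    for cluster in clusters:
        feats = np.array([[color_complexity(frames[i]), gray_entropy(frames[i])]
                          for i in cluster])
        feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
        rep = cluster[int(np.argmin(np.linalg.norm(feats, axis=1)))]
        key_frames.append(rep)
    return key_frames

Frames can be collected with cv2.VideoCapture and passed as a list of BGR images; the function returns the indices of the representative frames, one per cluster.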

备注/Memo:
Received: 2017-08-18; Revised: 2018-01-25
Foundation items: Anhui Provincial Science and Technology Research Project (1501031102); Open Fund of the Key Laboratory of Technology Integration and Application in Agricultural Internet of Things, Ministry of Agriculture (2016KL01); 2018 Graduate Innovation Fund of Anhui Agricultural University (2018yjs-63)
About the first author: Sun Yunyun (1993-), female, master's student; research interests: information detection and image processing; E-mail: sunyunyun0910@sina.com.
Corresponding author: Jiang Zhaohui (1968-), male, professor; research interests: agricultural informatics; E-mail: jiangzh@ahau.edu.cn.
Citation: Sun Yunyun, Jiang Zhaohui, Shan Guipeng, et al. Key frame extraction based on optimal distance clustering and feature fusion expression[J]. Journal of Nanjing University of Science and Technology, 2018, 42(4): 416-423. Submission website: http://zrxuebao.njust.edu.cn
更新日期/Last Update: 2018-08-30