Guo Dongyan,Zhao Chunxia,Li Junxia,et al.Image saliency and intensity inhomogeneity based automatic object extraction[J].Journal of Nanjing University of Science and Technology(Natural Science Edition),2014,38(05):701-706.

Image saliency and intensity inhomogeneity based automatic object extraction

Journal of Nanjing University of Science and Technology (Natural Science Edition) [ISSN:1005-9830/CN:32-1397/N]

Volume:
Vol. 38
Issue:
2014, No. 05
Pages:
701-706
Publication date:
2014-10-29

Article Info

Title:
Image saliency and intensity inhomogeneity based automatic object extraction
Author(s):
Guo Dongyan, Zhao Chunxia, Li Junxia, Ding Jundi
School of Computer Science and Engineering, Nanjing University of Science and Technology (NUST), Nanjing 210094, China
Keywords:
image saliency; intensity inhomogeneity; image segmentation; object extraction
CLC number:
TP391
Abstract:
Combining image over-segmentation with image saliency, this paper proposes an automatic object extraction method based on image saliency and pixel intensity inhomogeneity, which accurately extracts the object of interest from an image without any manual intervention. First, inhomogeneous and homogeneous seed points are determined by exploring the pixel inhomogeneity factor (PIF) and the neighborhood inhomogeneity factor (NIF). Second, the two kinds of seeds are grown separately by equivalence partitioning to form equivalence classes, and the residual classes are merged into their nearest equivalence classes to produce an initial over-segmentation of the image. Finally, the complete object of interest is extracted by combining the over-segmentation result with a saliency detection algorithm. By taking the local neighborhood information of each pixel into account, the method applies a low-level image feature, the intensity inhomogeneity of pixels, to image segmentation for the first time and uses it as the basis for object extraction. Experimental results demonstrate that the method effectively and robustly extracts salient objects automatically.
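The abstract describes the pipeline only at a high level; the exact PIF/NIF formulas, the equivalence-partitioning growth rules, and the choice of saliency detector are not reproduced on this page. The following Python sketch is therefore only a rough illustration of the three stages (seed classification, over-segmentation, saliency-based selection) under stated stand-in assumptions, not the authors' implementation.

# Minimal sketch of the pipeline summarized in the abstract. The stand-ins below
# (local deviation statistics for PIF/NIF, connected-component grouping for seed
# growing, a caller-supplied saliency map) are illustrative assumptions only.
import numpy as np
from scipy import ndimage

def inhomogeneity_factors(gray, size=3):
    # Assumed stand-ins: PIF ~ deviation of a pixel from its local mean,
    # NIF ~ local standard deviation over the same neighborhood.
    gray = gray.astype(float)
    local_mean = ndimage.uniform_filter(gray, size)
    local_sq_mean = ndimage.uniform_filter(gray ** 2, size)
    pif = np.abs(gray - local_mean)
    nif = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    return pif, nif

def over_segment(gray, tau_lo=5.0, tau_hi=20.0):
    # Homogeneous seeds have both factors low, inhomogeneous seeds have both high;
    # each seed set is grown into regions (connected components here), and the
    # remaining residual pixels are merged into the nearest labeled region,
    # giving an initial over-segmentation.
    pif, nif = inhomogeneity_factors(gray)
    homogeneous = (pif < tau_lo) & (nif < tau_lo)
    inhomogeneous = (pif > tau_hi) & (nif > tau_hi)
    labels_h, n_h = ndimage.label(homogeneous)
    labels_i, _ = ndimage.label(inhomogeneous)
    labels = np.where(inhomogeneous, labels_i + n_h, labels_h)
    nearest = ndimage.distance_transform_edt(labels == 0, return_indices=True)[1]
    return labels[tuple(nearest)]

def extract_object(gray, saliency, sal_thresh=0.5):
    # Keep every over-segmented region whose mean saliency (assumed normalized
    # to [0, 1]) is high; the union of these regions is the object mask.
    labels = over_segment(gray)
    mask = np.zeros(gray.shape, dtype=bool)
    for region in np.unique(labels):
        if saliency[labels == region].mean() > sal_thresh:
            mask |= labels == region
    return mask

In this sketch, gray would be a 2-D grayscale array and saliency a map of the same shape produced by any detector, for example those cited in references [10]-[13]; the thresholds tau_lo, tau_hi, and sal_thresh are arbitrary placeholders.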

References:

[1]Chen L,Xie X,Fan X,et al.A visual attention model for adapting images on small displays[J].Multimedia Systems,2003,9(4):353-364.
[2]Pinto N,Cox D D,DiCarlo J J.Why is real-world visual object recognition hard?[J].PLoS Computational Biology,2008,4(1):151-156.
[3]Mignotte M.Segmentation by fusion of histogram-based k-means clusters in different color spaces[J].IEEE Transactions on Image Processing,2008,17(5):780-787.
[4]Cai W,Chen S,Zhang D.Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation[J].Pattern Recognition,2007,40(3):825-838.
[5]Comaniciu D,Meer P.Mean shift:a robust approach toward feature space analysis[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2002,24(5):603-619.
[6]Cousty J,Bertrand G,Najman L,et al.Watershed cuts:minimum spanning forests and the drop of water principle[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2009,31(8):1362-1374.
[7]Shi J,Malik J.Normalized cuts and image segmentation[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2000,22(8):888-905.
[8]Ding J,Shen J,Pang H,et al.Exploiting intensity inhomogeneity to extract textured objects from natural scenes[A].Asian Conference on Computer Vision(Part Ⅲ),Lecture Notes in Computer Science[C].Berlin,Germany:Springer,2009,5996:1-10.
[9]Zhou Yun,Li Jiuxian,Xia Liangzheng.Infrared image segmentation based on region growing[J].Journal of Nanjing University of Science and Technology,2002,26(1):75-78.(in Chinese)
[10]Itti L,Koch C,Niebur E.A model of saliency-based visual attention for rapid scene analysis[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,1998,20(11):1254-1259.
[11]Achanta R,Hemami S,Estrada F,et al.Frequency-tuned salient region detection[A].IEEE International Conference on Computer Vision and Pattern Recognition[C].Miami,US:IEEE,2009:1597-1604.
[12]Harel J,Koch C,Perona P.Graph-based visual saliency[A].Advances in Neural Information Processing Systems[C].Vancouver,Canada:MIT Press,2007,19:545-552.
[13]Borji A,Itti L.Exploiting local and global patch rarities for saliency detection[A].IEEE International Conference on Computer Vision and Pattern Recognition[C].RI,US:IEEE,2012:478-485.
[14]Xin Yuelan.Image target detection based on the Grabcut[J].Journal of Qinghai Normal University(Natural Science Edition),2012,28(3):30-34.(in Chinese)
[15]Alpert S,Galun M,Basri R,et al.Image segmentation by probabilistic bottom-up aggregation and cue integration[A].IEEE International Conference on Computer Vision and Pattern Recognition[C].Minneapolis,US:IEEE,2007:1-8.

Memo:
Received: 2013-01-22; Revised: 2013-02-27
Funding: National Natural Science Foundation of China (61101197, 61103058, 61272220); Natural Science Foundation of Jiangsu Province (BK2012399)
About the authors: Guo Dongyan (1986-), male, Ph.D. candidate; research interests: visual saliency, image segmentation, road detection; E-mail: dongyan.guo@163.com. Corresponding author: Zhao Chunxia (1964-), female, Ph.D., professor, doctoral supervisor; research interests: intelligent robots, graphics and image technology; E-mail: zhaochunxia@126.com.
Citation format: Guo Dongyan,Zhao Chunxia,Li Junxia,et al.Image saliency and intensity inhomogeneity based automatic object extraction[J].Journal of Nanjing University of Science and Technology,2014,38(5):701-706.
Submission website: http://zrxuebao.njust.edu.cn
Last Update: 2014-10-31