Stable attribute reduction approach for fuzzy rough set
Li Jingzheng 1, Yang Xibei 1,2, Wang Pingxin 3, Chen Xiangjian 1
1. School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang 212003, China; 2. School of Economics & Management, Nanjing University of Science and Technology, Nanjing 210094, China; 3. School of Mathematics and Physics, Jiangsu University of Science and Technology, Zhenjiang 212003, China
Keywords: attribute reduction; data perturbation; fuzzy rough sets; stability
Attribute reduction plays a core role in rough set theory. At present, most results on this topic are based on measurements such as classification performance, cost and uncertainty; they do not carefully take into account the fluctuation of reducts when data perturbations happen. To fill this gap, a heuristic framework for generating a stable reduct is proposed. Firstly, multiple boundary sample sets are induced by a multiple-clustering technique. Secondly, the fused significance of each attribute is computed from the significances of that attribute obtained over all boundary sample sets. Finally, the attribute with the greatest fused significance is selected and added into the pool set. The proposed algorithm is tested on several UCI data sets, and the experimental results indicate that, compared with traditional heuristic algorithms, this approach can not only effectively improve the time efficiency of computing a reduct and the stability of the reduct, but also improve the classification stability based on the reduct.
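The greedy loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact method: the function name `stable_reduct`, the mean-based fusion of per-set significances, and the toy precomputed significance table are all illustrative assumptions; the paper defines significance on fuzzy rough approximations over each boundary sample set.

```python
def stable_reduct(attributes, boundary_sets, significance, k=None):
    """Greedy sketch of the fused-significance framework: at each step,
    score every remaining attribute on each boundary sample set, fuse
    the scores (here: their mean), and add the attribute with the
    greatest fused significance to the pool set (the growing reduct)."""
    pool = []
    remaining = list(attributes)
    steps = k if k is not None else len(attributes)
    for _ in range(steps):
        if not remaining:
            break
        # fused significance = mean over all boundary sample sets
        fused = {
            a: sum(significance(a, pool, b) for b in boundary_sets)
               / len(boundary_sets)
            for a in remaining
        }
        best = max(remaining, key=fused.get)
        pool.append(best)
        remaining.remove(best)
    return pool

# Toy usage: significances per (attribute, boundary set) are precomputed
# here for illustration; a real measure would also depend on the pool.
scores = {("a1", 0): 0.9, ("a1", 1): 0.7,
          ("a2", 0): 0.4, ("a2", 1): 0.8,
          ("a3", 0): 0.2, ("a3", 1): 0.1}
sig = lambda a, pool, b: scores[(a, b)]
print(stable_reduct(["a1", "a2", "a3"], [0, 1], sig, k=2))  # → ['a1', 'a2']
```

In this toy run, the fused scores in the first step are 0.8 for a1, 0.6 for a2 and 0.15 for a3, so a1 enters the pool first, followed by a2.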




Last Update: 2018-02-28