
Speaker recognition system based on fusion of cochlear filter cepstral coefficients and Teager energy operator phase

Journal of Nanjing University of Science and Technology (Natural Science Edition) [ISSN:1005-9830 / CN:32-1397/N]

Issue:
2018, No. 1
Page:
83-

Info

Title:
Speaker recognition system based on fusion of cochlear filter cepstral coefficients and Teager energy operator phase
Author(s):
Mao Zhengchong, Wang Junjun
Key Laboratory of Advanced Process Control for Light Industry, Ministry of Education, Jiangnan University, Wuxi 214122, China
Keywords:
energy operator; cochlear filter cepstral coefficient; Gaussian mixture model-universal background model; speaker recognition
CLC number:
TN912
DOI:
10.14177/j.cnki.32-1397n.2018.42.01.012
Abstract:
In order to improve the performance of speaker recognition systems, this paper proposes a method that compensates auditory cepstral features with phase features derived from a traditional feature set. In this method, the Teager energy operator (TEO) faithfully models the nonlinear vortex flow produced by the airstream in the vocal tract. The Hilbert transform is applied to the TEO output, and the instantaneous phase information is extracted from the resulting analytic signal. Fused feature parameters are then obtained by combining this phase information with cochlear filter cepstral coefficients (CFCC), compensating the characteristic parameters and improving the recognition rate of the speaker recognition system. Experiments are conducted on a Gaussian mixture model-universal background model (GMM-UBM) speaker recognition system using the NIST-2002 speaker recognition evaluation (SRE) database. The results show that the fusion of the TEO phase and CFCC outperforms CFCC alone: recognition accuracy improves by 8.32% and 3.15% over the existing CFCC features and linear prediction Mel-frequency cepstral coefficients (LPMFCC), respectively. This indicates that the TEO phase carries information complementary to the CFCC features and yields a high recognition rate.
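
A minimal sketch (not the authors' implementation) of the pipeline the abstract describes: Kaiser's discrete Teager energy operator, the Hilbert-transform instantaneous phase derived from it, frame-level fusion with a cepstral feature matrix, and a GMM-UBM log-likelihood-ratio score. The CFCC extractor itself is out of scope, so the cfcc argument is assumed to be a precomputed frames-by-dimensions matrix; frame sizes assume 16 kHz audio, and the speaker model is retrained from scratch here rather than MAP-adapted from the UBM, purely for brevity.

import numpy as np
from scipy.signal import hilbert
from sklearn.mixture import GaussianMixture

def teager_energy(x):
    """Kaiser's discrete TEO: psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
    psi = x[1:-1] ** 2 - x[:-2] * x[2:]
    return np.pad(psi, 1, mode="edge")          # keep the original length

def teo_phase(x):
    """Unwrapped instantaneous phase of the analytic signal of the TEO profile."""
    return np.unwrap(np.angle(hilbert(teager_energy(x))))

def frame_mean(seq, frame=400, hop=160):
    """Collapse a sample-level sequence to one value per 25 ms / 10 ms frame (16 kHz assumed)."""
    n = 1 + (len(seq) - frame) // hop
    return np.array([seq[i * hop : i * hop + frame].mean() for i in range(n)])

def fuse(cfcc, signal):
    """Append the frame-level TEO phase to a precomputed CFCC matrix (frames x dims)."""
    phase = frame_mean(teo_phase(signal))
    n = min(len(cfcc), len(phase))              # align frame counts
    return np.hstack([cfcc[:n], phase[:n, None]])

def gmm_ubm_score(background, enroll, test, n_mix=64):
    """Average log-likelihood ratio of a speaker GMM against a UBM.

    For brevity the speaker model is fit independently; a real GMM-UBM
    system would MAP-adapt it from the UBM instead.
    """
    ubm = GaussianMixture(n_mix, covariance_type="diag").fit(background)
    spk = GaussianMixture(n_mix, covariance_type="diag").fit(enroll)
    return (spk.score_samples(test) - ubm.score_samples(test)).mean()

The decision rule follows directly: scores above a tuned threshold accept the claimed speaker identity, below reject it.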


Last Update: 2018-02-28