Nonlinear interactive source-filter models for speech

2016-03-01
Koç, Turgay
Çiloğlu, Tolga
The linear source-filter model of speech production assumes that the source of the speech sounds is independent of the filter. However, acoustic simulations based on physical speech production models show that when the fundamental frequency of the source harmonics approaches the first formant of the vocal tract filter, the filter has significant effects on the source due to the nonlinear coupling between them. In this study, two interactive system models are proposed under the quasi-steady Bernoulli flow and linear vocal tract assumptions. An algorithm is developed to estimate the model parameters. The glottal flow and the linear vocal tract parameters are found by conventional methods, and the Rosenberg model is used to synthesize the glottal waveform. A recursive optimization method is proposed to find the parameters of the interactive model. Finally, the glottal flow produced by the nonlinear interactive system is computed. The experimental results show that the interactive system model reproduces the fine details of the glottal flow source accurately.
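As a concrete illustration of the source model named in the abstract, below is a minimal sketch of the trigonometric Rosenberg glottal pulse commonly used in glottal flow synthesis. The parameter names (open_quotient, speed_quotient) and their default values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rosenberg_pulse(fs=16000, f0=120.0, open_quotient=0.6,
                    speed_quotient=2.0, amplitude=1.0):
    """One period of a trigonometric Rosenberg glottal pulse.

    open_quotient:  fraction of the period the glottis is open.
    speed_quotient: ratio of opening- to closing-phase duration.
    """
    T0 = 1.0 / f0                                  # fundamental period (s)
    t = np.arange(int(round(fs * T0))) / fs        # sample times in [0, T0)
    To = open_quotient * T0                        # open-phase duration
    Tp = To * speed_quotient / (1.0 + speed_quotient)  # opening phase
    Tn = To - Tp                                   # closing phase

    g = np.zeros_like(t)
    rise = t <= Tp
    fall = (t > Tp) & (t <= Tp + Tn)
    g[rise] = 0.5 * amplitude * (1.0 - np.cos(np.pi * t[rise] / Tp))
    g[fall] = amplitude * np.cos(0.5 * np.pi * (t[fall] - Tp) / Tn)
    return g

# Roughly 0.4 s of voiced excitation at 120 Hz:
source = np.tile(rosenberg_pulse(), 50)
```

In a non-interactive synthesis pipeline this pulse train would simply be filtered by the vocal tract model; the paper's contribution is to let the tract load modify the source instead of leaving it fixed.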
COMPUTER SPEECH AND LANGUAGE

Suggestions

Nonlinear interactive source-filter model for voiced speech
Koç, Turgay; Çiloğlu, Tolga; Department of Electrical and Electronics Engineering (2012)
The linear source-filter model (LSFM) has been the primary model for speech processing since 1960, when G. Fant presented the acoustic theory of speech production. It assumes that the source of voiced speech sounds, the glottal flow, is independent of the filter, the vocal tract. However, acoustic simulations based on physical speech production models show that, especially when the fundamental frequency (F0) of the source harmonics approaches the first formant frequency (F1) of the vocal tract filter, the filter has s...
KNOWLEDGE-BASED SPEECH SYNTHESIS BY CONCATENATION OF PHONEME SAMPLES
OZUM, IY; Bulut, Mehmet Mete (1994-04-14)
In this work a speech synthesis system is implemented. The system uses concatenation of phoneme waveforms as the method of synthesis. These waveforms are generated by sampling the speech of a human speaker and then segmenting it into its phonemes. The phoneme samples are stored on the hard disk to be used in synthesis. The text to be read is then separated into its syllables, and each syllable is synthesized by concatenating the phoneme samples. This method is facilitated by the structure of the Turkis...
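A minimal sketch of the concatenation step this abstract describes, assuming a hypothetical per-phoneme WAV inventory (PHONEME_DIR and the file layout are illustrative, not from the paper); a short linear crossfade smooths the joins.

```python
import numpy as np
from scipy.io import wavfile

PHONEME_DIR = "phonemes"   # hypothetical layout: phonemes/a.wav, phonemes/t.wav, ...

def load_phoneme(symbol, fs_expected=16000):
    fs, wav = wavfile.read(f"{PHONEME_DIR}/{symbol}.wav")
    assert fs == fs_expected, "all phoneme samples must share one sample rate"
    return wav.astype(np.float32)

def synthesize(phonemes, fs=16000, crossfade=0.005):
    """Concatenate phoneme waveforms with a short linear crossfade."""
    n = int(crossfade * fs)
    ramp = np.linspace(0.0, 1.0, n)
    out = load_phoneme(phonemes[0], fs)
    for sym in phonemes[1:]:
        nxt = load_phoneme(sym, fs)
        out[-n:] = out[-n:] * (1.0 - ramp) + nxt[:n] * ramp   # smooth the join
        out = np.concatenate([out, nxt[n:]])
    return out

# e.g. synthesize(list("merhaba"))  # one sample file per letter-phoneme
```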
Localization Uncertainty in Time-Amplitude Stereophonic Reproduction
De Sena, Enzo; Cvetkovic, Zoran; Hacıhabiboğlu, Hüseyin; Moonen, Marc; van Waterschoot, Toon (Institute of Electrical and Electronics Engineers (IEEE), 2020-01-01)
This article studies the effects of inter-channel time and level differences in stereophonic reproduction on perceived localization uncertainty, which is defined as how difficult it is for a listener to tell where a sound source is located. Towards this end, a computational model of localization uncertainty is proposed first. The model calculates inter-aural time and level difference cues and compares them to those associated with free-field point-like sources. The comparison is carried out using a particula...
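For illustration, a minimal sketch of the two binaural cues the model compares, using the classical Woodworth spherical-head approximation for the free-field ITD and a broadband energy ratio for the ILD; the paper's actual localization model is more elaborate, and the head radius here is an assumed constant.

```python
import numpy as np

HEAD_RADIUS = 0.0875   # m, assumed average adult head radius
C = 343.0              # m/s, speed of sound

def itd_woodworth(azimuth_rad):
    """Free-field ITD (s) of a point-like source at a given azimuth,
    under the Woodworth spherical-head approximation."""
    return (HEAD_RADIUS / C) * (azimuth_rad + np.sin(azimuth_rad))

def ild_broadband(left, right):
    """Broadband inter-aural level difference (dB) between ear signals."""
    return 10.0 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))

# A source 30 degrees to one side arrives ~0.26 ms earlier at the nearer ear:
print(itd_woodworth(np.deg2rad(30.0)))
```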
Content-Based Classification and Segmentation of Mixed-Type Audio by Using MPEG-7 Features
Dogan, Ebru; Sert, Mustafa; Yazici, Adnan (2009-07-25)
This paper describes the development of a general solution for the classification and segmentation of broadcast news audio. A sound stream is segmented by classifying each sub-segment into silence, pure speech, music, environmental sound, speech over music, and speech over environmental sound classes in multiple steps. Support Vector Machines and Hidden Markov Models are employed for classification, and these models are trained using different sets of MPEG-7 features. A series of tests was conducted on hand...
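A minimal sketch of the SVM classification stage, with random vectors standing in for the per-segment MPEG-7 descriptors; the class list follows the abstract, while the feature dimension and SVC settings are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

CLASSES = ["silence", "pure speech", "music", "environmental sound",
           "speech over music", "speech over environmental sound"]

# Random vectors stand in for per-segment MPEG-7 descriptors; the
# feature dimension (24) and SVC settings are assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 24))
y = rng.integers(0, len(CLASSES), size=600)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, y)
print(CLASSES[clf.predict(X[:1])[0]])   # predicted class of one sub-segment
```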
Spectral modification for context-free voice conversion using MELP speech coding framework
Salor, O; Demirekler, Mübeccel (2004-10-22)
In this work, we have focused on spectral modification of speech for voice conversion from one speaker to another. Voice conversion aims to modify the speech of one speaker such that the modified speech sounds as if spoken by another speaker. The MELP (Mixed Excitation Linear Prediction) speech coding algorithm has been used as the speech analysis and synthesis framework. Using a 230-sentence triphone-balanced database of the two speakers, a mapping between the 4-stage vector quantization indexes for line spectra...
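A minimal sketch of a codebook-index mapping of the kind this abstract describes, assuming time-aligned parallel frames from the two speakers; the function names and the single-stage (rather than 4-stage) codebook are simplifications for illustration.

```python
import numpy as np

def train_vq_mapping(src_indices, tgt_lsfs, codebook_size):
    """For each source-speaker VQ index, average the target speaker's
    LSF vectors seen in the time-aligned parallel frames."""
    mapping = np.zeros((codebook_size, tgt_lsfs.shape[1]))
    for k in range(codebook_size):
        frames = tgt_lsfs[src_indices == k]
        if len(frames):
            mapping[k] = frames.mean(axis=0)
    return mapping

def convert_frames(src_indices, mapping):
    """Replace each frame's spectral envelope with the mapped target one."""
    return mapping[src_indices]
```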
Citation Formats
T. Koç and T. Çiloğlu, “Nonlinear interactive source-filter models for speech,” COMPUTER SPEECH AND LANGUAGE, pp. 365–394, 2016. [Online]. Available: https://hdl.handle.net/11511/34371.