ID 6286
File
Authors
Ichikawa, Motohiro (Graduate School of Engineering)
Keywords
speech separation
auditory scene analysis
single input
sequential processing
modified discrete Fourier transform
Abstract
Speech separation based on auditory scene analysis (ASA) has been widely studied. A computational ASA (CASA) model, in which a mixed signal is sequentially decomposed into frequency signals, has also been proposed. Four types of ASA features are extracted from the decomposed frequency signals, and the decomposed signals are regrouped by examining the characteristics of the extracted features. Finally, the separated speech signals are obtained. In this study, the CASA model is improved and extended, and its separation performance is evaluated through computer simulation.
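The following is a minimal sketch of the kind of decompose-extract-regroup pipeline the abstract describes, assuming an ordinary STFT in place of the paper's modified discrete Fourier transform and a single crude dominance cue in place of its four ASA features; all function names, parameters, and the grouping rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def separate_speech(mixed, frame_len=512, hop=256):
    """Sketch of a CASA-style separation pipeline: sequentially decompose the
    mixed signal into frequency channels, extract a simple feature, and regroup
    the channels into two sources. Parameter choices are illustrative only."""
    window = np.hanning(frame_len)

    # 1) Sequential decomposition into frequency signals
    #    (an ordinary STFT stands in for the paper's modified DFT).
    frames = [np.fft.rfft(mixed[s:s + frame_len] * window)
              for s in range(0, len(mixed) - frame_len + 1, hop)]
    spec = np.array(frames)                      # shape: (n_frames, n_bins)

    # 2) Feature extraction per time-frequency unit
    #    (one magnitude-based cue here; the paper extracts four ASA features).
    magnitude = np.abs(spec)
    dominant_bin = magnitude.argmax(axis=1)      # strongest component per frame

    # 3) Regrouping: assign bins near the dominant component to source 1 and
    #    the remainder to source 2 (a crude stand-in for feature-based grouping).
    mask = np.zeros_like(magnitude, dtype=bool)
    for t, k in enumerate(dominant_bin):
        mask[t, max(0, k - 3):k + 4] = True
    spec1, spec2 = spec * mask, spec * ~mask

    # 4) Resynthesis of the separated speech by windowed overlap-add
    #    (approximate reconstruction, sufficient for illustration).
    def overlap_add(s):
        out = np.zeros(len(mixed))
        for i, frame in enumerate(s):
            start = i * hop
            out[start:start + frame_len] += np.fft.irfft(frame, frame_len) * window
        return out

    return overlap_add(spec1), overlap_add(spec2)
```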
Publisher
IEEE
Content Type
Conference Paper
Journal Title
2018 IEEE 14th International Conference on Signal Processing (ICSP)
Current Journal Title
2018 IEEE 14th International Conference on Signal Processing (ICSP)
Start Page
108
End Page
112
Published Date
2018-09
Text Version
Author
Rights
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation
M. Ichikawa, N. Sasaoka, and I. Nakanishi. A Single Input Model for Sequential Processing of Speech Separation. Proc. of 2018 IEEE 14th International Conference on Signal Processing (ICSP 2018), pp. 108-112, Sep. 2018
Department
Faculty of Engineering/Graduate School of Engineering
Language
English