Abstract
Conventional algorithms for blind source separation do not necessarily perform well on real-world data. One conceivable reason is that, in actual applications, the mixing matrix is often nearly singular over part of the frequency range, which can cause computational instability. This paper proposes a new algorithm to overcome this problem. The algorithm is based on the minimal distortion principle previously proposed by the authors and additionally incorporates a regularization term that suppresses the gain of the separator in frequency ranges where the mixing matrix is nearly singular.
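As an illustration only (the abstract does not give the exact form of the regularizer), a criterion of the kind described might be written for the separator $W(f)$ at frequency $f$ as the minimal-distortion cost plus a penalty on the separator gain; the Frobenius-norm penalty and the weight $\lambda$ below are assumptions made for this sketch, not the authors' definition.

% Illustrative sketch only: J_MDP denotes the minimal-distortion criterion of the
% paper (not reproduced here); the penalty term and the weight \lambda are assumed.
\begin{equation}
  J\bigl(W(f)\bigr) \;=\; J_{\mathrm{MDP}}\bigl(W(f)\bigr)
  \;+\; \lambda \,\bigl\lVert W(f) \bigr\rVert_{F}^{2},
  \qquad \lambda > 0 .
\end{equation}

Under this reading, at frequencies where the mixing matrix is nearly singular and an unregularized separator would require very large gain, the penalty keeps $\lVert W(f)\rVert$ bounded at the cost of a small separation bias.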