Many-to-Many Voice Conversion based Feature Disentanglement using Variational Autoencoder
Voice conversion is a challenging task that transforms the voice characteristics of a source speaker to those of a target speaker without changing the linguistic content.
Recently, many works on many-to-many Voice Conversion (VC) based on Variational Autoencoders (VAEs) have achieved good results; however, these methods
lack the ability to disentangle speaker identity from linguistic content, which prevents good performance in unseen-speaker scenarios.
In this paper, we propose a new method based on feature disentanglement to tackle many-to-many voice conversion.
The method disentangles speaker identity and linguistic content from utterances, so it can convert from many source speakers
to many target speakers with a single autoencoder network. Moreover, it naturally handles unseen target speakers.
We perform both objective and subjective evaluations, which show that our proposed method achieves competitive naturalness and target-speaker similarity compared with other state-of-the-art models.
This paper has been accepted for publication in the proceedings of the Interspeech 2021 conference.
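To make the high-level description concrete, the sketch below illustrates the general idea of VAE-based feature disentanglement for voice conversion: a content encoder produces a Gaussian latent code for the linguistic content, a separate speaker encoder pools an utterance into a fixed-size speaker embedding, and a decoder reconstructs speech from both. This is a minimal illustrative sketch, not the authors' implementation; all module names, layer sizes, and the mel-spectrogram input format are assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# a VAE content encoder + a speaker encoder, where conversion combines
# the source utterance's content code with the target speaker's embedding.

import torch
import torch.nn as nn

class DisentangledVAE(nn.Module):
    def __init__(self, n_mels=80, content_dim=64, speaker_dim=64):
        super().__init__()
        # Content encoder: maps mel frames to a latent Gaussian (mu, logvar).
        self.content_enc = nn.GRU(n_mels, 256, batch_first=True)
        self.to_mu = nn.Linear(256, content_dim)
        self.to_logvar = nn.Linear(256, content_dim)
        # Speaker encoder: pools an utterance into a fixed embedding, which
        # is what allows conversion to speakers unseen during training.
        self.speaker_enc = nn.GRU(n_mels, speaker_dim, batch_first=True)
        # Decoder: reconstructs mels from content code + speaker embedding.
        self.dec = nn.GRU(content_dim + speaker_dim, 256, batch_first=True)
        self.out = nn.Linear(256, n_mels)

    def encode_speaker(self, mels):
        _, h = self.speaker_enc(mels)      # final hidden state: (1, B, speaker_dim)
        return h.squeeze(0)                # (B, speaker_dim)

    def forward(self, src_mels, tgt_mels):
        h, _ = self.content_enc(src_mels)  # (B, T, 256)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample the content code.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Take identity from the target utterance, broadcast over time.
        spk = self.encode_speaker(tgt_mels).unsqueeze(1).expand(-1, z.size(1), -1)
        y, _ = self.dec(torch.cat([z, spk], dim=-1))
        return self.out(y), mu, logvar

# Conversion: linguistic content from the source utterance, identity from
# any utterance by the target speaker (dummy tensors shown here).
model = DisentangledVAE()
src = torch.randn(1, 100, 80)   # source utterance (B, T, n_mels)
tgt = torch.randn(1, 120, 80)   # target-speaker utterance
converted, mu, logvar = model(src, tgt)
```

Under this formulation, a single network serves all source-target speaker pairs, since the speaker embedding, rather than a fixed speaker index, conditions the decoder.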