- Microsoft's VALL-E can simulate a voice from an audio sample just three seconds long, with results the company says no other AI model can match for naturalness.
- The concern: the better AI gets, the more convincing audio deepfakes become, and that is where problems can start.
On January 10th, Microsoft Corp. previewed a text-to-speech Artificial Intelligence (AI) tool called VALL-E. It can simulate a voice after listening to an audio sample just three seconds long.
The company stated that the tool can retain the speaker’s emotional tone for the rest of the message while simulating the acoustics of the room in which the sample was recorded. It can do all this from that short a sample, and, according to Microsoft, no other AI model sounds as natural.
Voice simulation itself is not new. Tools that can simulate human voices have existed for some time, and not always for the best of reasons. The concern is that as AI improves, audio deepfakes become more convincing, and that is where the problem lies.
There are no hands-on reviews of the tool yet, as Microsoft has not released it to the public, although it has shared samples of completed work. A tool that needs only three seconds of mimicry before the copied voice can go on speaking for any length of time will be remarkable to see and use.
If it’s as good as Microsoft says and sounds as human as claimed, emotions and all, it is easy to see why Microsoft wants to invest in the AI that has taken the world by storm: the very popular ChatGPT. If VALL-E and ChatGPT were combined, callers asking questions at call centers might not be able to tell a human from a robot. Such a pairing could even produce something resembling a podcast without a real guest.
Yes. In the wrong hands, a powerful tool like this could be used to spread misinformation by mimicking the voices of politicians, journalists, and celebrities.
Microsoft said in its paper, “Since VALL-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker. To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E. We will also put Microsoft AI Principles into practice when further developing the models.”
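The mitigation Microsoft describes, a detection model that decides whether a clip is real or synthesized, is essentially a binary classifier over audio features. The toy sketch below illustrates only that general idea; it is not Microsoft's detector, and the nearest-centroid classifier, the single amplitude feature, and the made-up clips are all assumptions for illustration.

```python
# Toy illustration of a "real vs. synthetic audio" detector (NOT Microsoft's
# actual model): extract a feature per clip, then classify by nearest centroid.

def feature(samples):
    """One illustrative scalar feature: mean absolute sample amplitude."""
    return sum(abs(s) for s in samples) / len(samples)

class NearestCentroidDetector:
    def fit(self, clips, labels):
        # labels are "real" or "synthetic"; average the feature per class
        sums, counts = {}, {}
        for clip, lab in zip(clips, labels):
            sums[lab] = sums.get(lab, 0.0) + feature(clip)
            counts[lab] = counts.get(lab, 0) + 1
        self.centroids = {lab: sums[lab] / counts[lab] for lab in sums}
        return self

    def predict(self, clip):
        # assign the clip to the class whose centroid is closest
        f = feature(clip)
        return min(self.centroids, key=lambda lab: abs(f - self.centroids[lab]))

# Made-up training data: pretend synthetic clips are slightly "flatter".
real = [[0.9, -0.8, 0.7, -0.9], [0.8, -0.7, 0.9, -0.8]]
fake = [[0.2, -0.1, 0.2, -0.2], [0.1, -0.2, 0.1, -0.1]]
det = NearestCentroidDetector().fit(real + fake, ["real"] * 2 + ["synthetic"] * 2)
print(det.predict([0.15, -0.1, 0.2, -0.15]))  # → synthetic
```

A production detector would of course use learned spectral features and a far stronger model, but the interface, train on labeled clips, then predict a label for new audio, is the same shape as the mitigation the paper proposes.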