Software could allow AI to mimic any human voice, potentially unleashing a whole new era of convincing fake news

Fake news is a major problem that grows more serious by the day. As dishonest mainstream media outlets – many of which are heavily involved in creating and sharing fake news – continue to operate with little to no oversight, the volume of fake news pushed out into the ether keeps climbing.

In case you were beginning to think things couldn't get any worse, think again. In the near future, a simple piece of software could empower a shady individual, or even a corporation, to create fake recordings that mimic other people's voices for whatever purpose suits them.

It could be done as a joke or to incite mass hysteria; there's no way to tell for sure at this point. What is clear is that technology making such a task trivial is close to arriving. Are humans prepared to deal with the consequences?

In a recent piece written for Wired titled “Fake news 2.0: AI will soon be able to mimic any human voice,” the author argues that the biggest blow AI will deal to humans will be the destruction of our collective trust in anything that can be seen or heard, as far as the news is concerned. If that already sounds troubling, the author adds that today's fears about fake news will simply pale in comparison to new technology that can fake the human voice.

There are already many voice-related services aimed at people who spend much of their time online. Built-in voice assistants like Cortana, Siri, and Alexa – from technology giants Microsoft, Apple, and Amazon, respectively – are meant to help you get the most out of the devices and services you use. Then there are the assorted voice enhancements for internet-of-things devices and social platforms, which likewise promise to deliver more of what you want from those specific companies and services.

Lurking behind all of these seemingly innovative products, services, and add-ons is a world where any person or organization can take a snippet of someone's voice and make it their own. This is the world of Adobe's Project Voco, which could well turn out to be the most dangerous piece of voice-manipulation software ever created.

According to the Wired article, Project Voco is “essentially a Photoshop of soundwaves,” manipulating waveforms instead of pixels to produce speech that sounds natural to human ears. Adobe's idea for its use goes like this: capture enough of a person's voice and speech patterns in recordings, and that material can then be used to generate the same voice without limit. It could allow anyone to take the voice of almost anyone else in the world and make them say anything at all, at will. A scary thought for the future, indeed.
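Adobe never published how Project Voco actually works, but the basic splice-and-reassemble idea the article describes can be illustrated with a toy sketch. Everything below is invented for illustration: the 16 kHz sample rate, the word names, and the sine-tone "snippets" standing in for real recorded words.

```python
import numpy as np

# Toy illustration of the concatenative idea behind voice-editing tools
# (NOT Adobe's actual algorithm): store waveform snippets of known words,
# then splice them together in a new order to "say" a phrase the speaker
# never uttered.

SAMPLE_RATE = 16_000  # samples per second (assumed for this sketch)

def fake_snippet(freq_hz: float, seconds: float = 0.3) -> np.ndarray:
    """Stand-in for a recorded word: a short sine tone."""
    t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t)

# The "data library" of voice samples the article mentions, keyed by word.
library = {
    "hello": fake_snippet(220.0),
    "world": fake_snippet(330.0),
    "goodbye": fake_snippet(440.0),
}

def synthesize(words: list[str]) -> np.ndarray:
    """Splice stored snippets into an utterance the speaker never recorded."""
    return np.concatenate([library[w] for w in words])

utterance = synthesize(["goodbye", "world"])
print(utterance.shape)  # two 0.3 s snippets at 16 kHz -> (9600,)
```

A real system would smooth the joins and model prosody so the result sounds natural; this sketch only shows why a large enough sample library is the key ingredient.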

If this software works as it’s intended to, then “soon common citizens will be unable to distinguish between real voices and spoof ones,” claims the author. “If you have enough samples stored in your data library, then you can make anyone appear to say almost anything.”

Of course, in theory, this could be easy to combat with context and a little common sense. But if the practice becomes widespread, it could become too much to handle. When that time comes, it will be important to know whom in the media you can trust, and whom you should never trust under any circumstances.

Source: robotics.news
