Amazon to introduce a new Alexa feature to mimic anyone’s voice

Amazon develops Alexa feature to mimic anyone's voice, raising new tech fears

Amazon has revealed that it is working on a new feature for its voice assistant Alexa that will be able to mimic any person's voice, living or dead, from less than a minute of recorded audio.

The release date is still unknown.

During the company’s re:MARS conference on Wednesday, June 22, 2022, in Las Vegas, Amazon senior vice president and head scientist Rohit Prasad demonstrated the feature with a video of a child asking an Amazon device, “Alexa, can Grandma finish reading me The Wizard of Oz?”

Prasad said that the goal is to “make the memories last” after “so many of us have lost someone we love” during the pandemic. “While A.I. can’t eliminate that pain of loss, it can make the memories last,” Prasad added.

However, Amazon’s new technology also raises fears, since it could be used for deepfakes, criminal scams, or other malicious purposes.

It is a nice thought, and Prasad, SVP and Head Scientist of Alexa AI, noted that “we’re living in the golden era of AI.” But the technology presents potential issues, particularly around scams and deepfakes, and there does not appear to be any safeguard against someone using a voice without consent.

The new voice mimicry technology

Prasad’s presentation drew on Amazon’s exploratory text-to-speech (TTS) research, which builds on recent advances in the field. “We’ve learned to produce a high-quality voice with far fewer data versus recording in a professional studio,” according to an Amazon spokesperson.

The voice mimicry feature is currently in development, and the company did not disclose when it intends to roll it out to the public.

Prasad said the new voice technology needs only “less than a minute of recorded audio” to produce a high-quality voice, which is possible “by framing the problem as a voice conversion task and not a speech generation path.”
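To make that distinction concrete, here is a rough Python sketch of the voice conversion framing Prasad describes. It is not Amazon's system: every function in it (extract_speaker_embedding, base_tts, voice_conversion) is a hypothetical placeholder, with the real neural models swapped out for toy numpy operations purely to show where a short voice sample enters the pipeline.

```python
# Illustrative sketch only, not Amazon's implementation.
# Voice conversion framing: a short clip only has to characterize the target
# voice; an existing TTS model does the hard work of generating intelligible speech.

import numpy as np


def extract_speaker_embedding(short_clip: np.ndarray) -> np.ndarray:
    """Placeholder: summarize under a minute of audio as a fixed-length voice 'fingerprint'."""
    # A real system would use a trained speaker encoder; here we just
    # reduce the clip to a small vector of summary statistics.
    return np.array([short_clip.mean(), short_clip.std(), np.abs(short_clip).max()])


def base_tts(text: str) -> np.ndarray:
    """Placeholder: a generic, already-trained TTS voice speaking `text`."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(16_000)  # one second of fake 16 kHz audio


def voice_conversion(generic_audio: np.ndarray, speaker_vec: np.ndarray) -> np.ndarray:
    """Placeholder: re-render generic audio so it carries the target speaker's timbre."""
    # A real converter would be a neural model conditioned on speaker_vec;
    # here we only rescale the signal to show where the embedding plugs in.
    return generic_audio * (1.0 + 0.01 * speaker_vec.sum())


# Roughly 50 seconds of (fake) recorded audio stands in for the grandmother's voice sample.
short_clip = np.random.default_rng(0).standard_normal(16_000 * 50)
speaker_vec = extract_speaker_embedding(short_clip)
audio = voice_conversion(base_tts("Once upon a time..."), speaker_vec)
print(audio.shape, speaker_vec.shape)
```

Under this framing, the system never has to learn a new voice from scratch, which is why so little recorded audio suffices compared with training a full TTS voice in a professional studio.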

The new technology might one day become ubiquitous in shoppers’ lives, and Prasad noted it could be used to build trust between users and their Amazon devices.

“One thing that surprised me the most about Alexa is the companionship relationship we have with it. In this companionship role, human attributes of empathy and affect are key for building trust,” he said.

The new voice mimicry feature raises fears

While the new mimicry feature may be innovative, it raises fears in some, including companies working in the field, that it could be put to malicious use.

  • Microsoft, which also created voice mimicry technology to help people with impaired speech, restricted which segments of its business could use it over fears it would enable political deepfakes, Microsoft’s chief responsible A.I. officer, Natasha Crampton, told Reuters.
  • The new feature is also stoking worries online:

“Remember when we told you deepfakes would increase the mistrust, alienation, & epistemic crisis already underway in this culture? Yeah, that. That times a LOT,” said Twitter user @wolven, whose bio identifies him as Damien P. Williams, a Ph.D. researcher in algorithms, values, and bias.

  • Some fear how easily scammers could use the technology for their own gain.
  • Mike Butcher, the editor of TechCrunch’s ClimateTech, noted, “Alexa mimicking a dead family member sounds like a recipe for serious psychological damage.”
  • Others advised people to stop buying the device altogether.