The introduction of a new Alexa feature that allows it to speak using the voices of the dead has brought about widespread shock and horror.
The tool has been compared with dystopian sci-fi and has raised concerns about how the technology could be used to mislead people, as well as questions about whether it crosses ethical lines.
The feature allows Alexa smart speakers to simulate the voices of other people – particularly, Amazon said, those who have died – and lets the AI system speak using them.
What is the Alexa feature?
It is intended, almost literally, to channel the dead. It can take a short snippet of recorded voice and then use that as Alexa’s voice, allowing the AI system to speak as someone who is no longer alive.
In the example used by Amazon at the demonstration, Alexa read a book to a child in the voice of their dead grandmother.
Amazon noted that the feature could help people hold on to memories of the dead, and suggested that such a tool could be especially useful during a pandemic that has seen many people lose friends and family.
The feature is able to do all of that with just a one-minute snippet of audio, Amazon said. That would presumably mean it could be used even for people who had not specifically recorded their voice for the system before they died.
Is it out now?
No. Amazon demonstrated the tool as a concept, though it does appear to be real and working within the company itself.
It gave no indication of when it might arrive in everyone else’s Alexa devices. And after the widespread shocked reaction, it might never arrive at all.
How widespread is this technology?
While the Alexa feature might be the first time it has been rolled out in a public-facing, consumer device, technology that uses AI to recreate people’s voices has existed for some time.
It has gained public notice particularly when it has been used in films. Recently, for instance, such technology was used in an Anthony Bourdain documentary to recreate the late chef’s voice as he read letters, and to allow Val Kilmer to speak in Top Gun: Maverick as he did before his voice was affected by throat cancer.
But it is used in less obvious, more everyday ways, too. Audio editing software can use a person’s voice to swap in words if they misspeak, for instance.
That usage – and the ease with which it can be done – has led to concerns that the technology could be used to create “audio deepfakes”, in which people seemingly say things they never actually said. That worry has led companies including Microsoft to limit access to such technology, in the hope of ensuring it is used responsibly.
Why are people comparing it to Black Mirror?
As well as being the kind of dystopian tech invention that would be shown on the show, the Alexa feature bears an even closer resemblance to specific parts of the TV series. In the episode ‘Be Right Back’, first aired in 2013, a character is shown becoming friends with an android that is built to copy the behaviour of her dead partner.
Just as Alexa is able to create a person’s voice out of just a minute of audio, the android is built out of text conversations and social media posts. In both cases, real and fictional, artificial intelligence is able to use a relatively limited amount of data to construct a detailed – if incomplete – version of the person.
Reaction to the Black Mirror episode was mixed, much as it has been to the Alexa feature. Critics found the episode dystopian but also touching – perhaps how some will view Amazon’s tool.
And the Black Mirror episode has already gone on to inspire real technology: a couple of years after it aired, a technologist built a chatbot version of her dead friend, citing the episode as inspiration. As might be expected, that proved controversial: some found the chatbot remarkably similar to their lost friend, but others complained that its creator, Eugenia Kuyda, had missed the point of the episode.