Are These the Hidden Deepfakes in the Anthony Bourdain Movie?
When Roadrunner, a documentary about the late TV chef and traveler Anthony Bourdain, opened in theaters last month, its director, Morgan Neville, made the promotional rounds with revelations unusual for a documentary. Some of the words viewers heard Bourdain say in the film were faked by artificial intelligence software used to mimic the star’s voice.
Accusations from Bourdain fans that Neville had acted in bad faith quickly dominated coverage of the film. Despite that attention, how much faked Bourdain audio appears in the two-hour movie, and what it says, has remained unclear until now.
In an interview after his film came under fire, Neville told The New Yorker that he had created three fake Bourdain clips with the permission of the chef’s estate, all drawn from words Bourdain had written or spoken but that were not available as audio. He revealed only one of them, an email that Bourdain appears to “read” in the movie’s trailer, but boasted that the other two clips had gone unnoticed. “If you watch the movie,” The New Yorker quoted the Oscar-winning Neville as saying, “you probably don’t know what the other lines are that were spoken by the AI, and you’re not going to know.”
Audio experts at Pindrop, a startup that helps banks and others fight phone fraud, think they know. If the company’s analysis is correct, the Bourdain deepfake controversy is rooted in less than 50 seconds of audio in the 118-minute film.
Pindrop’s analysis flagged the email quote Neville disclosed, along with a clip early in the film apparently drawn from an essay about Vietnam that Bourdain wrote, titled “The Hungry American,” which was collected in his 2008 book The Nasty Bits. It also flagged audio in the middle of the film in which the chef observes that many chefs and writers share a “relentless habit of consuming something good.” The same sentences appear in a conversation Bourdain had with the food site First We Feast on the occasion of his 60th birthday in 2016, two years almost to the month before he died by suicide.
All three clips are recognizably Bourdain. Listen closely, however, and they bear the signatures of synthetic speech, such as unusual prosody and oddly rendered fricatives, the “s” and “f” sounds. A Reddit user independently flagged the same three clips as Pindrop, writing that they are easy to pick out when watching the film a second time. The film’s distributor, Focus Features, did not respond to requests for comment; Neville’s production company declined to comment.
If Neville expected his use of AI-generated media, sometimes called deepfakes, to go unnoticed, he may have underestimated the scrutiny his own admission would invite. He perhaps did not anticipate the controversy, or the attention his use of the technique would draw from fans and audio experts. When the furor reached the ears of Pindrop’s researchers, they saw a perfect test case for software they had built to detect audio deepfakes; they put it to work when the film debuted on streaming services this month. “We’re always looking for ways to test our systems, especially in very real conditions. It’s a new way to validate our technology,” said Collin Davis, chief technology officer at Pindrop.
Pindrop’s results may have solved the mystery of Neville’s hidden deepfakes, but the episode foreshadows controversies to come as deepfakes grow more sophisticated and more accessible for creative and destructive projects alike.