French DJ and music producer David Guetta has discovered AI tools, and thought it would be fun to use an unauthorized deepfake of Eminem’s voice to rev up a huge crowd at a live show. It looks like it worked, but it raises legal and ethical questions.

In a tweet last week, Guetta showed the live performance moment in question, then explained how he did it: presumably something like ChatGPT to write the lyrics, followed by a voice-cloning service such as Uberduck or FakeYou to turn the text into an audio soundbite.
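For a sense of how simple the first half of that workflow is, here's a minimal sketch of the lyric-generation step, assuming the OpenAI Python client as a stand-in for whichever "AI website" Guetta actually used; the model name and prompt are illustrative only, and the voice-cloning step would happen in a separate service entirely.

```python
# Sketch of the lyric-generation step, not what Guetta actually ran.
# Assumes the OpenAI Python client (openai>=1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[
        {
            "role": "user",
            "content": "Write a verse in the style of Eminem about Future Rave.",
        }
    ],
)

lyrics = response.choices[0].message.content
print(lyrics)  # this text would then be fed to a separate voice-cloning service
```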

“Eminem bro,” says Guetta. “There’s something I made as a joke, and it worked so good I could not believe it! I discovered those websites that are about AI. Basically, you can write lyrics in the style of any artist you like. So I typed ‘write a verse in the style of Eminem about Future Rave,’ and I went to another AI website that can recreate the voice. I put the text in that, and I played the record, and people went nuts.”

He then goes on to say “obviously I won’t release this commercially.” Still, as one of the world’s most successful DJs, with more than 10 million album sales, more than 10 billion streams, and well over US$100 million in music-related income to his name, it’s a little bizarre that Guetta doesn’t see playing another man’s faked-up voice at a live concert in front of thousands of paying customers as a commercial use. Especially since it doesn’t seem he told people at the show it wasn’t really Eminem endorsing his “future rave” movement.

Eminem, as far as we can tell, hasn’t made any response yet. And maybe he won’t bother – the rap snippet created here does happen to be particularly cringeworthy, with the phony Eminem saying simply “this is the future rave sound / I’m getting awesome and underground.” The two may well know each other and it all seems to be well-intentioned and in good spirits.

But however Eminem and his legal team feel about this, it’s emblematic of a serious problem that’s starting to affect regular folk as well as celebrity entertainers.

The law hasn’t yet caught up with the explosive rate of progress in the AI and deepfake spaces. Most of the laws that do exist, according to the New York Times, are focused on deepfake pornography and the use of this tech to impersonate politicians. China has just enacted laws requiring deepfakes both to be clearly marked with watermarks or digital signatures and to have the express consent of the subject. In the United States, a bill requiring watermarks or identifying labels on deepfakes has yet to come to a vote.

The audio sample here obviously wasn’t created by Eminem, but it was generated by an algorithm that has ingested hours and hours of Eminem’s actual performances; without all that copyrighted data to “train” on, it couldn’t have produced the fake. If Guetta had used a real Eminem sample, he would’ve owed him performance royalties.

But the law doesn’t yet have a clear way to treat this kind of thing; Guetta appears to have broken no laws. Which means congratulations, any young music producer with a laptop, a DAW and an internet connection: you can now feature a phony Eminem on your tracks. You can make him, or any other celebrity whose voice has been cloned, say whatever you like. You can make him endorse products he’s never heard of, or espouse views he very much doesn’t hold, and you don’t have to tell anyone it’s a fake. Apparently you can do all of this with the full protection of current US copyright law.

“US Copyright legislation enables deep fake content creation, considering it as fair use,” writes Akhil Satheesh in a 2022 blog piece for the University of Richmond’s Journal of Law and Technology. “However, as these laws do not discriminate based on the intent of the creator, it allows the categorisation of deep fakes produced with even mala fide intent, as parodies and even protects as the same.”

Source: David Guetta (Twitter)


