For now, people featured in deepfake porn have few options
Jan 29, 2024, 4:00 PM | Updated: 6:17 PM
(Rick Bowmer/Associated Press)
SALT LAKE CITY — Your phone is buzzing away — beeping and chiming and chirping. It’s a lot more than usual. Something is up.
You look at the messages, and it’s your friends telling you what they’re seeing on social media. It’s a doctored image or video clip of you, enhanced by artificial intelligence, depicted in pornography.
This is called a deepfake, and it’s happening more and more.
This week it happened to Taylor Swift. And even though she is a superstar, she is basically helpless, like every other woman, child, and man this has happened to.
“Until Congress or the states decide to criminalize this type of conduct,” said KSL Legal Analyst Greg Skordas, there is little victims can do.
The short story is that there aren’t many legal protections for the people depicted in deepfakes. There is no federal law that addresses it. However, a USA Today investigation found that 10 states have passed laws banning this type of pornography.
Why is it so hard to punish those behind deepfake porn?
Like other harmful content that thrives in digital spaces, deepfake pornography is created under multiple layers of secrecy. It’s difficult to know whether a deepfake was created in America, Armenia, or Argentina, and not knowing where to focus makes law enforcement difficult.
“How are you going to extradite them?” Skordas said. “How do you identify them? … Unless you can identify a perpetrator, unless it crosses that bridge into criminal conduct, there’s not much that can be done in this day and age.”
What about the platforms that host deepfakes?
Doctored imagery and videos don’t exist in a vacuum; their reach is magnified by platforms like X, Instagram, TikTok, and PornHub.
“They can be held civilly liable,” Skordas said, “when it regards children. And if someone is getting paid or receiving compensation for something that isn’t true, and they know it’s not true, they can be held civilly liable.”
So as it stands right now, if a platform knows that an image or video is fake and hosts it anyway, it can be held civilly liable. The same is true if the content depicts a minor.
The platforms are taking some steps to curb the spread of deepfakes. Bloomberg reports that TikTok has banned “AI-created portrayals of private citizens and minors.”
The folks over at Meta are reportedly reaching out to lawmakers. They’d like to see laws that force AI watermarking, a tag that identifies content as being generated by artificial intelligence.
How about the AI programs used to create deepfake porn?
Deepfakes don’t just appear. For them to “come to life,” a program using artificial intelligence is required. OpenAI is one example of a company that builds such tools.
The companies behind these AI programs are reportedly trying to rein in deepfakes. OpenAI is working to block explicit content and to prevent users from generating images of celebrities. Other AI companies are tracking and blocking keywords.
And how about the federal government?
Will creating deepfake porn ever land someone in criminal court in the U.S.?
There is some activity brewing. According to Ars Technica, HR 3106, the Preventing Deepfakes of Intimate Images Act, would impose a fine of up to $150,000 and a prison term of up to 10 years on those who share deepfake porn without the depicted person’s consent.