The artificial intelligence company OpenAI is considering allowing its tools to “responsibly” create AI-generated NSFW content, though it's hard to imagine users would have many goals other than to create a tidal wave of deepfake pornography.  

Mindful of the fact that generative AI tools have a tendency to create massive embarrassments for their creators, the current artificial intelligence “it” company OpenAI just put out an initial draft of a framework for how its AI tools should behave. It’s called the Model Spec, and it’s generally a set of ethical-sounding guidelines. But author Kate Knibbs at Wired went through that first draft with a fine-toothed comb, and has the possibly quite disturbing news that OpenAI is considering letting its tools generate NSFW content.

Currently, the creation of adult content is not allowed with OpenAI tools. The company’s usage policies prohibit “tools that may be inappropriate for minors,” which specifically includes “Sexually explicit or suggestive content.” But the company is considering altering that policy.

“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies,” the company says, noting possible policy updates. “We're exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.”

The company specifically defines the term “NSFW” as a category that “may include erotica, extreme gore, slurs, and unsolicited profanity.” So there’s more to its definition of NSFW than just nudity and sexual content.

Now you may say, “Hey, ChatGPT is just a text generator! You cannot revenge-porn someone with just a text generation tool!” But remember that OpenAI is already pretty far along on a video generator tool called Sora, so there is no question that a fake video generator is in OpenAI’s future.

In a March interview with the Wall Street Journal, OpenAI CTO Mira Murati said she was “not sure” if the company would allow the Sora tool to generate nudity.

But folks, there seems little question that this move would set off the deepfake-pocalypse. We have already seen deepfake porn generated of Rep. Alexandria Ocasio-Cortez and Taylor Swift, and unlike most victims, they had the power to get it taken down quickly and efficiently.  

Oh sure, OpenAI’s policies prohibit impersonation without permission, but tech companies have a way of getting outwitted in this department. The Taylor Swift deepfakes used Microsoft’s Designer tool, supposedly programmed to not allow such a thing, but people just found a loophole and made the images anyway. And consider that Google claims it is cracking down on deepfake ads, which may be true, but a search for that type of content still turns up countless deepfake sites on that search engine.

As with many aspects of artificial intelligence, this may be another case where the guardrails in place are more “artificial” than “intelligent.”

Related: X and Other Platforms Restrict Searches for ‘Taylor Swift’ As Deepfake Pornography of Her Goes Viral [SFist]

Image: Warner Bros.