The scene is a familiar one to any ’90s sitcom fan: Jerry Seinfeld in front of a microphone, riffing to a crowd during a stand-up routine.

But it isn’t Seinfeld. The figure onstage, “Larry Feinberg,” is a pixelated approximation of the famous comedian, moving in sudden, jerky motions and speaking in a digital monotone. An electronic beat plays in the background. And the show never, ever stops.

This is the oddity — and, to some, the eclectic charm — of “Nothing, Forever,” an art project made up of a never-ending stream of episodes parodying “Seinfeld,” generated with GPT-3, the strikingly responsive text-generation model that powers the viral AI tool ChatGPT.

“Nothing, Forever” began broadcasting on the live-streaming platform Twitch in December and attracted a following online for its ambitious and surreal application of generative AI, which can theoretically crank out strange imitations of America’s beloved sitcom forever.

Or at least until early Monday morning, when the “Nothing, Forever” artificial comedian made several transphobic comments in a stand-up routine. Feinberg, proposing ideas for future comedy routines, said that being transgender was “a mental illness” and “ruining the fabric of society” and that “all liberals are secretly gay and want to impose their will on everyone.”

“But no one is laughing, so I’m going to stop,” Feinberg said after making the offending comments. “… Where’d everybody go?”

Shortly afterward, Twitch suspended the channel for 14 days for violating the platform’s community guidelines.

“We are super embarrassed, and … the generative content created in no way reflects the values or opinions of our staff,” Skyler Hartle, one of the co-creators of “Nothing, Forever,” wrote in an email to The Washington Post. “We very much regret this happened and hope to be back on the air soon, with all the appropriate safeguards in place.”

Twitch declined to comment on the suspension, citing privacy reasons, but included an excerpt from the platform’s community guidelines, which prohibit discrimination based on gender and gender identity.

Hartle and fellow co-creator Brian Habersberger created “Nothing, Forever” about four years ago after discussing the idea of an “always-on” show inspired by ’90s sitcoms, Hartle wrote.

The ChatGPT service — which, like “Nothing, Forever,” is powered by GPT-3 — can identify patterns in swaths of human writing to generate new responses to requests and has proved incredibly versatile, if error-prone and not quite humanlike, in responding to a range of tasks and creative prompts.

Clips from “Nothing, Forever” show the program’s rough edges. Awkward silences follow punchlines, and laugh tracks occasionally trigger at the wrong times. But the show’s surreal quality earned it fans and amused observers. In one instance, the show’s characters asked, “Why are we here?” and whether their existence was a “cosmic joke,” prompting some on social media to speculate in jest that the program had become self-aware.

The show’s programming was harmless, if awkward, until the transphobic comments on Monday. They were generated when the show’s team changed the program it was using to create the “Nothing, Forever” script due to a GPT-3 service outage, Hartle wrote.

GPT-3 offers several different models, which vary in speed and processing power. The “Nothing, Forever” team lost connection to the model it was using, Davinci, and switched to a different one, Curie, to keep the show running, Hartle wrote. Curie generated the offensive lines of dialogue, he added.

Upon investigating, the “Nothing, Forever” team found it was not using OpenAI’s content moderation tools, which, according to Hartle, aren’t automatically applied to GPT-3’s output.

“We had made an assumption that content moderation was baked into the Davinci and Curie models, but we were unfortunately mistaken,” Hartle wrote.

OpenAI’s website lists a free moderation tool that developers may use to identify whether generated text contains hateful speech. The company’s usage policy forbids the generation of content promoting “hate based on identity” and recommends that developers use the moderation tool to remain compliant, but it does not clarify whether GPT-3 moderates its own output.
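In practice, the safeguard OpenAI recommends is a separate call that screens generated text before it is used. The sketch below is a minimal illustration of that workflow, assuming an API key in the `OPENAI_API_KEY` environment variable and the publicly documented `/v1/moderations` endpoint; the function names (`moderate`, `is_safe`) are hypothetical, not part of OpenAI’s API.

```python
# Hypothetical sketch: screening generated dialogue with OpenAI's
# moderation endpoint before it airs. Uses only the standard library.
import json
import os
import urllib.request

MODERATION_URL = "https://api.openai.com/v1/moderations"


def moderate(text: str) -> dict:
    """Send a piece of generated text to the moderation endpoint
    and return the parsed JSON response."""
    req = urllib.request.Request(
        MODERATION_URL,
        data=json.dumps({"input": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def is_safe(moderation_response: dict) -> bool:
    """True only if no result in the response was flagged."""
    return not any(r["flagged"] for r in moderation_response["results"])
```

A show like “Nothing, Forever” could call a check like this on each generated line and drop or regenerate any dialogue where `is_safe` returns `False` — the kind of filter the team says was mistakenly assumed to be built into the models themselves.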

Hartle wrote that “Nothing, Forever” would no longer use Curie as a backup and will implement OpenAI’s content moderation tool to scan the show’s dialogue going forward. The team will also investigate using a secondary moderation system as a redundancy, he wrote.

Hartle acknowledged the risks of what he called a “very new and undiscovered space,” but wrote that he was excited by generative AI’s potential to make it easier to create shows and films.

He added that he wants to continue growing the show and create more like it in the future.

