A few publishers made headlines recently over their adoption of ChatGPT — the artificial intelligence chatbot system launched by OpenAI last November — or similar AI technology to produce content for their websites, including BuzzFeed, CNET and Sports Illustrated. But so far, those publishers seem to be the outliers.

While editorial teams are experimenting with ChatGPT, six top editors and media executives who spoke with Digiday said they were not working on integrating the AI technology into the workflow of their newsrooms. And to the best of their knowledge, no one within their editorial teams was using ChatGPT to publish stories.

However, ChatGPT has gotten editors talking. They’re encouraging their editorial teams to familiarize themselves with ChatGPT, and are discussing how AI technology can assist journalists with their jobs.

“[We’ve] encouraged all of our editors to play around with it… to see how good it is at providing answers and crafting stories in a certain voice,” said Emma Rosenblum, chief content officer at Bustle Digital Group.

Editors and chief content officers at BDG, Gizmodo, Forbes, Futurism, Trusted Media Brands and 1440 cited a number of reasons for hesitating before welcoming the technology with open arms — mainly, issues regarding inaccuracies, plagiarism and the underdevelopment of the technology.

ChatGPT is a “catalyst for publications to have conversations around the topic,” said Francesco Marconi, a computational journalist and co-founder of real-time information company Applied XL.

Embracing the new

There’s nothing new about AI technology in newsrooms. Forbes, for example, has AI and machine learning tools built into Bertie, its CMS, to help journalists optimize headlines, generate descriptions and recommend images for a story. The AP has been using AI technology to report on companies’ earnings for years, and The Washington Post has used it to cover the Olympics and elections.

But the six publishing execs Digiday spoke with expressed both excitement and apprehension about the arrival of ChatGPT and the developments in AI technology.

“A large, disruptive technological change [happens] every ten-ish years, and I think this is the next one,” said Tim Huelskamp, co-founder and CEO of newsletter publisher 1440.

The opportunities

Forbes’ chief content officer Randall Lane and chief digital and information officer Vadim Supitskiy said the technology behind ChatGPT can serve as an assistant and research tool for journalists’ reporting, since it can draw on vast amounts of information and spit out an answer to a prompt in moments. Eventually, they said, those are capabilities they will want to integrate into Bertie to make their own CMS smarter.

ChatGPT can help journalists parse through large amounts of data or information to find a throughline, for example. It can also help summarize an article, provide variations on a headline and edit for grammar, as well as suggest authors in a relevant beat, Marconi said.

While TMB’s editors aren’t discussing ChatGPT as a content creation tool, they are trying it out as a research tool. Beth Tomkiw, TMB’s chief content officer, said editorial teams are experimenting with ChatGPT to see if it can surface additional ideas on topics to cover in certain verticals (such as asking the chatbot to outline topics on cleaning and organizing) and generate lists like “the top 25 books” in a certain year. ChatGPT is also an agenda item at TMB’s next monthly content leadership meeting, Tomkiw said.

Using ChatGPT for some of these tasks can one day “free up reporters from the boring, rote stuff,” like covering earnings or other “fill-in-the-blank” stories, said David Ewalt, editor-in-chief of G/O Media’s tech site Gizmodo. “One day, this technology will be a legitimate reporting tool that will help a reporter get the easy stuff done, so the reporter can… call sources and dig in and do the stuff that a computer can’t do. We’re just not there yet.”

The limitations

Despite the possibilities on the horizon, editors are tapping the brakes before speeding into adopting the technology in their newsrooms.

The biggest issue editors had with ChatGPT was its inaccuracies (CNET has already run into this problem).

“There are major, major issues with accuracy, with bias — every vice that is present in human writing on the internet is amplified by ChatGPT,” Ewalt said. “AI systems are nowhere near advanced enough to be able to tell the difference between a reliable source and an unreliable source. They’re just not there yet [and] aren’t going to be for a long time. So they’ll pull information from bad sources and repeat it as fact.”

While journalists make mistakes too, it’s not at the same level or volume as the blunders ChatGPT is making right now, said Futurism managing editor Jon Christian, who broke the news that CNET’s AI-generated stories were riddled with errors.

“If any publisher is thinking about experimenting with this stuff, treat its outputs with the scrutiny that you would treat a reporter doing their first assignment ever. Check everything,” Christian said.

Forbes’ Lane takes issue with the fact that ChatGPT only provides answers based on data available through 2021. “It’s not real time. So if a journalist is trying to write about anything newsy, [ChatGPT] by definition is writing things that are already known.”

There’s also the plagiarism issue. ChatGPT doesn’t clearly share its sources when it answers a prompt. Is it using full sentences someone else wrote? Is it paraphrasing someone else’s idea, without attribution? These are all major issues to consider, Ewalt said. 

As a test, this Digiday reporter asked ChatGPT, “Which media companies are using ChatGPT?” It provided examples of a few publishers. When this reporter asked it to share its sources for that information, all of the links it provided were broken.
