It’s going to be a big week for announcements related to artificial intelligence. With that in mind, today I want to talk a bit about the challenges I’ve found in covering the rise of generative AI as it works its way into the product roadmaps of every company on my beat.
Unlike other technological shifts I’ve covered in the past, this one has some scary (and so far mostly theoretical) risks associated with it. But covering those risks is tricky, and doesn’t always fit into the standard containers for business reporting or analysis. For that reason, I think it’s worth naming some of those challenges — and asking for your thoughts on what you think makes for good journalism in a world where AI is ascending.
To start with, let’s consider two recent perspectives on the subject from leading thinkers in the field. One is from Geoffrey Hinton, an AI pioneer who made significant strides with neural networks, a key ingredient in the field’s recent improvements. Last week Hinton left his job at Google in part so he could speak out about AI risk, and told the New York Times’ Cade Metz that “a part of him … now regrets his life’s work.”
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said. Among his concerns: a flood of misinformation that makes it impossible to discern what is true; massive job losses through automation; and killer robots.
So that’s one set of possible outcomes. Here’s another, from Jürgen Schmidhuber, who is sometimes called “the father of artificial intelligence.” He argues AI fears are misplaced, and that whatever bad actors do with AI can likely be countered with good actors using AI.
Here’s Josh Taylor in the Guardian:
Schmidhuber believes AI will advance to the point where it surpasses human intelligence and has no interest in humans — while humans will continue to benefit and use the tools developed by AI. This is a theme Schmidhuber has discussed for years, and was once accused at a conference of “destroying the scientific method” with his assertions.
As the Guardian has reported previously, Schmidhuber’s position as AI’s father is not undisputed, and he can be a controversial figure within the AI community. Some have said his optimism about the rate of technological progress was unfounded and possibly dangerous.
Whether you find yourself more inclined to believe Hinton or Schmidhuber seems likely to color how you cover AI as a journalist. If you believe Hinton’s warnings, and we are starting down a path that leads to killer robots or worse, it could make sense to center that risk in all coverage of AI, no matter how seemingly benign the individual announcement.
If, on the other hand, you’re more sympathetic to Schmidhuber, and think that the problems created by AI will resolve themselves without causing much damage to society, you’d probably spend more time covering AI at the level of products and features, and how people are using them in their lives.
The reason I’m having trouble covering AI lately is that there is such high variance in how the people who have considered the question most deeply think about risk. When the list of possible futures ranges from fully automated luxury communism to a smoking ruin where our civilization used to be, where is the journalist supposed to begin? (The usual answer is to talk to a lot of people. But the relevant people here are saying very different things!)
All of this is on my mind lately for a couple reasons. One is that I recently spent some time talking with AI safety researchers who I thought made a convincing case that, no matter how much time executives and regulators spend warning us about the risks here, the average person still probably hasn’t grappled with them enough. These folks believe we essentially need to shut down AI development for a long while, invest way more money into safety research, and prevent further commercial development until we’ve developed a strategy to avoid the worst outcomes.
The other reason it’s on my mind is that Google I/O is this week. On Wednesday the company is expected to showcase a wide range of new features drawing on its latest advancements in generative AI, and I’ll be there to cover it for you. (The Wall Street Journal and CNBC appear to have scooped some of the announcements already.)
The Google announcements represent the fun side of AI: the moment when, after years of hype, average people can finally get their hands on new tools to help them with their work and daily lives. Even the most diehard believer in existential risk from AI can’t deny that, at least for the moment, tens of millions of people are finding the tools extremely useful for a broad range of tasks.
One of my biases is that I started writing about tech because I love stuff like this: incremental advances that help me research faster, write better, and even illustrate my newsletter. Even as I’ve increasingly focused my writing on business coverage and tech policy, the instinct to say “hey, look at this cool thing” remains strong within me.
And if — please! — Schmidhuber’s benign vision of our AI world comes to pass, I imagine I’ll feel fine about any incremental product coverage I did along the way to point people to useful new tools.
But what if Hinton’s vision is closer to the mark? (And it seems noteworthy that there are more AI researchers in his camp than Schmidhuber’s.) Will I feel OK about having written a piece in 2022 titled “How DALL-E could power a creative revolution” if that revolution turns out to have been a step on the road to, uh, a worse one?
Thinking through all this, I have in mind the criticism folks like me received in the wake of the 2016 US presidential election. We spent too much time hyping up tech companies and not enough time considering the second-order consequences of their hyper-growth, the argument went. (It’s truer to say we criticized the wrong things than that we criticized nothing at all, I think, but perhaps that’s splitting hairs.) And while opinions vary on just how big a role platforms played in the election’s outcome, it seems undeniable now that, if we could do it all over again, a lot of us, myself included, would have covered tech differently from 2010 to 2016.
The introspection we did after 2016 was easier in one key respect than the question we face now, though. The tech backlash of 2017 was retrospective, rooted in the question of what social networks had done to our society.
The AI question, on the other hand, is speculative. What is this thing about to do to us?
I don’t want to set up a false dilemma here. The question is not whether AI coverage should be generally positive or generally negative. There is clearly room for a wide range of opinions.
My discomfort, I think, comes from the heavy shadow looming in the background of all AI coverage, and the way that shadow often goes unacknowledged, including by me. So many of the leading researchers and even AI executives spend a great deal of time warning of potential doom.
If you believe that doom is a serious possibility, shouldn’t you mention it all the time?
Or, as Max Read has written, does that sort of warning only end up hyping up the companies building this technology?
I haven’t come to any solid conclusions here. But today I can offer a couple of minor evolutions in my thinking.
One, I updated Platformer’s About page, a link to which gets emailed to all new subscribers, to add AI as a core coverage interest. On that same page, I also added this paragraph to the section on what I’ve come to believe:
Artificial intelligence promises to bring powerful advances in productivity and creativity. But it also poses serious and potentially existential risks to life as we know it. My coverage of AI is rooted in the belief that fears of massive disruption may be justified, and require urgent attention.
Adding a few lines to an About page isn’t of great use to readers who happen upon the odd story from me here or there. But the nice thing about writing a newsletter is that many of you are dedicated readers! And now, hopefully, you have a more complete understanding of how I’m thinking about a subject I expect to return to often in the coming years.
At the same time, I am going to be writing about the AI products that platforms release along the way. Understanding how AI will shape the future requires having a good sense of how people are using the technology, and I think that means staying up to date with what platforms are building and releasing into the world.
When I write about these tools, though — even the most fantastically useful of them — I’ll strive to maintain the baseline skepticism that I tried to bring to this piece.
I’ll end what has been a long and uncharacteristically meta reflection by saying that the situation I’m describing isn’t unique. Plenty of journalism is rooted in uncertainty about how events will play out, from US politics to climate change. Take your pick of potential catastrophes, and there’s probably a group of journalists figuring out how to capture the full range of perspectives in 1,200 words.
And personally, I started writing a daily newsletter because of the way it freed me from having to write a definitive take in every story. Instead, I could just show up a few times a week, tell you what I learned today, and give you some ways to think about what might happen next.
It’s not perfect, but it’s the best that I’ve come up with so far. If you have other ideas, though, I’m all ears.
- OpenAI narrowly avoided a shutdown in Italy, but an aggressive coalition of European regulators is mounting challenges to generative AI over data privacy and copyright concerns. (Jess Weatherbed / The Verge)
- Twitter faced severe criticism this past weekend for allowing graphic images of the shooting in Allen, Texas to go viral on the platform. Others argued that viewers should be confronted with the country’s ongoing, nightmarish epidemic of mass killings. (Benjamin Mullin / The New York Times)
- Chinese firms like Baidu, Huawei, and Alibaba are looking into ways to perform cutting-edge AI research using less-powerful semiconductors as a result of U.S. sanctions. (Karen Hao and Raffaele Huang / WSJ)
- Journalist Cristina Criddle detailed her experience with ByteDance after learning that she was one of the reporters the company spied on by gleaning location data from TikTok. The company was trying to identify her sources, and tracked her through an account dedicated to her cat; the piece is more measured about the experience than I perhaps would have been. (Cristina Criddle / Financial Times)
- Investors in ByteDance stand to make billions when the company goes public, but many won’t discuss the company — and harbor concerns about TikTok’s ongoing controversies and the threat of a U.S. ban. (Alex Konrad / Forbes)
- For at least a year, TikTok cataloged users who watched LGBT content, sparking internal outcry and forcing the company to restrict access to the data. Just to get out ahead of this story — I was one of those users. (Georgia Wells and Byron Tau / WSJ)
- An Idaho judge dismissed the FTC’s lawsuit against location data broker Kochava, telling the agency it needed more evidence to support its claim that consumers suffer substantial harm from the company’s sale of their location data. (Natasha Singer / The New York Times)
- Bluesky said it won’t allow “heads of state” to join the platform as it continues onboarding new users and trying to get a handle on its growth and moderation approach. It continues to be a lot of fun, though. (Kylie Robison / Fortune)
- Verified Facebook pages are getting hacked by malware scammers to spread malicious links using paid advertising, and in some cases the scammers are even pretending to be Meta itself. (Taylor Hatmaker / TechCrunch)
- A Google engineer wrote a widely circulated critique of the company arguing that open source developers, not Big Tech or OpenAI, will likely lead development of cutting-edge AI advances. It’s worth reading! (Julia Love, Davey Alba, and Rachel Metz / Bloomberg)
- Google’s AMP web standard has been largely abandoned by both the company and publishers, and did little to stave off the deterioration of the news business. AMP’s failure also offers a roadmap for how Google might try to use AI to keep its hold on search. (Casey contributed a tiny bit to this piece.) (David Pierce / The Verge)
- Hackers at this year’s DEF CON have been invited to try to find bugs, biases, and other flaws in large language models built by Anthropic, Google, OpenAI, and others. (Jessica Lyons Hardcastle / The Register)
- A look at the OpenAI contractors in the U.S. who make $15 an hour with no benefits to label data and help improve ChatGPT’s predictive capabilities. (David Ingram / NBC News)
- OpenAI CEO Sam Altman said tech companies shifting to remote work was “one of the tech industry’s worst mistakes in a long time.” (Steve Mollman / Fortune)
- AI chatbots are being used to write large volumes of online content, from recipe blogs to self-published how-to books, often without clear disclosure. (Will Oremus / The Washington Post)
- Early internet pioneer Vint Cerf discussed generative AI and other new tech frontiers, and stressed the importance of safety, in a new interview. (Devin Coldewey / TechCrunch)
- Amazon plans to offer generative AI tools that let advertisers create sponsored videos as it expands its ad business to its Freevee and Prime Video streaming platforms. (Theo Wayt / The Information)
- Discord’s username change is causing strife within the platform’s culture, which has roots in internet anonymity and a distaste for mainstream social media norms. On the other hand, the current approach to naming users is insane. (Umar Shakir / The Verge)
- Elon Musk said Twitter will be “purging” the accounts of users who have not tweeted “for several years.” The company considered this in 2019 but ultimately abandoned the plan after realizing it would mean removing accounts of the deceased before it could build any memorialization features. (Twitter)
- The New York MTA resumed automated transit alerts on Twitter after the company backtracked on its plan to charge $50,000 per month for API access. A step in the wrong direction. (Brian Fung / CNN)
- Musk promised Twitter employees stock grants to stick around, but the company has yet to deliver, and workers are now worried the grants are plummeting in value as the business withers. (Kali Hays / Business Insider)
- Musk also invented a nebulous new metric for judging the success of Twitter — “unregretted user minutes” — and the company is now trying to figure out how to measure it. (Tim Higgins / WSJ)
- Fewer than half of the roughly 150,000 Twitter Blue subscribers who signed up for the revamped service in November were still paying for it as of April 30. (Matt Binder / Mashable)
- Twitter finally acknowledged a bug that showed tweets made using the private Circles feature to users outside the group, including random strangers. (Filipe Espósito / 9to5Mac)
- Fortnite became the latest video game added to the International Olympic Committee’s Esports Series, under the “sport shooting” category. The IOC previously said it would not feature violent shooting games, but Fortnite appears to be family-friendly enough. (Tom Phillips / Eurogamer)
- Former Yahoo CEO Marissa Mayer said she regretted acquiring Tumblr instead of a streaming video service like Hulu. Tumblr did lose 99.8% of its value by the time it was sold to Automattic for $3 million, so that seems about right. (Patrick Kulp / Tech Brew)
- New event-planning platform Partiful ditches social networks and email in favor of an SMS-based invite and reminder system. (Ann-Marie Alcántara / WSJ)
- Amazon made Inspire, its TikTok-like in-app shopping feed, available to all U.S. customers. (Aisha Malik / TechCrunch)
For more good tweets every day, follow Casey’s Instagram stories.
Send us tips, comments, questions, and your thoughts on AI coverage: [email protected] and [email protected].