Five months ago, a small San Francisco startup called OpenAI upended the tech industry — and the rest of the world — when it released ChatGPT. The app showed millions of people the immense capabilities of generative AI, how it can do everything from write original poetry to churn out working lines of code, all in a matter of seconds.

It quickly became clear that AI technology like ChatGPT had the potential not only to radically change the way we consume and create information but to transform every aspect of our daily lives. And it threatened Google’s business to its core.

It’s against that backdrop that Google invited journalists like me to visit Shoreline Amphitheatre in Mountain View, California, for the company’s much-anticipated annual I/O developer conference. The keynote presentation on Wednesday was Google’s chance to recapture the excitement it lost to OpenAI and the startup’s main investor, Microsoft, which ate Google’s lunch in February by releasing AI-powered search features in Bing and a corresponding chatbot, BingGPT.

Google is now facing the possibility of losing its dominance in the search market and its reputation as a leader in AI, a technology many feel is as revolutionary as the mobile phone or the internet itself. Now, in order to reclaim its place as the company leading the charge on this rapidly developing technology, Google is putting AI into virtually all of its most popular products — despite the technology’s known flaws.

It was clear from the start of Google’s big event on Wednesday that AI was the star. Before executives presented onstage, electronic artist Dan Deacon played clanging music generated by Google’s AI technology as he recited poetic lyrics with psychedelic-looking AI-generated illustrations behind him. After Deacon wrapped his musical AI mystery tour, Google CEO Sundar Pichai took the stage.

“Seven years into our journey, we are at an exciting inflection point. We have an ability to make AI even more helpful,” he said onstage at Wednesday’s presentation. “We are reimagining all our core products, including search.”

But underneath the buzz was an air of nervousness about what Google is about to unleash on the world. In the coming weeks, billions of people will see generative AI in everything from Google search to Gmail to services powered by Google’s cloud technology. The update will, among other things, let people use AI to compose emails in the Gmail mobile app, create new Google Docs presentations with AI-generated images based on a few keywords, and text their friends on Android in Shakespearean-style prose spun up by AI. While these new generative AI applications could supercharge Google’s products and give better productivity and creativity tools to the masses, the technology is also prone to error and bias, and if executed poorly, it could damage Google’s core mission to serve its users reliable information.

Of the many ways Google is changing its apps with AI, search is the most meaningful. In the coming weeks, a limited group of beta testers will experience a new, more visual Google search experience. It looks familiar in many ways to the old Google search, but it works in some fundamentally different ways.

In the new Google search, when you enter a search query, you don’t just get a long list of blue links. Instead, Google will show you a few results in gray boxes before serving up a large, AI-generated block of text inside a light-green box that takes up a majority of the screen. This result is supposed to give you the information you’re looking for, gathered from disparate sources across the web and written in an approachable tone. To the right of the AI-generated result, you’ll also see a few links most relevant to your search. There are also some green boxes underneath the AI result, in which Google prompts you to go deeper by asking suggested follow-up questions, or come up with your own. And if you click into the actual text of the AI result, you’ll find links to the websites that Google pulled the information from. If you don’t like the new search experience, you can toggle back to the old one.

It’s by far the most drastic change to Google’s search engine, which has been the backbone of the web for over 20 years. In fact, Google seems to be moving away from the term “search” and toward “converse.”

Google’s AI search runs, in part, on a new, underlying technical model called PaLM 2, which was also released on Wednesday. While it works much like Google’s old model, PaLM, Google says it’s better at language, reasoning, and code, and can run more quickly. Building on that technology, Google’s new search generative experience, or SGE, is supposed to be more conversational, more natural, and better at answering complicated questions than regular search. Google says the new search experience can help people with everything from planning a vacation to answering complex questions about the news of the day.

When I briefly tested SGE at Google’s offices on Tuesday, I asked a series of questions about whether WhatsApp was listening to my conversations, a topic about which Elon Musk recently raised questions, and it gave pretty reasonable answers.

First, the new Google tech told me that WhatsApp’s messages are secured with end-to-end encryption, a basic fact I could have found by doing a traditional Google search. But when I asked a follow-up question about whether Musk was right to question our trust in WhatsApp, it also gave some extra context that I might not have seen in a traditional search. SGE mentioned a known bug in Android that likely contributed to the confusion about when WhatsApp is accessing people’s microphones. But it also wrote that while WhatsApp is encrypted, it’s owned by Meta, a company that “historically monetizes personal information for advertisers,” and under certain circumstances, like political investigations, complies with government requests for data about you. Those are all correct statements and could potentially be relevant background information if I were to write an article on the topic.

[Screenshot: Google’s new AI search experience, which it’s starting to test in the coming weeks. Source: Google]

In my few minutes using the tool, I could see the potential of a more conversational version of search that stitches together disparate data sources to give me a fuller picture of whatever I’m writing about. But it also presents major risks.

Soon after its release in March, Google’s experimental AI chatbot, Bard, was generating incorrect or made-up answers. These types of errors, known in the AI field as “hallucinations” because the system essentially invents answers it doesn’t know, are a common issue with large language model chatbots.

The threat of users encountering these hallucinations could damage Google’s reputation and its ability to deliver on its core mission of reliably organizing the world’s information. After Bard incorrectly answered a factual question about the history of telescopes in one of its first public demos, Google lost $100 billion in market value. And although Bard was built with safeguards to avoid producing polarizing content, outside researchers found that with a little goading, it could easily spit out antisemitic conspiracy theories and anti-vaccine rhetoric.

In my demo on Tuesday, Google VP of Search Liz Reid said that Google has trained SGE to be less risky than Bard, since it’s a core part of Google’s flagship product and should have a lower margin of error.

“We need to push more on factuality, even if it means sometimes you don’t answer the question,” said Reid.

Google also says its new AI search engine will not answer queries when it’s not confident about the trustworthiness of its sources or when it comes to certain subject matters, including medical dosage advice, information about self-harm, and developing news events. Google says it’s gathering feedback from users, and the company emphasized that it’s still being refined as it gets rolled out through Google’s new experimental search product group, Search Labs.

In the coming weeks, as early adopters pressure-test Google’s new search experience and the other AI features in other Google products, they may wonder if these products are ready for primetime, and whether the company is rushing these public AI experiments. Some Google employees have been outspoken about these same concerns.

But Google, whose mission is to organize the world’s information and make it universally accessible, now finds itself in the unfamiliar position of hurrying to keep pace with its competitors. If it doesn’t get these new features out, Microsoft, OpenAI, and others could eat away at its core business. And at this point, the generative AI revolution seems all but inevitable. Google wants everyone to know it’s no longer holding back.

A version of this story was first published in the Vox technology newsletter.
