A leaked memo from a Google employee makes a bold claim that’s taking hold in Silicon Valley and beyond: Big Tech’s advantage in artificial intelligence is shrinking quickly.
The memo, published Thursday on the website of the tech research firm SemiAnalysis, quickly became a top story on AI forums, including the popular Hacker News message board and Reddit’s /r/MachineLearning community, which has more than 2.6 million members. It also sparked commentary from some of the biggest names in AI.
A Google spokesperson confirmed the memo was authentic but said it was the opinion of one senior employee, not necessarily the company as a whole.
“We’ve done a lot of looking over our shoulders at OpenAI,” the memo said. “But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI.”
“I’m talking, of course, about open source. Plainly put, they are lapping us,” it continued. “While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly.”
Tech companies spent years developing AI in a behind-the-scenes arms race, only to see OpenAI burst onto the scene with the release of ChatGPT. Other generative AI systems, which can create content based on a user’s prompt, have skyrocketed in popularity and quality. OpenAI announced a massive deal with Google competitor Microsoft in January. In response, Google rushed to release its counterpart program, Bard.
And while those companies are working to integrate AI into many parts of their existing businesses, that doesn’t mean the technology will further solidify their grip on the tech industry.
The Google employee wrote that both Google and OpenAI have neglected the growing community of programmers and companies that rely on publicly available AI code and models, known as “open source,” to create smaller but more efficient projects.
Some experts and industry analysts agreed with the memo’s warning. Andrej Karpathy, an OpenAI founder who rejoined the company in February, wrote on Twitter Saturday that the surge of small AI companies competing with tech giants was starting to massively shake up the industry.
That ecosystem is “experiencing early signs of a cambrian explosion,” he wrote, a reference to a period of Earth’s history, more than 500 million years ago, marked by the rapid diversification of life.
Open source refers to software code that is released publicly for anyone to build on or customize. While most large tech companies guard their internal code as a tightly held corporate secret, open source communities share their work, tinker with it and collaborate with one another. Popular open source programs include the Firefox browser and the VLC media player.
The open source AI community received a giant and unexpected gift earlier this year, when an unknown person leaked LLaMA, Facebook owner Meta’s answer to the AI models behind ChatGPT, onto the message board 4chan. Meta had not formally released the model to the public, making it available only to approved researchers.
That leak gave open source AI programmers a foundation for their own, more customized projects.
“TL;DR: AI can’t be stopped because anyone can play with it, and the whole discussion of ‘guardrails’ and ‘moratoria’ is academic,” tweeted Pedro Domingos, a professor emeritus of computer science at the University of Washington.
A key question is how much data is needed to train an AI system, a requirement once considered a major hurdle for smaller developers.
Simon Willison, a programmer, tech analyst and blogger, told NBC News that while OpenAI’s flagship model, GPT-4, still leads the industry, not every AI product needs to be built on the massive troves of data that GPT-4 required.
“I don’t think I need something as powerful as GPT-4 for a lot of things that I want to do,” Willison said. “I want models that can do the thing that Bing and Bard does, where if it doesn’t know something, it can run a search.”
“The open question I have right now is, how small can the model be while still being useful?” he said. “That’s something which the open source community is figuring out really, really quickly.”
Mark Riedl, a computer scientist and professor at Georgia Tech, said that major tech companies ceding their AI advantage to individual programmers and small companies would likely democratize the technology for society’s benefit, but could also lead to malicious use.
“Largely, I think people are trying to do good with these things, make people more productive or are making experiences better,” he said. “You don’t want a monopoly, or even a small set of companies kind of controlling everything. And I think you’ll see a greater level of creativity by putting these tools into the hands of more people.”
“It really now becomes the question of what are people going to use these things for,” Riedl said. “There’s really no restrictions on making specialized versions of models that are designed specifically to create toxic material, or misinformation, or to spread hate on the internet.”