The latest advances in AI (GPT, LLMs, transformers, etc.) are like a Nokia phone in the '90s – everyone could see the appeal, but no one could predict all that it would lead to. The tech world has a new obsession with Large Language Models (LLMs), GPTs, and AI in general.
Editor’s Note:
Guest author Jonathan Goldberg is the founder of D2D Advisory, a multi-functional consulting firm. Jonathan has developed growth strategies and alliances for companies in the mobile, networking, gaming, and software industries.
Almost all of our news feeds are filled with AI content. We know of many software startups that are being told they have to have GPT-something in their product or they will not get funded. And then, of course, the general media is consumed with stories about AI alarmism and various billionaires sharing their GPT thoughts. For our part, we have read a very large number of papers, blog posts, and even Stanford's 300+ page State of AI report.
Despite all of that, we are not convinced.
There is no question that LLMs and transformers are important technically. The latest developments mark a major breakthrough in software capabilities. That being said, we are not sure anyone really knows what to do with those capabilities yet.
A few weeks ago, we spoke at the AI Edge Summit, where the organization's Chairman, Jeff Bier, said something that catalyzed our view of AI and GPT. To paraphrase, he said ChatGPT is like seeing the first Nokia phone in the 1990s. We had all heard about mobile phones before that, and these Nokia devices were, for many, the first phones that looked like something we would actually want to buy. But at the same time, no one looking at the device then could have predicted all the things that would eventually stem from it – 3G, mobile data, smartphones, the iPhone, apps, and a complete reorganization of how we structure our time and daily activities.
That seems like a good analogy for ChatGPT. It is useful – the first "AI" application that is useful to ordinary people – but not something that is going to change their lives too meaningfully. For those who have been watching technology for a long time, it is clear that LLMs and transformers have immense potential; we may very well just be scratching the surface of what they can provide.
This has a few implications for what happens next:
- We are very much in the middle of a massive hype cycle. Absent some incredible product surprise, this cycle will eventually fade and give way to a trough of doubt and despair. It is no coincidence that the media's Eye of Sauron has turned so intently on AI just as the rest of the Bubble is deflating. As always, the oracles at The Onion said it best.
- No one really knows what all of this means. Maybe somewhere there is a rogue genius sitting in her cubicle or his mother's basement with a vision of 1,000 suns pointing the way forward. For everyone else, the future is much less certain. There are plenty of people who argue (very quietly right now) that AI is a dead end, with ChatGPT as just the latest version of chatbots (remember when those were the hot thing? That was only a few years ago). There are also AI maximalists currently building their Skynet-proof bunkers in preparation for the imminent AI apocalypse, because LLMs are just that awesome. Of course, the reality is somewhere in between.
- We need to remember that AI is just software. These latest new tools are very powerful, but for the foreseeable future we should mostly just expect some aspects of our interaction with software to improve. Developers definitely seem to be enjoying huge benefits from tools like Microsoft's GitHub Copilot. Everyone else can probably just expect better-written spam email for the time being.
We do not mean to be pessimistic; we are aiming for realistic. From what we can tell, LLMs and GPT offer huge potential to tackle really large data sets. Critically, transformers are probably going to allow us to interrogate problems that previously were too big to approach, or data problems we had not even realized existed. Moreover, there is the tantalizing possibility that these gains will be self-reinforcing, a Moore's Law for data analysis. This is important, albeit still largely unexplored.
Finally, we think everyone needs to take a more sober approach to the ethics and societal implications of these tools. We do not usually cover this subject, and would skip over it here except for the fact that almost everyone engaged in these advances seems to be blithely (maybe deliberately) avoiding the subject.
We are likely months away from the ability to create highly realistic videos of anything. Anything. That is going to mess with a lot of people's heads, and maybe we should take a more constructive approach to preparing the world at large for what that means. At the same time, the alarmists calling for a complete end to AI need to face the reality that the ship has sailed.
All in all, we are deeply excited by these latest developments. After years of incremental SaaS improvements being hailed as “technology advances,” it is exciting to have a genuinely compelling new capability before us. We just wish everyone would take a breath.