Snowflake Inc. and Nvidia Corp. announced a team-up today during Snowflake Summit 2023 that will provide customers a way to build custom generative artificial intelligence applications more easily using their own data within a secure cloud environment.
Many enterprises see generative AI as a type of assistant, using it to augment employee knowledge by having it answer questions, provide research and summarize information about data already present within a company. That means AI models require access to a tremendous amount of internal company-specific data.
Using the Snowflake Data Cloud, enterprise customers can feed proprietary data into large language models for customization, allowing generative AI chatbots to be trained and fine-tuned without moving that data. The data remains fully governed where it resides, maintaining security and privacy while also reducing overhead.
Through the partnership with Nvidia, Snowflake customers will be able to access LLM foundation models and training capabilities through Nvidia NeMo, an end-to-end enterprise framework for building, customizing and deploying generative AI models.
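To make the idea of "customizing" a foundation model concrete, here is a minimal sketch of fine-tuning a small open model on a file of internal documents. The article includes no NeMo code, so this uses the Hugging Face Transformers library as a generic stand-in; the model choice, file path and hyperparameters are placeholders, not the actual Snowflake-Nvidia workflow.

```python
# Minimal sketch: continue pre-training a small causal LM on company text.
# "company_docs.txt" is a hypothetical export of internal documents.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "company_docs.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-llm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, enterprise frameworks such as NeMo layer parameter-efficient tuning, distributed training and deployment tooling on top of this basic loop, but the core step is the same: adapt a general-purpose model with domain text.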
Manuvir Das, vice president of enterprise computing at Nvidia, likened an LLM foundation model to a newly hired employee with basic knowledge and skills, such as the ability to answer general questions, write an essay, do math and produce code. But what a company really wants is a customized LLM endowed with specific knowledge about the company and kept supplied with up-to-date information.
“From the company’s point of view, what you’d really like to have is not just this new hire that’s straight out of college but an employee that’s been working at your company for 20 years,” Das said. “They know about the business of your company, they know about the customer’s previous interactions and they have access to databases. The difference is really the data the company has.”
Combining the Data Cloud and NeMo allows companies to train customized LLMs that gain skills specific to a company’s particular domain of knowledge and expertise, while pulling information from data sources held securely within the boundaries of its cloud. NeMo also lets the company build, manage and deploy generative AI applications that serve a variety of use cases where the data resides, reducing both cost and latency.
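As a rough illustration of the data side of that pipeline, the sketch below reads proprietary records from Snowflake with the snowflake-connector-python library and writes them out as prompt/completion pairs a fine-tuning job could consume. The account credentials, table and column names are hypothetical, and the integration described in the article keeps this step inside the Data Cloud rather than exporting the data.

```python
# Minimal sketch: pull hypothetical support-ticket data from Snowflake
# and format it as JSONL training examples. All identifiers are placeholders.
import json
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder account identifier
    user="my_user",
    password="my_password",
    warehouse="ml_wh",
    database="enterprise_db",
    schema="support",
)

rows = conn.cursor().execute(
    "SELECT question, resolution FROM support_tickets LIMIT 1000"
).fetchall()

# Write simple prompt/completion pairs for a downstream fine-tuning job.
with open("training_data.jsonl", "w") as f:
    for question, resolution in rows:
        f.write(json.dumps({"prompt": question, "completion": resolution}) + "\n")

conn.close()
```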
“Together, Nvidia and Snowflake will create an AI factory that helps enterprises turn their own valuable data into custom generative AI models to power groundbreaking new applications — right from the cloud platform that they use to run their businesses,” said Jensen Huang, founder and chief executive of Nvidia.
The focus on domain-specific data has become increasingly keen when it comes to bespoke generative AI models, especially for companies in specialized industries. Snowflake’s unified cloud data platform also offers industry-specific data clouds to help deliver solutions for numerous sectors, including advertising, media, finance, healthcare, sciences, retail and technology. Now, with this partnership, those same industries can use NeMo to deliver custom generative AI apps in conjunction with those data clouds.
“More enterprises than we expected are training or at least fine-tuning their own AI models, as they increasingly appreciate the value of their own data assets,” said Alexander Harrowell, principal analyst for advanced computing for AI at technology research group Omdia. “Similarly, enterprises are beginning to operate more diverse fleets of AI models for business-specific applications.”
Snowflake customers will also gain access to Nvidia’s NeMo Guardrails, a software mechanism that helps ensure generative AI models remain accurate and secure in operation. Although LLMs are a powerful assistive technology, AI chatbots have been known to go off the rails and produce unwanted responses or outright errors, a phenomenon known in the industry as “hallucinations.”
Research into reducing hallucinations and making generative AI more predictable and secure is ongoing. With NeMo Guardrails, customers will be able to define their own boundaries for their AI models to reduce unexpected behavior.
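Here is a minimal sketch, using the open-source NeMo Guardrails library, of how such a boundary can be defined. The Colang flow, model settings and user message are illustrative only; real deployments typically load a fuller configuration directory (for example via RailsConfig.from_path()), and the exact API may differ by version.

```python
# Minimal sketch: a guardrail that refuses off-topic questions.
# The flow, model engine and messages are placeholders for illustration.
from nemoguardrails import LLMRails, RailsConfig

colang = """
define user ask off topic
  "what do you think about politics?"

define bot refuse off topic
  "I can only help with questions about our products and data."

define flow off topic
  user ask off topic
  bot refuse off topic
"""

yaml = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(colang_content=colang, yaml_content=yaml)
rails = LLMRails(config)

response = rails.generate(
    messages=[{"role": "user", "content": "What do you think about politics?"}]
)
print(response["content"])  # the bot responds with the predefined refusal
```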
Image: Nvidia