• Enterprise customers can use the Snowflake Data Cloud to customize large language models, training and fine-tuning generative AI chatbots without migrating their data.
• Snowflake’s cloud data platform provides industry-specific solutions for advertising, media, finance, healthcare, sciences, retail, and technology.

During Snowflake Summit 2023, Snowflake Inc. and Nvidia Corp. announced a partnership that lets customers build custom generative artificial intelligence applications more quickly and easily, using their own data within a secure cloud environment.

Many businesses view generative AI as an assistant that can supplement employee knowledge by providing answers to queries, conducting research, and summarizing details about data already available within the organization. This means that a significant amount of company-specific internal data must be made available to AI models.
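The pattern described above, an assistant that answers questions from documents the organization already holds, usually starts with retrieving the most relevant internal text for a query. The following is a minimal, self-contained sketch of that retrieval step; it is an illustration only, not part of the Snowflake or Nvidia offering, and it substitutes simple keyword overlap for the vector embeddings and LLM a real deployment would use. The documents and query are made up for the example.

```python
# Minimal sketch of retrieval over internal company documents -- the first
# step behind "AI assistants that answer questions about data the
# organization already has". Plain keyword overlap stands in for the
# embedding search a production system would use.

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of word tokens."""
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k documents sharing the most word tokens with the query."""
    q = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:top_k]

# Hypothetical internal documents, invented for this illustration.
docs = [
    "Q3 revenue grew 12 percent driven by enterprise subscriptions.",
    "The refund policy allows returns within 30 days of purchase.",
    "Onboarding checklist: laptop, badge, VPN access, security training.",
]

best = retrieve("what is our refund policy for returns", docs)
print(best[0])  # the policy document scores highest on shared words
```

In a full assistant, the retrieved passages would then be handed to an LLM as context so its answer is grounded in company data rather than only its general training.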

Enterprise customers can customize large language models with proprietary data stored in the Snowflake Data Cloud, training and fine-tuning generative AI chatbots without moving the data. The data stays entirely under the customer’s control where it resides, upholding security and privacy while also reducing administrative costs.

Through the partnership, Snowflake customers will gain access to LLM foundation models and training capabilities via Nvidia NeMo, an end-to-end enterprise framework for developing, customizing, and deploying generative AI models.

An LLM foundation model, according to Manuvir Das, vice president of enterprise computing at Nvidia, is comparable to a newly hired employee with fundamental knowledge and skills, such as the ability to answer general questions, compose essays, do math, and write code. But a business needs a customized LLM that can be loaded with detailed business knowledge and up-to-date information to stay current.

Das said, “From the company’s point of view, what you’d really like to have is not just this new hire that’s straight out of college but an employee that’s been working at your company for 20 years. They know about the business of your company, they know about the customer’s previous interactions and they have access to databases. The difference is really the data the company has.”

Combining the Data Cloud and NeMo lets businesses develop tailored LLMs that learn skills specific to a company’s domain of knowledge and expertise and safely access data sources within its cloud. NeMo also lets businesses create, manage, and roll out generative AI applications serving a variety of use cases wherever the data is located, lowering latency and cost.

Nvidia founder and Chief Executive Jensen Huang said, “Together, Nvidia and Snowflake will create an AI factory that helps enterprises turn their own valuable data into custom generative AI models to power groundbreaking new applications — right from the cloud platform that they use to run their businesses.”

Domain-specific data has become increasingly important when creating custom generative AI models, particularly in specialized industries. Snowflake’s unified cloud data platform also provides industry-specific data clouds to help deliver solutions for a variety of industries, including advertising, media, finance, healthcare, sciences, retail, and technology. Thanks to the partnership, these same sectors can now use NeMo alongside those data clouds to deliver unique generative AI apps.

Alexander Harrowell, principal analyst for advanced computing for AI at technology research firm Omdia, said, “More enterprises than we expected are training or at least fine-tuning their own AI models, as they increasingly appreciate the value of their own data assets. Similarly, enterprises are beginning to operate more diverse fleets of AI models for business-specific applications.”

Snowflake customers will also have access to Nvidia’s NeMo Guardrails, software that helps keep generative AI models safe and accurate while in use. Although LLMs are a practical assistive technology, AI chatbots have occasionally been known to go off script, give inappropriate answers, or make outright mistakes, a phenomenon known in the industry as “hallucinations.”

While some research has gone into reducing hallucinations and making generative AI more secure and predictable, more work remains in these areas. With NeMo Guardrails, customers can set their own boundaries for their AI models, reducing unexpected behavior.
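The core idea of a guardrail can be shown in a few lines: check a chatbot’s draft answer against customer-defined boundaries before it reaches the user. The actual NeMo Guardrails toolkit is far more capable, defining conversational rails in dedicated configuration files rather than hard-coded checks; the toy sketch below, with an invented blocklist policy, only illustrates the concept of bounding model output.

```python
# Toy illustration of the guardrail concept: screen a model's draft answer
# against customer-defined boundaries before showing it to the user. This is
# NOT how NeMo Guardrails is configured or used; it is a conceptual sketch
# with a hypothetical blocked-topics policy.

BLOCKED_TOPICS = {"medical advice", "legal advice"}  # hypothetical policy

def apply_guardrails(draft: str) -> str:
    """Return the draft answer, or a safe refusal if it crosses a boundary."""
    lowered = draft.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I'm not able to help with that topic."
    return draft

print(apply_guardrails("Here is some medical advice: take two aspirin."))
print(apply_guardrails("The Q3 report shows revenue grew 12 percent."))
```

Real guardrail systems sit between the application and the model in both directions, screening user inputs as well as model outputs, which is how they help reduce the off-script behavior described above.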