Highlights:

  • Large language models represent a class of artificial intelligence algorithms that harness extensive datasets to achieve human speech recognition, translation, prediction, and content generation functions.
  • Context.ai’s platform assists developers and businesses in gaining a deeper understanding of their LLM product performance by analyzing user transcripts of AI conversations.

Recently, Context.ai, a product analytics company that offers a platform for understanding applications powered by large language models, has secured USD 3.5 million in funding.

The funding round was jointly led by Google Ventures, the venture capital arm of Alphabet Inc., and Tomasz Tunguz from Theory Ventures, with additional participation from 20SALES.

Large language models represent a class of artificial intelligence algorithms that harness extensive datasets to achieve human speech recognition, translation, prediction, and content generation functions. They can also respond using humanlike conversational language, like the immensely popular chatbot ChatGPT from OpenAI LP. Many businesses have integrated LLMs into their applications, enabling users to “talk” to their products and access their data.

Context.ai offers an analytics service that gives customers insights into LLMs’ performance in discussing various topics. It assists in evaluating product performance and facilitates debugging by providing a comprehensive view of user interactions.

Alex Gamble, the Co-founder and Chief Technology Officer of Context.ai, said, “The current ecosystem of analytics products are built to count clicks. But as businesses add features powered by LLMs, text now becomes a primary interaction method for their users. Making sense of this mountain of unstructured words poses an entirely new technical challenge for businesses keen to understand user behavior.”

Context.ai’s platform assists developers and businesses in gaining a deeper understanding of their LLM product performance by analyzing user transcripts of AI conversations. It performs topic clustering and keyword analysis to identify the most prominent and discussed subjects. This approach aids in analyzing user preferences and requirements from the system, enabling more precise tuning and enhanced support to meet user needs effectively.
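To make the keyword-analysis idea concrete, here is a minimal, illustrative sketch of counting the most frequent keywords across user transcripts. This is not Context.ai's actual pipeline (which is not public); the stop-word list and sample transcripts are invented for the example, and a production system would use far more sophisticated clustering.

```python
from collections import Counter
import re

# Hypothetical stop-word list; a real system would use a full list
# (and embeddings-based topic clustering rather than raw counts).
STOP_WORDS = {"the", "a", "an", "to", "i", "me", "my", "is", "it",
              "not", "how", "do", "can", "you"}

def top_keywords(transcripts, n=3):
    """Count non-stop-word tokens across all user messages."""
    counts = Counter()
    for text in transcripts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in STOP_WORDS:
                counts[token] += 1
    return counts.most_common(n)

# Invented sample transcripts for illustration.
transcripts = [
    "How do I reset my password?",
    "Password reset link is not working",
    "Can you help me export my billing data?",
]
print(top_keywords(transcripts))
# Surfaces "reset" and "password" as the dominant keywords.
```

Even this naive frequency count hints at how aggregated transcripts can reveal what users most often ask a product about.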

Additionally, the platform offers sentiment analysis, allowing for assessing user satisfaction with responses on a topic-by-topic basis. This gives customers insights into user interactions with the product, their objectives, the product’s alignment with user needs, and areas where it may fall short in meeting those needs.
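The topic-by-topic sentiment idea can be sketched with a tiny lexicon-based scorer that averages message scores per topic. This is only an illustration of the concept: the lexicon, topic labels, and messages below are invented, and real sentiment analysis relies on trained models rather than word lists.

```python
# Hypothetical sentiment lexicons for the sketch.
POSITIVE = {"great", "thanks", "helpful", "perfect", "love"}
NEGATIVE = {"wrong", "broken", "useless", "slow", "confusing"}

def sentiment(text):
    """Score a message: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def sentiment_by_topic(messages):
    """messages: list of (topic, text) pairs; returns topic -> mean score."""
    totals, counts = {}, {}
    for topic, text in messages:
        totals[topic] = totals.get(topic, 0) + sentiment(text)
        counts[topic] = counts.get(topic, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

# Invented sample messages, already tagged with a topic.
messages = [
    ("billing", "this invoice page is broken and slow"),
    ("billing", "still wrong after the update"),
    ("search", "great results very helpful thanks"),
]
print(sentiment_by_topic(messages))
# → {'billing': -1.5, 'search': 3.0}
```

Aggregating scores per topic, as above, is what lets a team see that users are frustrated with one area of the product while happy with another.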

Product developers can take proactive measures to mitigate potential challenges in the behavior of the LLM, such as minimizing the risk of mishandling sensitive topics or delivering incorrect responses. For instance, a model may deviate from its intended behavior, produce inaccurate responses, or engage in conflicts with customers.

Henry Scott-Green, the Co-founder and Chief Executive Officer of Context.ai, said, “It’s hard to build a great product without understanding users and their needs. Context.ai helps companies understand user behavior and measure product performance, bringing crucial user understanding to developers of LLM-powered products.”

The platform is model-agnostic and supports a wide array of foundational models, enabling users to seamlessly integrate its software development kit with the LLM of their choice for analysis.

The company announced that it intends to utilize the investment to expand its engineering team to enhance its suite of features and tools available for enterprise customers.

Context.ai serves several prominent companies as its customers, including Cognosys, an AI-agent service; Lenny’s Newsletter, a weekly advice column; Juicebox, an AI-powered people discovery platform; and ChartGPT, an AI-powered charting solution.