Highlights:

  • Imbue trains models through purpose-built datasets designed to nurture reasoning abilities.
  • The theoretical underpinnings of deep learning are also a focus of Imbue’s research.

Imbue, a startup developing language models optimized for reasoning, recently announced the close of a $200 million funding round.

The Series B round values Imbue at $1 billion. Participants included Nvidia Corp. and the Astera Institute, a nonprofit that funds innovative research projects, as well as Cruise LLC Chief Executive Kyle Vogt, Notion Labs Inc. co-founder Simon Last and several other backers.

According to Imbue, it is creating large language models “tailor-made for reasoning.” As the business explained: “Robust reasoning is necessary for effective action. It involves the ability to deal with uncertainty, to know when to change our approach, to ask questions and gather new information, to play out scenarios and make decisions, to make and discard hypotheses, and generally to deal with the complicated, hard-to-predict nature of the real world.”

The company's models have more than 100 billion parameters, the configuration settings that determine how a neural network interprets data. By comparison, the largest version of Llama 2, the language model Meta Platforms Inc. released in July, has 70 billion parameters.
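As a rough illustration of why parameter counts grow so large (this is a generic sketch, not Imbue's architecture), the parameters of even a single fully connected layer scale with the product of its input and output widths:

```python
def linear_layer_params(n_in: int, n_out: int) -> int:
    """Count parameters in one fully connected layer:
    an n_in x n_out weight matrix plus one bias per output unit."""
    return n_in * n_out + n_out

# A single hypothetical 8192 -> 8192 layer already holds ~67 million parameters;
# stacking many such layers is how models reach the tens of billions.
print(linear_layer_params(8192, 8192))  # 67117056
```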

Imbue trains its models on datasets it built specifically to develop reasoning abilities. According to the company, training runs on a server cluster equipped with 10,000 of Nvidia's most advanced H100 graphics processing units. The H100 can process large language models up to 30 times faster than the chipmaker's previous fastest GPU.

Imbue has also invested in specialized development tools and infrastructure to support its engineering work. One such tool is CARBS, an application the company's researchers detailed in June. It simplifies the task of optimizing a neural network's hyperparameters, the settings that control how quickly and accurately the network learns.
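To make "optimizing hyperparameters" concrete, here is the simplest possible version of the idea: exhaustively try candidate settings and keep the best-scoring one. This is a naive grid search, not CARBS's method; the search space and scoring function below are invented for illustration.

```python
from itertools import product

def grid_search(train_and_score, space):
    """Try every combination of hyperparameter values; keep the best score."""
    names = list(space)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(space[n] for n in names)):
        cfg = dict(zip(names, values))
        score = train_and_score(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy objective standing in for an expensive real training run.
space = {"lr": [1e-4, 3e-4, 1e-3], "batch_size": [32, 64, 128]}
score = lambda cfg: -abs(cfg["lr"] - 3e-4) - abs(cfg["batch_size"] - 64) / 1000
best, _ = grid_search(score, space)
print(best)  # {'lr': 0.0003, 'batch_size': 64}
```

In practice every trial means a full training run, which is why tools like CARBS that search more intelligently than exhaustive enumeration matter at large scale.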

The theoretical underpinnings of deep learning are also a focus of Imbue's research. Self-supervised learning, the company says, is one of its priorities in this area.

Historically, artificial intelligence models were trained on labeled datasets, files annotated with contextual information that helps neural networks learn more effectively. A self-supervised AI model, in contrast, can be trained on unlabeled data that lacks such annotations.
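One common way self-supervision works for language models, sketched minimally here, is next-token prediction: the raw text supplies its own labels, because each token serves as the target for the tokens that precede it, so no human annotation is required.

```python
def next_token_pairs(tokens):
    """Turn an unlabeled token sequence into (context, target) training pairs:
    each token is the label for the tokens before it."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

pairs = next_token_pairs(["the", "model", "predicts", "words"])
# The sentence itself provides three labeled training examples:
# (["the"], "model"), (["the", "model"], "predicts"), ...
```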

Imbue uses its large language models to power automation applications known as agents. Most of these agents, the company says, are built to automate coding tasks, and Imbue's engineers use some of them in their daily work.

As Imbue wrote in its funding announcement, “Because programming problems are so objective — the code either passes the tests or doesn’t — such problems form a relatively ideal testbed for more generalized reasoning abilities, allowing us to understand if we are making meaningful improvements in our underlying systems.”
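The pass/fail objectivity Imbue describes can be sketched in a few lines: a candidate solution (here, a hypothetical generated function) is simply run against a test suite, yielding an unambiguous signal with no human judgment involved.

```python
def passes_tests(candidate, tests):
    """Binary, objective signal: the candidate either passes every test or it doesn't."""
    try:
        for args, expected in tests:
            if candidate(*args) != expected:
                return False
        return True
    except Exception:
        # Crashes count as failures too.
        return False

# Hypothetical test suite for an "add two numbers" task.
tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
print(passes_tests(lambda a, b: a + b, tests))  # True
print(passes_tests(lambda a, b: a * b, tests))  # False
```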

The company said it is not currently commercializing its AI-powered coding agents. In the long run, however, Imbue intends to make its technology available to the general public. Its large language models will be used not only for coding tasks but also to “enable anyone to build robust, custom AI agents that put the productive power of AI at everyone’s fingertips.”