Highlights:

  • Because it can examine vast amounts of data, Claude 2 is well suited for jobs such as summarizing lengthy legal documents that are challenging to complete with other language models.
  • The alliance also extends to the AWS Generative AI Innovation Center, a program Amazon announced earlier this year.

Amazon.com Inc. to invest up to USD 4 billion in Anthropic, a San Francisco startup developing the Claude 2 large language model.

The companies recently made the investment public. The agreement comes as Anthropic reportedly prepares to create Claude Next, a highly advanced foundation model that may be too hardware-intensive for even the most powerful supercomputers in use today. The funding from Amazon should make it simpler for the company to handle the model’s projected high development costs.

Andy Jassy, Chief Executive Officer of Amazon, said, “We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short- and long-term, through our deeper collaboration.”

Former OpenAI LP researchers founded Anthropic in 2021 with the goal of creating generative AI software. Its most recent and most sophisticated model, Claude 2, is designed to compete with OpenAI’s GPT-4. It can create marketing copy, solve math problems, and translate natural-language instructions into computer code.

One of Claude 2’s key characteristics is its ability to process prompts of up to 100,000 tokens. In AI development, a token is a small chunk of text, such as a few characters or part of a word, that the model processes as a single unit. Because it can examine vast amounts of data, Claude 2 is well suited for jobs such as summarizing lengthy legal documents that are challenging to complete with other language models.
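To put the 100,000-token figure in perspective, here is a minimal back-of-the-envelope sketch in Python. It relies on the common rule of thumb of roughly four characters per token for English text; the page count and characters-per-page figures are illustrative assumptions, not numbers from either company.

```python
# Minimal sketch: why a 100,000-token context window matters for long documents.
# Assumption: ~4 characters per token for English text (a rough rule of thumb;
# real tokenizers, including Anthropic's, will vary).
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW = 100_000  # Claude 2's maximum prompt length in tokens

def estimate_tokens(text: str) -> int:
    """Rough token-count estimate for a piece of English text."""
    return len(text) // CHARS_PER_TOKEN

# An illustrative 180-page legal document at roughly 2,000 characters per page:
document = "x" * (180 * 2_000)
tokens = estimate_tokens(document)
print(tokens, tokens <= CONTEXT_WINDOW)  # ~90,000 tokens -- fits in a single prompt
```

Under those assumptions, a document of a few hundred pages can be handed to the model in a single prompt rather than being split into pieces and summarized section by section.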

As part of the newly announced deal, Amazon will pay USD 1.25 billion for a minority stake in Anthropic. The terms of the agreement allow the cloud and online retail giant to invest an additional USD 2.75 billion in the future. The companies did not disclose the valuation at which the investment was made.

The capital round builds on a relationship established in 2021, when Anthropic signed up as a customer of Amazon’s cloud division, Amazon Web Services Inc. AWS made Claude 2 accessible earlier this year through its Amazon Bedrock generative AI service. Following this investment, the companies intend to broaden their collaboration into other areas.

Anthropic will run most of its workloads on AWS going forward. The Amazon division will become its “primary cloud provider for mission critical workloads,” including model development and AI safety research. To support that work, Anthropic will use instances powered by the cloud provider’s in-house AWS Trainium and AWS Inferentia chips.

The second iteration of the Trainium processor family is designed specifically to train AI models with more than 100 billion parameters. Companies can deploy up to 16 of these processors per AWS instance. The cloud giant claims that, compared with competing chips, Trainium processors can train AI models at a significant discount.

Anthropic will also use AWS’ Inferentia chips, which are designed for inference, that is, running previously trained models in production. AWS claims that Inferentia chips can reduce expenses by 70% while offering up to 230% more throughput per inference than competing silicon.

Anthropic will both use and contribute to the improvement of Amazon’s in-house AI chips. The companies intend to “collaborate in the development of future Trainium and Inferentia technology.”

The relationship between AWS and Anthropic will also grow on the go-to-market front. Anthropic has made a “long-term commitment” to provide access to its upcoming AI models through AWS’ Bedrock service. Joint customers will get early access to additional features, such as the ability to customize AI models.

Dario Amodei, Co-founder of Anthropic, said, “Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers. By significantly expanding our partnership, we can unlock new possibilities for organizations of all sizes, as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”

The alliance also extends to the AWS Generative AI Innovation Center, a program Amazon announced earlier this year. Through the program, AWS dispatches teams of machine learning specialists to help clients build generative AI applications. In addition, Anthropic’s models will be available to developers at parent company Amazon for use in product development initiatives.

John Blackledge, TD Cowen’s analyst, mentioned, “The agreement should further bolster AMZN’s offerings both at the Bedrock and application layer. Additionally, we view the deal as demonstrative of AWS’ hardware capabilities, namely for companies looking to utilize LLMs, as Anthropic now plans to run a majority of workloads on AWS while utilizing their Trainium and Inferentia chips for building, training, and deployment of future LLM versions.”

The new investment comes six months after it emerged that Anthropic was seeking to raise USD 5 billion by 2027. According to leaked internal documents, the company intends to use the funding to develop an advanced foundation model called Claude Next, which is anticipated to be ten times more capable than the most powerful AI systems available today.

According to the leaked documents, Claude Next will require processing hardware capable of executing ten septillion FLOPS, or floating-point operations per second. A septillion is a one followed by 24 zeros. That is far larger than a quintillion, a one followed by 18 zeros, which is the scale at which the floating-point performance of today’s fastest supercomputers is measured.
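As a rough point of comparison, and only a back-of-the-envelope sketch, the arithmetic looks like this; it assumes the leaked figure refers to sustained operations per second and that today’s fastest supercomputers sit at the exascale, around one quintillion FLOPS.

```python
# Back-of-the-envelope comparison of the leaked Claude Next compute figure with
# today's exascale supercomputers. Assumptions: "ten septillion FLOPs" means
# sustained floating-point operations per second, and exascale machines deliver
# roughly 1e18 FLOPS.
claude_next_flops = 10 * 10**24   # ten septillion
exascale_flops = 10**18           # roughly one quintillion (exascale)

print(f"{claude_next_flops / exascale_flops:,.0f}x")  # about 10,000,000x
```

In other words, the leaked requirement works out to roughly ten million times the floating-point throughput of a single exascale machine.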

It is not yet known whether Anthropic will need ten septillion FLOPS of computing power to train Claude Next or to handle users’ inference workloads. In either case, building infrastructure that can operate at such speeds will likely be expensive. With Amazon’s new funding, Anthropic should have an easier time achieving its AI development objectives.