Highlights:

  • This fine-tuning will enable users to develop personalized models that might match or surpass the capabilities of GPT-4.
  • OpenAI has allowed fine-tuning of GPT-3 versions like davinci-002 and babbage-002 but plans to retire them soon.

OpenAI LP, the creator of ChatGPT, recently announced that users can now fine-tune the GPT-3.5 Turbo model with their own data.

The company claims that this enables users to produce unique models that might be on par with or even surpass GPT-4’s capabilities. In artificial intelligence, the process of fine-tuning involves taking a standard model, such as GPT-3.5 Turbo, and feeding it extra data to make it more focused and capable of handling very specific tasks.

Customers could, for example, develop a bot based on GPT-3.5 Turbo that is trained to respond reliably in a particular language or in clear, concise wording. After being trained on a particular knowledge base, such a bot could be used for customer or employee support.
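By way of illustration, training data for a chat model like GPT-3.5 Turbo is supplied as a JSON Lines file in which each line holds one short example conversation. The Python sketch below builds such a file; the company name, questions, answers and file name are invented for the example.

    # Minimal sketch of a fine-tuning data file for a support bot.
    # All names and example dialogues here are hypothetical.
    import json

    examples = [
        {"messages": [
            {"role": "system", "content": "You are a concise support assistant for Acme Corp."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
        ]},
        {"messages": [
            {"role": "system", "content": "You are a concise support assistant for Acme Corp."},
            {"role": "user", "content": "Where can I find my invoices?"},
            {"role": "assistant", "content": "Invoices are listed under Billing > History in your account."},
        ]},
    ]

    # Each training example becomes one JSON object per line (JSON Lines format).
    with open("support_bot_training.jsonl", "w") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")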

While OpenAI has long provided fine-tuning for GPT-3 variants like davinci-002 and babbage-002, it will be retiring those versions in the near future.

Customers who want to customize GPT-3.5 Turbo start from the base model, which the company says was trained on public internet data as it existed at the end of September 2021. They can then feed it their own data to refine its training.
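In practice, that amounts to uploading a training file and starting a fine-tuning job through the API. The sketch below assumes the module-style interface of the OpenAI Python library as it existed around this announcement (newer versions of the library use a client object instead); the API key and file name are placeholders.

    # Sketch of the fine-tuning workflow; assumes the 0.x-style openai library.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # 1. Upload the prepared JSON Lines training file.
    upload = openai.File.create(
        file=open("support_bot_training.jsonl", "rb"),
        purpose="fine-tune",
    )

    # 2. Start a fine-tuning job on top of the GPT-3.5 Turbo base model.
    job = openai.FineTuningJob.create(
        training_file=upload.id,
        model="gpt-3.5-turbo",
    )

    # The job runs asynchronously; its status can be polled until it finishes.
    print(job.id, job.status)

When the job completes, it returns a fine-tuned model identifier that can be used in place of the base model name in ordinary chat completion requests.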

Other applications for fine-tuned models include a bot trained to mimic a brand’s voice for consistency, or a code-generator bot that suggests snippets to developers. Fine-tuning can also condense text prompts, which speeds up calls to GPT-3.5 Turbo’s application programming interface and saves money. According to OpenAI, some customers could reduce their prompt sizes by up to 90%.
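As a hypothetical illustration of that prompt-shortening point: once the standing instructions have been fine-tuned into the model itself, each API call can carry a much smaller prompt. The fine-tuned model identifier below is a placeholder.

    # Calling a (hypothetical) fine-tuned model with a short prompt; the long
    # system instructions no longer need to be resent on every request.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    response = openai.ChatCompletion.create(
        model="ft:gpt-3.5-turbo-0613:acme::abc123",  # placeholder model ID
        messages=[{"role": "user", "content": "How do I reset my password?"}],
    )
    print(response.choices[0].message.content)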

OpenAI wrote in a blog post, “Since the release of GPT-3.5 Turbo, developers and businesses have asked for the ability to customize the model to create unique and differentiated experiences for their users. This update gives developers the ability to customize models that perform better for their use cases and run these custom models at scale.”

Earlier this year, OpenAI released GPT-3.5 Turbo, claiming it was best for non-chat-specific applications. It can handle 4,000 tokens at once, twice as many as its previous models. OpenAI defines “tokens” as word fragments that function roughly as syllables; for instance, the fragments “fan,” “tas” and “tic” are three tokens that make up the word “fantastic.” Tokens are central to fine-tuning.

OpenAI explains, “Before the API processes the prompts, the input is broken down into tokens. These tokens are not cut up exactly where the words start or end — tokens can include trailing spaces and even sub-words.”
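For readers who want to see how a given string splits into tokens, OpenAI’s open-source tiktoken library exposes the tokenizer used by GPT-3.5 Turbo; the exact split it produces may differ from the illustrative “fan”/“tas”/“tic” example above.

    # Inspecting how text is broken into tokens with the tiktoken library.
    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    token_ids = enc.encode("fantastic")

    # Decode each token ID back to its text fragment to see the split.
    pieces = [enc.decode([tid]) for tid in token_ids]
    print(len(token_ids), pieces)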

The cost to fine-tune GPT-3.5 Turbo is USD 0.0080 per 1,000 tokens for training, USD 0.0120 per 1,000 tokens for input usage, and USD 0.0160 per 1,000 tokens for chatbot output. With a training file of 100,000 tokens, or roughly 75,000 words, trained over three passes (epochs), someone looking to fine-tune the model could anticipate paying about USD 2.40.
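A back-of-the-envelope check of that figure, using the training rate and the three training passes mentioned above:

    # Rough training-cost estimate for the example fine-tuning job above.
    training_rate = 0.008      # USD per 1,000 training tokens
    training_tokens = 100_000  # tokens in the training file
    epochs = 3                 # passes over the training data

    cost = training_tokens / 1000 * training_rate * epochs
    print(f"Estimated training cost: ${cost:.2f}")  # prints $2.40

Note that this covers only the training step; input and output tokens consumed when the fine-tuned model is later used are billed at the separate usage rates.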

The most advanced AI model provided by OpenAI is not GPT-3.5 Turbo. That distinction belongs to GPT-4, which debuted in March and can comprehend both text and images. Although GPT-4 cannot yet be fine-tuned, OpenAI stated that it intends to make that capability available to customers later this year.