Highlights:

  • ChatGPT and PaLM RLHF share a secret ingredient: Reinforcement Learning with Human Feedback, an approach designed to better align language models with what users want them to achieve.
  • PaLM has 540 billion parameters, where “parameters” refers to the numerical weights of the language model learned from training data.

Recently, Philip Wang, the developer who has reverse-engineered closed-source AI systems like Meta’s Make-A-Video, released PaLM RLHF, a text-generating system that works like ChatGPT. It combines a large language model from Google called PaLM with a technique called Reinforcement Learning with Human Feedback, or RLHF, to produce a system that can do almost everything ChatGPT can, such as drafting emails and suggesting computer code.

Like ChatGPT, PaLM RLHF is at its core a statistical tool for predicting words. When PaLM RLHF is fed a vast amount of training data, such as posts from Reddit, news articles, and e-books, it learns how likely words are to appear based on patterns, such as the semantic context of the surrounding text.
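The idea of statistical word prediction can be illustrated with a toy sketch. This is not how PaLM works internally (it uses a transformer neural network over a vast corpus), but the principle, estimating which word is most likely to come next from observed patterns, is the same. The tiny corpus below is an invented stand-in for real training data.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in the training text.
# Hypothetical miniature "corpus" standing in for Reddit posts, news, e-books.
corpus = "the cat sat on the mat and the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word` in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

A real language model conditions on far more than the single previous word, but the output is likewise a probability distribution over possible next words.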

ChatGPT and PaLM RLHF share a secret ingredient: Reinforcement Learning with Human Feedback, an approach designed to better align language models with what users want them to achieve. RLHF entails training a language model — in PaLM RLHF’s case, PaLM — and fine-tuning it on a dataset of prompts paired with what human volunteers expect the model to say.

The prompts are then fed to the fine-tuned model, which generates several responses for each one; volunteers rank these responses from best to worst. Lastly, the rankings are used to train a “reward model” that scores the original model’s responses in order of preference, surfacing the best answers to a given prompt.
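The reward-model step can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration: real reward models are large neural networks, whereas this one is linear over invented response features, and the feature names and preference data are made up for the example. What it does share with RLHF reward modelling is the core trick of fitting a scoring function to pairwise human rankings via a logistic (Bradley–Terry style) preference loss.

```python
import math

def reward(w, x):
    """Linear reward: dot product of learned weights and response features."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_reward_model(preferences, dim, lr=0.1, epochs=200):
    """Fit weights so that human-preferred responses score higher.
    `preferences` is a list of (winner_features, loser_features) pairs
    derived from volunteers ranking the model's responses."""
    w = [0.0] * dim
    for _ in range(epochs):
        for winner, loser in preferences:
            # P(winner preferred) = sigmoid(reward gap), as in pairwise
            # logistic / Bradley-Terry preference modelling.
            margin = reward(w, winner) - reward(w, loser)
            p = 1.0 / (1.0 + math.exp(-margin))
            # Gradient ascent on the log-likelihood of the human ranking.
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (winner[i] - loser[i])
    return w

# Hypothetical features per response: [length, politeness, factuality]
prefs = [
    ([0.2, 0.9, 0.8], [0.9, 0.1, 0.3]),  # concise, polite, factual wins
    ([0.4, 0.8, 0.9], [0.5, 0.2, 0.2]),
]
w = train_reward_model(prefs, dim=3)
a, b = [0.3, 0.9, 0.9], [0.8, 0.1, 0.2]
assert reward(w, a) > reward(w, b)  # model now prefers the human-preferred style
```

In a full RLHF pipeline, a reward model like this is then used as the training signal for a reinforcement-learning stage (commonly PPO) that further tunes the language model.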

PaLM has 540 billion parameters, where “parameters” refers to the numerical weights of the language model learned from training data. A 2020 study estimated that developing a text-generating model with just 1.5 billion parameters could cost up to USD 1.6 million.

For instance, it took three months and 384 Nvidia A100 GPUs to train the 176 billion-parameter open-source model Bloom — and a single A100 GPU costs thousands of dollars.

In a LinkedIn post regarding PaLM RLHF, Sebastian Raschka, an AI researcher, notes that scaling up the required dev workflows could prove difficult. “Even if someone provides you with 500 GPUs to train this model, you still have to deal with infrastructure and have a software framework that can handle that. It’s obviously possible, but it’s a big effort now (of course, we are developing frameworks to make that simpler, but it’s still not trivial, yet)”, he said.

PaLM RLHF might not replace ChatGPT right now — unless a well-funded venture (or person) bothers to train and make it accessible to the public.

In other news, several other efforts to replicate ChatGPT are developing quickly, including one run by the research group CarperAI. CarperAI, in collaboration with the open AI research organisation EleutherAI and the firms Scale AI and Hugging Face, plans to release the first ChatGPT-like AI model trained with human feedback.