• Founded in 2014 in the UK, Prolific emerged to address the need for affordable and accessible crowdsourced participants for online academic research, as existing tools were costly and inconvenient.
  • With Prolific’s platform, AI model builders can leverage its participant network for reinforcement learning from human feedback (RLHF).

Prolific, a company established to gather verified data from human subjects, recently announced that it has raised USD 32 million in a Series A funding round co-led by Partech and Oxford Science Enterprises. The funding will expand its services and apply human insight to training and improving artificial intelligence models.

Demand for verifiable data from human participants has grown alongside the enormous popularity of generative AI chatbots such as OpenAI LP’s ChatGPT and Google LLC’s Bard, which can understand natural speech and respond conversationally. AI models must also be tested to ensure they don’t go off the rails or behave in harmful or toxic ways.

Prolific was established in the UK in 2014 to provide authentic crowdsourced participants for online academic research, because existing tools were expensive and difficult to use. Another problem was that not everyone was who they claimed to be: far too many participants were automated programs, or bots, posing as people to game the system.

The company now has a network of more than 120,000 active participants in 38 countries who have been thoroughly screened and vetted and who can offer perspectives from a variety of backgrounds for developing and testing AI models. Human participants are paid a minimum of USD 8 per hour for their time.

Phelim Bradley, co-founder and chief executive of Prolific, said, “AI represents one of the biggest leaps forward in technology in recent years, and our unique approach to data sourcing from humans positions us to make these systems more accountable and less biased.”

Reinforcement learning from human feedback, or RLHF, is a process that Prolific’s platform can enable through its network of human participants. With this technique, humans review an AI model’s outputs and their feedback is used to train the model to be less harmful and less error-prone. The procedure is crucial in preventing “hallucinations,” which occur when AI chatbots confidently present untrue information.
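To make the idea concrete, here is a minimal, illustrative sketch of the reward-modeling step at the heart of RLHF, in which pairs of model outputs ranked by human annotators are used to fit a scoring function that prefers the chosen output. This is a toy example with made-up features and data, not Prolific’s actual pipeline or any production system:

```python
import math

def features(text):
    # Hypothetical hand-crafted features standing in for a real model's
    # representation: normalized length and a crude "insult" flag.
    return [len(text) / 100.0, 1.0 if "idiot" in text else 0.0]

# Simulated human-feedback data: (preferred_output, rejected_output) pairs,
# as a crowd of annotators might produce when comparing two model replies.
comparisons = [
    ("Here is a polite, helpful answer.", "You idiot, figure it out."),
    ("A careful step-by-step explanation.", "Short rude reply, idiot."),
]

w = [0.0, 0.0]  # reward-model weights, learned from the comparisons
lr = 0.5

def reward(text):
    # Linear reward model: higher score means "humans would prefer this."
    return sum(wi * xi for wi, xi in zip(w, features(text)))

# Gradient ascent on the pairwise logistic (Bradley-Terry) objective:
# maximize log sigmoid(reward(preferred) - reward(rejected)).
for _ in range(200):
    for good, bad in comparisons:
        diff = reward(good) - reward(bad)
        grad_scale = 1.0 - 1.0 / (1.0 + math.exp(-diff))
        fg, fb = features(good), features(bad)
        for i in range(len(w)):
            w[i] += lr * grad_scale * (fg[i] - fb[i])

# The learned reward model now scores the non-toxic reply higher; in full
# RLHF, this signal would then be used to fine-tune the language model.
print(reward("A friendly answer.") > reward("Get lost, idiot."))  # True
```

In practice the reward model is itself a neural network and the final step fine-tunes the language model against it (for example with PPO), but the core loop is the same: human comparisons in, a preference-aligned scoring signal out.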

AI models’ conversational outputs are more authentic and natural when their builders have access to native speakers of each language. A larger pool of individuals from varied backgrounds and demographics who can annotate and classify the data used to build AI models also lowers the likelihood of bias and harmful responses. For instance, an AI model trained on truly representative population samples is less likely to respond in a racially or culturally insensitive way.

Prolific’s platform can assist with straightforward auditing and transparent data sourcing as governments and businesses work to bring AI into compliance with copyright law. This has grown in significance as the European Parliament drafts laws requiring transparency about the use of copyrighted works as training data for AI models.

Bradley said, “The funding we have secured will fuel our growth in the AI space, especially in the U.S., bolstering our commitment to human-guided AI development during this pivotal moment in the technology’s progression.”