Highlights:

  • The AI Test Kitchen is a Google initiative, unveiled at this year’s I/O developer conference, that will feature rotating demos of unique, cutting-edge AI technologies.
  • Google states that within AI Test Kitchen, its systems will attempt to automatically recognize and filter out unacceptable terms or phrases that may be sexually explicit, hateful, offensive, violent, or unlawful, or that reveal personal information.

Google announced the release of an Android app called AI Test Kitchen that allows people to try new AI-powered technologies from Google Labs before they are put into widespread use. With the recent launch, AI Test Kitchen will begin rolling out to select groups across the United States. Those interested can sign up by filling out an online form.

As announced at Google’s I/O developer conference earlier this year, AI Test Kitchen is a Google initiative that will serve rotating demos of unique, cutting-edge AI technologies. The products aren’t complete, but they give users a taste of Google’s latest developments and let the search giant see how people interact with them in practice.

The first set of demos in AI Test Kitchen explores the capabilities of the latest version of LaMDA (Language Model for Dialogue Applications), a language model built to carry on conversations and answer in a human-like way. For instance, if you give LaMDA the name of a location, it will suggest ways to explore the area, and if you tell it your goal, it will divide that larger task into smaller ones.
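To make that second interaction concrete, here is a minimal Python sketch of the goal-decomposition pattern. LaMDA has no public API, so the lamda_generate function below is an invented stand-in that returns a canned reply, and the prompt format is likewise hypothetical.

```python
# Hypothetical sketch: there is no public LaMDA endpoint, so this
# stand-in returns a canned reply to keep the example runnable.
def lamda_generate(prompt: str) -> str:
    """Stand-in for a call to a conversational language model."""
    return "- Pick a date\n- Invite friends\n- Plan the menu"

def plan_subtasks(goal: str) -> list[str]:
    """Mirror the demo described above: break a goal into smaller steps."""
    prompt = f"I want to {goal}. Break this into a short list of subtasks."
    reply = lamda_generate(prompt)
    # Treat each non-empty line of the reply as one subtask.
    return [line.lstrip("- ").strip() for line in reply.splitlines() if line.strip()]

print(plan_subtasks("host a dinner party"))
# ['Pick a date', 'Invite friends', 'Plan the menu']
```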

To reduce potential problems with systems like LaMDA, such as bias and toxic outputs, Google says it has added “multiple layers” of protection to AI Test Kitchen. Even today’s most advanced chatbots, as Meta’s BlenderBot 3.0 recently demonstrated, can quickly go off the rails when prompted with certain language, veering into conspiracy theories and offensive content.

Google states that within AI Test Kitchen, systems will attempt to automatically recognize and filter out unacceptable terms or phrases that may be sexually explicit, hateful, offensive, violent, or unlawful, or that reveal personal information. However, the company stresses that inappropriate language may still occasionally slip through.
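For a rough sense of what such a filtering layer might look like, here is a generic Python sketch. It is not Google’s implementation; the placeholder regex patterns stand in for what would, in practice, be trained safety classifiers, and the category names simply follow the ones listed above.

```python
import re
from typing import Optional

# Illustrative placeholders only: a production system would use trained
# classifiers, not regular expressions or keyword lists.
BLOCKED_PATTERNS = {
    # Placeholder check for leaked personal information (US-SSN-like strings).
    "personal_information": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Placeholder terms standing in for a real offensive-language model.
    "offensive": re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE),
}

def filter_reply(reply: str) -> Optional[str]:
    """Return the reply if it passes every check, else None (suppressed)."""
    for pattern in BLOCKED_PATTERNS.values():
        if pattern.search(reply):
            return None  # err on the side of dropping the reply entirely
    return reply
```

Whatever the real mechanism, the overall shape is the same: screen every reply before it is displayed, and suppress anything that gets flagged.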

“As AI technologies continue to advance, they have the potential to unlock new experiences that support more natural human-computer interactions,” Tris Warkentin, a Google product manager, and Josh Woodward, the company’s director of product management, wrote in a blog post. “We’re at a point where external feedback is the next, most helpful step to improve LaMDA. When you rate each LaMDA reply as nice, offensive, off-topic or untrue, we’ll use this data — which is not linked to your Google account — to improve and develop our future products.”
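The four rating labels in that quote suggest a very simple feedback record. The sketch below is an invented illustration of how such ratings might be captured without tying them to a user account; only the labels themselves come from the blog post, and none of the names are Google’s.

```python
from dataclasses import dataclass

# The four labels come straight from the blog post quoted above;
# everything else here is an invented example.
LABELS = ("nice", "offensive", "off-topic", "untrue")

@dataclass
class Feedback:
    reply_id: str  # identifies the model reply being rated
    label: str     # one of LABELS; deliberately not tied to any account

def record_feedback(reply_id: str, label: str) -> Feedback:
    """Validate and package a single rating for later aggregation."""
    if label not in LABELS:
        raise ValueError(f"label must be one of {LABELS}")
    return Feedback(reply_id=reply_id, label=label)

record_feedback("reply-42", "off-topic")
```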

AI Test Kitchen is part of a broader trend among tech titans to pilot AI technology before release. No doubt informed by snafus like Microsoft’s toxicity-spewing Tay chatbot, Google, Meta, OpenAI, and others have opted to test AI systems with small groups to ensure they’re functioning as expected and to fine-tune their behavior.

For example, OpenAI released GPT-3, a language-generating system, in closed beta several years ago before making it widely available. GitHub likewise initially limited access to Copilot, the code-generation system built with OpenAI technology, to a small group of developers before making it generally available.

Top tech players are now mindful of the negative press that AI gone wrong can generate. Exposing innovative AI systems to limited external groups, and wrapping them in extensive disclaimers, lets companies showcase the systems’ strengths while containing the damage from harmful outputs. Even before AI Test Kitchen’s debut, LaMDA made news for all the wrong reasons; Silicon Valley is evidently confident that this more cautious approach will change that.