Highlights:

  • The best practices are included in a new technical guide referred to as the Secure AI Framework, or SAIF, by the search engine giant.
  • The SAIF-comprised best practices are organized into six collections. Each collection concentrates on enhancing a distinct aspect of an organization’s AI security operations.

Google LLC released a collection of recommended practices organizations may use to protect their artificial intelligence models against hackers.

The search giant includes the best practices in a new technical guide called the Secure AI Framework, or SAIF.

Royal Hansen and Phil Venables, Google cybersecurity executives, explained, “A framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements, so that when AI models are implemented, they’re secure-by-default. Today marks an important first step.”

According to Google, SAIF can thwart attempts to steal neural network code and training datasets. Additionally, the framework is beneficial for preventing other forms of attacks. According to Google, SAIF makes it more difficult for hackers to control an AI model and lead it to create bad output.

The SAIF-comprised best practices are organized into six collections. Each collection concentrates on enhancing a distinct aspect of an organization’s AI security operations.

The first set of best practices stresses the significance of extending an organization’s cybersecurity controls to its AI systems. According to Google, these controls include the software a business employs to prevent SQL injection attempts. SQL injections are a cyberattack in which hackers capture database data by injecting malicious queries.

Companies implement input sanitization software to prevent malicious queries from reaching the target database to thwart such attacks. Google contends that input sanitization software can also be utilized to filter out malicious AI prompts. Before a prompt is sent to an AI model for processing, the technology can filter out any malicious elements.

The second set of SAIF best practices focuses on detecting threats. According to Google, companies shouldn’t rely solely on their cybersecurity controls to block malicious AI prompts; instead, they should actively monitor for such input. The search engine giant also advises administrators to implement procedures for detecting anomalous AI output.

The third collection of best practices examines how AI can be used to increase the productivity of cybersecurity teams. According to Google’s SAIF handbook, machine learning methods may ease complicated tasks such as deciphering malware code. In addition, the company emphasizes the need for human oversight because AI tools can produce inaccurate results.

SAIF’s remaining three compilations of best practices address a variety of other AI security-related topics.

Google advises cybersecurity teams to conduct regular audits of the AI systems used by employees and to map the associated risks. In addition, the company recommends that cybersecurity professionals standardize their work instruments. Google contends that unified tools for AI breach prevention duties can increase productivity.

Hansen and Venables explained, “As we advance SAIF, we’ll continue to share research and explore methods that help to utilize AI in a secure way. We’re committed to working with governments, industry and academia to share insights and achieve common goals.”