Highlights:

  • The guidelines are an interesting compilation of general recommendations that clarify current best practices for creating AI-based systems and reiterate long-held general security principles, such as controlling the accrued technological debt during a system’s lifecycle.
  • The guidelines advise developers to provide suitable access controls to all AI components, including training and processing data pipelines, as part of the recommended deployment protection methods.

The Guidelines for Secure AI System Development were jointly announced by the National Cyber Security Center of the United Kingdom, several dozen government cyber organizations, and AI vendors.

Within the AI system development lifecycle, the guidelines are divided into four primary areas: secure design, secure development, secure deployment, and secure operation and maintenance. These include threat modeling, supply chain security, safeguarding artificial intelligence (AI) and model infrastructure, and maintaining AI models.

The guidelines are an interesting compilation of general recommendations that clarify current best practices for creating AI-based systems and reiterate long-held general security principles, such as controlling the accrued technical debt during a system’s lifecycle. Alejandro Mayorkas, Homeland Security’s Secretary, said, “We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.” He also mentioned it as a historic deal and considered it worth acknowledging.

Ron Reiter, Co-founder and Chief Technology Officer of Sentra, reported, “AI has opened Pandora’s box with unparalleled power of data mining and natural language understanding, which humanity has never dealt with before. This opens up dozens of new risks and novel attack vectors that the world must deal with. This task is an overwhelming undertaking, and without following best data security practices, organizations risk the myriad of consequences that come with cutting corners in building an AI model.”

President Biden’s October Executive Order, the CISA Roadmap for Artificial Intelligence, and other government initiatives, such as Singapore’s AI Governance Testing Framework and Software toolkit called AI Verify and Europe’s Multilayer Framework for Good Cybersecurity Practices for AI, are all built upon by the guidelines. Other links at the end of the document point to more AI security screeds that have primarily been published in the last year and are worth a look.

The report goes to great lengths to outline the distinct problems that artificial intelligence brings to supply chain security. This entails being aware of the source of every model component, such as the building blocks and training data. It has been recommended in the document that AI system developers should “ensure their libraries have controls that prevent the system loading untrusted models without immediately exposing themselves to arbitrary code execution.”

Another suggestion is to have “appropriate checks and sanitization of data and inputs; this includes when incorporating user feedback or continuous learning data into corporate models, recognizing that training data defines system behavior.” To accurately assess risks and look for unexpected user behaviors, the authors advise developers to adopt a longer and more comprehensive perspective of the processes included in their models. These strategies should be incorporated into the entire risk management processes and tooling. That is a bold claim, considering that many AI threat management technologies are still in their infancy.

The guidelines advise developers to provide suitable access controls to all AI components, including training and processing data pipelines, as part of the recommended deployment protection methods. The authors suggest a regular risk-oriented approach: “Attackers may be able to reconstruct the functionality of a model or the data it was trained on by accessing a model directly, by acquiring model weights, or indirectly by querying the model via an application or service. Attackers may also tamper with models, data or prompts during or after training, rendering the output untrustworthy.”

For instance, the document emphasized a phenomenon called “adversarial machine learning,” which is known as a critical AI security concern and defined as “the strategic exploitation of fundamental vulnerabilities inherent in machine learning components.” The concern is that by manipulating these elements, threat actors can probably interrupt or deceive AI systems, leading to compromised functions and erroneous results.

The deal comes after the European Union’s AI Act, which was unveiled in June and outlawed the use of some AI technologies, including predictive policing and biometric surveillance. Additionally, that law designated certain AI systems as “high risk” that could affect elections, safety rights, and human health.

President of the United States, Joe Biden, signed an executive order in October that attempts to regulate the growth of AI by mandating that creators of the most potent models provide safety results and other vital data to the government.

It is noteworthy that China did not sign the new deal. According to Reuters, China is a “powerhouse of AI development,” and the United States has imposed sanctions on it to restrict its access to the most cutting-edge silicon needed to power AI models.

The EU seems to be ahead of the US regarding AI regulation. In addition to the AI Act, parliamentarians in France, Germany, and Italy recently reached a consensus on AI regulation, indicating that they favor creating foundational AI models along with “mandatory self-regulation through codes of conduct.”

A statement emphasizing the necessity of addressing the possible hazards posed by AI was signed recently by the United States, the United Kingdom, China, and 25 other nations. It described some of the possible problems that advanced AI models can present as well as possible solutions, such as expanding the scope of already-existing AI safety programs.

Kevin Surace, Chairperson of startup vendor Token, said, “The security of AI systems is paramount, and this is an important and critical step to codify this thinking. The guidelines go further to address bias in models, which is an ongoing challenge, as well as methods for consumers to identify AI-generated materials.”

The document, which has 20 pages, provides the most basic sketch of what enterprise technology managers should undertake to guarantee the safe development of generative AI models and techniques. Nevertheless, it’s always a good idea to keep the fundamentals in mind, and the document might be used to create a custom security playbook and instruct individuals unfamiliar with the tools and methods used by AI developers.