Highlights:

  • According to a recent survey by Forrester Consulting, 50% of data decision-makers identified applying governance policies within AI and machine learning as the primary challenge hindering widespread adoption.
  • JFrog’s SageMaker integration is purpose-built to address these concerns by bringing DevSecOps best practices into machine learning model management.

JFrog Ltd., a software supply chain company, has unveiled a new integration with Amazon SageMaker. This integration facilitates efficient collaboration between developers and data scientists in building, training, and deploying machine learning models.

SageMaker is a cloud-based machine-learning platform offering functionalities for creating, training, and deploying machine-learning models in the cloud. Developers leverage SageMaker to deploy these models in cloud environments, embedded systems, and edge devices. The new integration with JFrog’s Artifactory brings these models into a modern software development life cycle, ensuring that each model is immutable, traceable, and secure, and is validated as it progresses toward release maturity.
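The article does not specify how models are stored, but traceability of this kind typically rests on checksum-verified uploads to an Artifactory repository. The sketch below is a minimal illustration using Artifactory’s real deploy-artifact REST API (a single authenticated PUT with `X-Checksum-*` headers); the instance URL, `ml-models` repository name, and path layout are hypothetical assumptions, not part of the announcement.

```python
import hashlib

def artifactory_upload_headers(model_bytes: bytes) -> dict:
    """Checksum headers Artifactory uses to verify an uploaded
    artifact's integrity on deploy (real X-Checksum-* headers)."""
    return {
        "X-Checksum-Sha256": hashlib.sha256(model_bytes).hexdigest(),
        "X-Checksum-Md5": hashlib.md5(model_bytes).hexdigest(),
    }

# Assumed instance URL and repo layout: a repo named "ml-models",
# versioned by path, e.g. ml-models/churn-model/1.2.0/model.tar.gz
BASE = "https://mycompany.jfrog.io/artifactory"

def model_url(name: str, version: str, filename: str) -> str:
    """Build the deploy path for one versioned model artifact."""
    return f"{BASE}/ml-models/{name}/{version}/{filename}"

# The upload itself would then be a single authenticated PUT, e.g.:
#   requests.put(model_url("churn-model", "1.2.0", "model.tar.gz"),
#                data=model_bytes,
#                headers=artifactory_upload_headers(model_bytes),
#                auth=("user", "<access-token>"))
```

Because the server recomputes and compares the checksums, a model artifact that is altered in transit or replaced after upload is detectable, which is what makes each stored version effectively immutable and traceable.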

The integration targets concerns specific to adopting artificial intelligence and machine learning. According to a recent survey by Forrester Consulting, 50% of data decision-makers identified applying governance policies within AI and machine learning as the primary challenge hindering widespread adoption. Additionally, 45% cited data and model security as a significant gating factor in adopting these technologies.

JFrog’s SageMaker integration is purpose-built to address these concerns by bringing DevSecOps best practices into machine learning model management. It empowers developers and data scientists to scale, secure, and advance their machine-learning projects while ensuring adherence to security protocols, regulatory standards, and organizational compliance.

Kelly Hartman, JFrog’s Senior Vice President of Global Channels and Partnerships, said, “As more companies begin managing big data in the cloud, DevOps team leaders are asking how they can scale data science and ML capabilities to accelerate software delivery without introducing risk and complexity. Working with AWS, we’ve been able to design a workflow that indoctrinates DevSecOps best practices to ML model development in the cloud, delivering flexibility, speed, security, and peace of mind.”

The integration’s key features bring machine learning seamlessly into the standard software development and production lifecycles. This provides an elevated level of protection against unauthorized deletion or modification of models, aligning machine learning practices with established software development standards, and enables a more streamlined approach to developing, training, securing, and deploying models.

Additionally, the integration offers capabilities to detect and block the use of malicious models within the organization. It also provides tools for scanning model licenses, ensuring compliance with company policies and regulatory requirements.

The integration supports storing internally developed and augmented models to enhance transparency and control, with robust access controls and a comprehensive versioning history for meticulous tracking and management. It also simplifies the bundling and distribution of models as part of regular software releases, bringing machine learning development closer to traditional software deployment processes.

In conjunction with the SageMaker integration, JFrog has introduced new versioning capabilities for its ML Model Management solution. These capabilities aid in incorporating model development seamlessly into an organization’s secure and compliant Software Development Life Cycle (SDLC). The newly introduced versioning capabilities enhance transparency regarding each model version. This empowers developers, DevOps teams, and data scientists to verify that the correct version is utilized in the appropriate context, ensuring both accuracy and security in the process.
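The announcement does not describe how a team would verify which model version is in use, but with models stored in Artifactory, one common pattern is querying the repository with Artifactory Query Language (AQL) for the most recently published artifact of a model. The snippet below builds such a query using real AQL syntax; the `ml-models` repository name and per-model path layout are assumptions carried over for illustration.

```python
def latest_version_aql(repo: str, model: str) -> str:
    """Build an AQL query (Artifactory Query Language) that returns
    the single most recently created artifact under a model's path,
    e.g. for POSTing to Artifactory's /api/search/aql endpoint."""
    return (
        'items.find({"repo":"%s","path":{"$match":"%s/*"}})'
        '.sort({"$desc":["created"]}).limit(1)' % (repo, model)
    )

# Example query text for an assumed "ml-models" repo:
query = latest_version_aql("ml-models", "churn-model")
```

Resolving the deployed version through a query like this, rather than a hard-coded path, is one way developers, DevOps teams, and data scientists can confirm the correct model version is used in the appropriate context.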