- Vertex AI, Google's managed ML platform, enables developers to accelerate the deployment and maintenance of AI models.
- Google has also partnered with TigerGraph, a graph database maker, whose ML Workbench offering integrates with Vertex AI.
At the recent Google Cloud Applied ML Summit, the IT giant announced new product features and technology partnerships that help users create, deploy, manage, and maintain machine learning (ML) models faster and more effectively.
Vertex AI, the company's AI development environment, was launched last year at Google I/O 2021 and forms the foundation for these updates. The managed ML platform lets developers accelerate the deployment and maintenance of AI models.
Vertex AI brings Google Cloud's AI services together under a unified API and user interface. Google claims that customers such as Seagate, Cruise, and Ford have used it to build, train, and deploy ML models in a single environment, moving models from experimentation to production.
Vertex AI competes with other managed AI platforms such as Microsoft Azure and Amazon Web Services. The broader industry practice is known as MLOps (Machine Learning Operations), a set of best practices for businesses to run AI. Deloitte predicts this market will grow almost twelvefold from 2019 to be worth USD 4 billion by 2025.
According to Gartner, with managed services like Vertex AI joining the race, the cloud market was forecast to grow 18.4% in 2021 and to account for 14.2% of total global IT spending.
More on its capabilities and tools
Google has recently added new capabilities to Vertex AI, which are as follows:
AI Training Reduction Server – a technology that Google says optimizes the bandwidth and latency of multi-node distributed training on Nvidia GPUs. In the context of ML, "distributed training" means spreading the work of training a model across multiple machines, GPUs, CPUs, or custom chips, which reduces the time and resources needed to complete the training. It supports both PyTorch and TensorFlow.
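To make the data-parallel pattern a reduction server accelerates concrete, the sketch below averages per-worker gradients in plain Python. This is a minimal stand-in for the all-reduce step that frameworks like PyTorch and TensorFlow perform across GPUs; the worker gradients and learning rate here are made-up illustrative numbers, not Vertex AI API calls.

```python
# Minimal sketch of the gradient all-reduce at the heart of data-parallel
# distributed training. Each "worker" computes gradients on its own shard
# of the batch; an all-reduce averages them so every worker applies the
# same update. A reduction server's job is to make this exchange fast.

def all_reduce_mean(worker_grads):
    """Average gradients element-wise across workers."""
    n_workers = len(worker_grads)
    n_params = len(worker_grads[0])
    return [
        sum(g[i] for g in worker_grads) / n_workers
        for i in range(n_params)
    ]

def sgd_step(params, grads, lr=0.1):
    """Apply one gradient-descent update with the averaged gradients."""
    return [p - lr * g for p, g in zip(params, grads)]

# Three workers, each holding gradients for the same two parameters.
worker_grads = [[0.2, -0.4], [0.4, -0.2], [0.6, -0.6]]
avg = all_reduce_mean(worker_grads)   # approximately [0.4, -0.4]
params = sgd_step([1.0, 1.0], avg)    # approximately [0.96, 1.04]
```

Because every worker receives the same averaged gradients, the model replicas stay in sync after each step; the reduction server's optimization is purely in how fast that average is computed and distributed.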
Tabular Workflows – a glass-box, managed AutoML (Automated Machine Learning) pipeline that brings greater customizability to the model creation process. Users gain flexibility: they can train on datasets larger than a terabyte without sacrificing accuracy, and they can pick and choose which parts of the process they want AutoML to handle versus which they want to engineer themselves.
Optimized TensorFlow runtime – an optimized version of TensorFlow that serves models with lower cost and latency than open-source prebuilt TensorFlow serving containers, letting users take advantage of proprietary technologies and model-optimization techniques used internally at Google.
Collaboration between Labelbox and Google – thanks to a partnership between the two companies, it is now easier to access Labelbox's data-labeling services for text, audio, images, and video from the Vertex AI dashboard. Labels are essential for most AI models to learn and make predictions, and they let data scientists turn unstructured data into effective ML models on Vertex AI.
Google's updates in the graph data space
Google also introduced a data partnership with Neo4j, maker of a graph database management system. The integration lets data scientists explore, analyze, and engineer connected-data features in Neo4j and then deploy models within Vertex AI's unified platform.
With Neo4j Graph Data Science and Vertex AI, data scientists can use graph-based inputs to extract more predictive power from their models and get to production faster. This is most useful in use cases such as logistics, recommendation engines, fraud and anomaly detection, customer 360, and more.
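To make "graph-based inputs" concrete, a common pattern is to compute a structural feature from the graph and attach it to each record as an extra model input. The toy sketch below derives node degree from an edge list in plain Python; in practice Neo4j Graph Data Science would compute richer features such as PageRank or node embeddings at scale. The accounts, transactions, and feature here are illustrative assumptions, not a Neo4j API.

```python
# Toy illustration of "graph-based inputs": derive a structural feature
# (node degree) from an edge list and attach it to per-node records that
# a downstream model (e.g. fraud detection) could consume as an extra
# column. Real pipelines would use Neo4j Graph Data Science for features
# like PageRank or embeddings; this only shows the shape of the idea.
from collections import Counter

def degree_features(edges):
    """Count how many edges touch each node (undirected degree)."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return dict(deg)

def enrich(records, edges):
    """Add a 'degree' column to each node record."""
    deg = degree_features(edges)
    return [{**r, "degree": deg.get(r["node"], 0)} for r in records]

# Accounts (nodes) and transactions between them (edges).
records = [{"node": "a", "amount": 120}, {"node": "b", "amount": 75},
           {"node": "c", "amount": 9000}]
edges = [("a", "b"), ("a", "c"), ("a", "d")]
rows = enrich(records, edges)  # node "a" gets degree 3; "b" and "c" get 1
```

A model trained on the enriched rows can then exploit connectivity (how many counterparties an account transacts with) alongside the original tabular columns, which is the predictive-power gain the partnership is aimed at.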
Google has also partnered with TigerGraph, a graph database maker, whose ML Workbench offering integrates with Vertex AI.
Google claims that Vertex AI needs 80% fewer lines of code to train a model than competing platforms. Data scientists and ML engineers at all levels of expertise can use it to implement MLOps and manage ML projects across the whole development lifecycle.
When Google Vertex AI Product Manager Surbhi Jain was asked about the Prediction Service, a new integrated component of Vertex AI, she said, “When users have a trained machine learning model and they are ready to start serving requests from it, that is where it comes into use. The idea is to make it absolutely seamless to enable safety and scalability. We want to make it cost-effective to deploy an ML model in production, irrespective of where the model was trained.”
When asked about built-in security and compliance features, she said, “You can deploy your models in your own secure perimeter. Our PCSA integration control tool has access to your endpoints and your data is protected at all times. Lastly, with private endpoints, Prediction Service introduces less than two milliseconds of overhead latency.”