Artificial intelligence and machine learning aren’t new concepts in the world of cloud computing, but Nvidia and Amazon are aiming to take them to the next level. Nvidia has announced that mainstream servers designed to run the company’s data science acceleration software are now available; additionally, Amazon will be implementing the technology into its Amazon Web Services (AWS) stack for customers looking to take advantage of accelerated machine learning in the cloud. The new servers feature Nvidia’s T4 GPUs built on the company’s Turing GPU architecture; this raw hardware power, combined with Nvidia’s CUDA-X A.I. libraries, will enable businesses and organizations to more efficiently handle A.I.-based tasks, machine learning, data analytics, and virtual desktops. Designed for the data center, the T4 GPUs draw only 70 watts of power during operation. Companies offering the new servers include Cisco, Dell EMC, Fujitsu, HP Enterprise, Inspur, Lenovo, and Sugon.

For businesses interested in deploying Nvidia T4 GPUs on AWS, Amazon announced that the instances will be available through the Elastic Compute Cloud (EC2). Through the AWS Marketplace, customers will be able to pair G4 instances with Nvidia’s GPU acceleration software. Additionally, the instances will be supported by Amazon’s Elastic Container Service for Kubernetes, allowing for easy scalability depending on the task at hand. According to Matt Garman, vice president of Compute Services at AWS, the two companies “have worked together for a long time to help customers run compute-intensive A.I. workloads in the cloud and create incredible new A.I. solutions.” The introduction of Nvidia T4 GPUs into the company’s offerings is said to make “it even easier and more cost-effective for customers to accelerate their machine learning inference and graphics-intensive applications.”
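As a rough illustration of how such a Kubernetes-based deployment might look, a pod on a GPU-enabled cluster can request an Nvidia GPU through the standard `nvidia.com/gpu` resource. This is only a sketch: the pod name, container image, and tag below are hypothetical examples, not details from the announcement, and the cluster is assumed to already have Nvidia’s Kubernetes device plugin installed.

```yaml
# Hypothetical sketch: a Kubernetes pod requesting one Nvidia GPU on a
# GPU-enabled cluster (for example, a cluster backed by T4-based G4 instances).
# Assumes the Nvidia device plugin is running on the cluster's nodes.
apiVersion: v1
kind: Pod
metadata:
  name: t4-inference-demo        # hypothetical pod name
spec:
  containers:
  - name: inference
    image: nvcr.io/nvidia/tensorrt:latest   # example NGC image; tag is an assumption
    resources:
      limits:
        nvidia.com/gpu: 1        # schedule onto a node with one free GPU
```

Because the GPU is expressed as an ordinary resource limit, the Kubernetes scheduler can place such pods onto GPU nodes automatically, which is what makes this model scale with demand.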

Every new T4 server introduced by Cisco, Dell EMC, Fujitsu, HP Enterprise, Inspur, Lenovo, and Sugon will also be Nvidia NGC-Ready validated; this designation, part of a program designed by Nvidia, is awarded to servers that demonstrate they can excel across a full range of accelerated workloads. Recently, Intel teamed up with Facebook to develop CPUs for machine learning tasks; now, Nvidia’s solution ensures that the GPU half of the equation isn’t left behind.