Highlight:

  • The new platform will help enterprises accomplish major ML tasks 10 times faster on multiple edge devices.
  • The solution is designed to allow enterprises to utilize small, scalable, and robust machine learning tools to make edge devices capable of executing AI inference tasks.

OmniML, a Machine Learning (ML) model development start-up, announced its launch with USD 10 million in seed funding to provide enterprises with an Artificial Intelligence (AI) deployment platform for edge devices. The GGV Capital-led funding round will allow OmniML to expand its ML team and enhance its software development.

OmniML’s solution allows users to develop, improve, and deploy robust ML models to hardware devices at the network edge. It is designed to let enterprises utilize small, scalable, and effective ML models so that edge devices can perform AI inference tasks.

According to the company, this approach will help accomplish major ML tasks 10 times faster on a variety of edge devices. As a result, enterprises and their technical decision-makers gain a viable solution for deploying AI applications, such as computer vision, at the network’s edge.

ML Models Are Pushing AI to the Edge

Researchers anticipate that enterprises will invest more than USD 434 billion in AI in 2022 to gain better insights. There is growing demand for ML solutions that can power AI at the network’s edge without overloading the hardware, yet many of today’s AI solutions are not lightweight enough to run on edge devices.

“Today’s AI is too big, as modern deep learning requires a massive amount of computational resources, carbon footprint, and engineering efforts. This makes AI on edge devices extremely difficult because of the limited hardware resources, the power budget, and deployment challenges,” said Di Wu, co-founder and CEO of OmniML.

“The fundamental cause of the problem is the mismatch between AI models and hardware, and OmniML is solving it from the root by adapting the algorithms for edge hardware. This is done by improving the efficiency of a neural network using a combination of model compression, neural architecture rebalances, and new design primitives,” Wu added.

This approach builds on the research of Song Han, an assistant professor of electrical engineering and computer science at MIT. Han’s “deep compression” technique minimizes the size of a neural network without compromising accuracy, so ML models can be tailored to a variety of chips and devices at the network edge.
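For illustration, the sketch below shows the two core ideas commonly associated with deep compression, magnitude pruning followed by quantization, using standard PyTorch utilities. The toy network, the 60% pruning ratio, and the layer choices are assumptions made for this example and do not represent OmniML’s actual pipeline.

```python
# Minimal sketch of compression for edge deployment: prune low-magnitude
# weights, then quantize what remains. Purely illustrative; not OmniML's code.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small example network standing in for an edge vision model (assumed).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

# Step 1: magnitude pruning -- zero out the 60% smallest weights in each layer.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")  # make the sparsity permanent

# Step 2: post-training dynamic quantization of the linear layers to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The compressed model runs the same forward pass with a smaller footprint.
dummy = torch.randn(1, 3, 32, 32)
print(quantized(dummy).shape)  # torch.Size([1, 10])
```

In practice, pruning is typically followed by fine-tuning to recover any lost accuracy; the sketch omits that step for brevity.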

The Need for Scalable Edge AI

Researchers valued the global edge AI software market at approximately USD 590 million in 2020 and expect it to reach USD 1,835 million by 2026, as 5G networks mature and the number of devices connected to modern networks spikes.

Enterprises are demanding decentralized technologies, and many vendors are therefore designing solutions to make AI inference viable at the network’s edge.

OctoML is one such vendor, with a platform that lets users deploy and automatically optimize ML models; it raised USD 85 million in 2021 in a Series C funding round. Edge Impulse is another competitor, with a low-code development platform built primarily to help users create, test, and deploy ML models to edge devices; it raised USD 34 million in a Series B funding round.

Despite these competitors’ success, co-founder Wu claims OmniML sets itself apart by designing efficient algorithms from the ground up rather than merely compressing existing models.

Experts’ View

“All existing solutions focus on downstream optimizations, like quantization, pruning, compiler optimizations, etc. Yet none of them is trying to solve the fundamental problems: existing AI models are not designed for constrained edge hardware. By focusing on the fundamental algorithms, our solution provides maximum scalability. It truly works for any model, hardware, and task,” said Di Wu, co-founder and CEO of OmniML.