Qualcomm brings its AI expertise to servers with the Cloud AI 100 platform
Along with its latest batch of new mid-range smartphone mobile platforms, Qualcomm has made an even bigger announcement at its AI Day in San Francisco. The mobile chip giant is making another play for servers, after abandoning its Centriq range back in 2018. This time, the company is leveraging its technologies in the field of AI to get its foot in the cloud computing door. The first chip is dubbed the Qualcomm Cloud AI 100 platform.
The Cloud AI 100 platform isn’t a repackaged mobile chip; it’s a ground-up 7nm design built for AI inference tasks rather than training. That means the chip will crunch the numbers passing through neural networks rather than being used to train them. This appears to put Qualcomm in direct competition with Nvidia’s Tesla T4 series and Google’s Edge TPU inference chips designed for servers and cloud computing. Just like its competitors, Qualcomm has realized that the most efficient AI processing isn’t done on CPUs, GPUs, and FPGAs, but requires a dedicated AI accelerator.
In terms of performance, Qualcomm estimates a greater than 50x peak AI performance boost over the capabilities of its Snapdragon 855. The Snapdragon 855 offers around 7 TOPS of performance, which suggests the Cloud AI 100 is in the range of 350 TOPS. That will certainly give Nvidia’s T4 a run for its money. Qualcomm believes it has the edge on performance per watt as well, boasting a 10x improvement compared to the industry’s most advanced AI inference solutions deployed today. When you’re crunching big numbers, power efficiency can save companies huge sums in electricity bills. In addition, Qualcomm wants to leverage its expertise in signal processing and 5G, so its cloud platform can work at the edge in future very low latency networks.
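For the curious, the 350 TOPS estimate above is just Qualcomm's ">50x" claim applied to the Snapdragon 855's roughly 7 TOPS figure — a back-of-envelope calculation, not a published spec. A quick sketch of that arithmetic:

```python
# Back-of-envelope estimate based on Qualcomm's public claims.
# Assumed inputs: ~7 TOPS for the Snapdragon 855, and the ">50x"
# peak AI performance uplift Qualcomm quotes for the Cloud AI 100.
snapdragon_855_tops = 7   # approximate peak AI performance (TOPS)
claimed_uplift = 50       # Qualcomm's ">50x" claim

cloud_ai_100_tops = snapdragon_855_tops * claimed_uplift
print(f"Implied Cloud AI 100 peak: ~{cloud_ai_100_tops} TOPS")  # ~350 TOPS
```

Since the uplift is quoted as "greater than 50x," 350 TOPS is a floor on the implied figure rather than a precise number.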
When asked, Qualcomm wouldn’t comment on whether the architecture used in the Cloud AI 100 is proprietary or licensed from elsewhere. I wonder whether this is the first appearance of Arm’s Project Trillium machine learning architecture, which is also designed specifically for inference workloads on a low power budget. But that’s speculation for now; Qualcomm says it will share more information about its AI architecture in the future.
Finally, to support developers, Qualcomm’s Cloud AI 100 works with most, if not all, of the industry-leading software stacks. There’s support for the Caffe, Keras, MXNet, TensorFlow, PaddlePaddle, and Cognitive Toolkit frameworks, along with the Glow, ONNX, and XLA runtimes.
The Qualcomm Cloud AI 100 is expected to begin sampling to customers in