SAN FRANCISCO, April 10 (Xinhua) -- Qualcomm on Tuesday debuted its first artificial intelligence (AI) accelerator for cloud computing, designed to meet the growing demand for AI inference processing in future data centers.
The cloud accelerator, the Cloud AI 100, which was unveiled at Qualcomm's 2019 AI Day in downtown San Francisco, is built on a seven-nanometer process node and is intended to support distributed intelligence from the cloud to the client edge and all points in between.
"Our all-new Qualcomm Cloud AI 100 accelerator will significantly raise the bar for the AI inference processing relative to any combination of CPUs, GPUs, and/or FPGAs used in today's data centers," said Keith Kressin, senior vice president of product management at Qualcomm.
"There is definitely a paradigm shift, the architectural shift going on in the infrastructure," Kressin said during a presentation on Qualcomm's investment in cloud AI accelerator technology.
Several years ago, when AI inferencing and training were just beginning, workloads ran on traditional CPUs; over time, some migrated to FPGAs and GPUs, which offer much higher levels of parallel processing, he said.
"But now the market has gotten to the point that power is so important and the amount of processing is so high ... that the industry can build purpose-built AI silicon," he added.
According to Qualcomm, the Cloud AI 100 will deliver more than 350 tera operations per second (TOPS) at peak performance and more than 10 times the performance per watt of the industry's most advanced AI inference solutions deployed today.
At peak performance, the new chipset is about 3 to 50 times faster than the Snapdragon 855 and Snapdragon 820 processors, the company said.
Qualcomm said the Cloud AI 100 is expected to begin sampling to customers in the second half of 2019, with production possible in 2020.