When it comes to training deep learning models, GPUs are the go-to option, thanks largely to their high memory bandwidth, massively parallel processors, and large pools of VRAM. Major cloud players such as AWS, Microsoft Azure, and Google Cloud Platform all want their share of AI-powered Inference as a Service, which they provide to consumers.
In terms of selling GPUs to these cloud platforms, no company comes close to Nvidia, which held a massive 97.4% share (as of May 2019) versus roughly 1% for AMD across the top four cloud providers (the three above plus Alibaba Cloud). Nvidia supplies its Tesla lineup of GPUs for cloud instances; Google Cloud Platform, for example, offers Tesla K80, P4, T4, P100, and V100 GPUs.
As far as FPGAs are concerned, AWS and Alibaba Cloud offer FPGA instances for deployment. Xilinx, already a big player in the FPGA market, looks to challenge Intel in the cloud FPGA space with its newly announced Versal lineup. Xilinx is also moving toward dedicated logic and configurable on-chip interconnects for AI tasks.
Intel, meanwhile, has announced its Agilex FPGAs for cloud AI instances, though Xilinx already holds the major share of the cloud FPGA market. On the GPU side of things, it will be a long time before any company comes close to Nvidia's cloud market share, with AMD a distant second.