Cloudflare, Akamai Join the AI Effort
The world’s leading content delivery networks (CDNs) are adding features to their edge services in a major bid for a slice of the generative AI business.
Cloudflare (NYSE: NET), for instance, announced this week that it has outfitted its global network edge locations (in over 300 cities worldwide) with coveted NVIDIA (Nasdaq: NVDA) GPUs and Ethernet switches, along with NVIDIA’s software for processing generative AI inference: NVIDIA Triton Inference Server and NVIDIA TensorRT-LLM.
The focus on inference rather than large language model (LLM) training isn’t accidental. While training creates the models on which generative AI applications are built, inference involves the additional contextual learning and prompting that tailor those models to an enterprise’s specific requirements. Because an organization’s own data is drawn on during inference, Cloudflare says there are security benefits to keeping that work local.
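Concretely, serving a model through Triton Inference Server means a client sends a prompt to a nearby endpoint and gets generated text back. The following is a minimal sketch using Triton’s Python client; the endpoint URL, model name, and tensor names are illustrative assumptions, not details Cloudflare has published.

```python
# Hypothetical sketch: querying a text-generation model hosted on a nearby
# Triton Inference Server. Endpoint, model, and tensor names are assumptions.
import numpy as np
import tritonclient.http as httpclient

# Connect to a (hypothetical) edge inference endpoint running Triton.
client = httpclient.InferenceServerClient(url="edge-inference.example.com:8000")

# Package the prompt as the BYTES input tensor the (hypothetical) model expects.
prompt = np.array([b"Summarize this quarter's support tickets."], dtype=np.object_)
infer_input = httpclient.InferInput("text_input", [1], "BYTES")
infer_input.set_data_from_numpy(prompt)

# Run inference and read back the generated text.
result = client.infer(
    model_name="example_llm",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("text_output")],
)
print(result.as_numpy("text_output"))
```

The point of the sketch is the round trip: the prompt (which may contain private enterprise data) travels only as far as the nearest edge location, rather than to a distant centralized GPU cluster.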
“AI inference on a network is going to be the sweet spot for many businesses: private data stays close to wherever users physically are, while still being extremely cost-effective to run because it’s nearby,” said Matthew Prince, CEO and co-founder of Cloudflare, in this week’s press release. And earlier, on Cloudflare’s Q2 2023 earnings call on August 3, he said: “[We] believe that we are uniquely positioned to win the inference market, which we believe will be substantially larger than the training market.”
Growing and Strengthening the Edge
To access the rest of this article, you need a Futuriom CLOUD TRACKER PRO subscription.