World Premiere! 2024 Networking Infra for AI Report


By: Mary Jander


Artificial intelligence (AI) has taken the technology world by storm. And generative AI (GenAI), the technology that generates text, images, sounds, and more from natural language input, has started a revolution in the way data is consumed, processed, and offered in applications.

But producing enterprise AI applications that will advance all kinds of industries requires a new kind of information technology (IT) infrastructure, one that is faster, more scalable, and more reliable than longstanding centralized client-server networks.

Arguably the most important piece of the new IT infrastructure for AI is networking. Whether connecting chips within supercomputers, interconnecting servers in AI clusters, or linking those clusters to the network edge, existing technologies must be improved and new approaches created to sustain the performance demanded by AI applications. What’s called for is networking that is decentralized, disaggregated, and accelerated.

Emerging Networking for AI Architectures

In a Futuriom first, we explore the world of networking for AI in a new report, “Networking Infrastructure for Artificial Intelligence.” The report is intended to serve as a basic outline of the developments in networking infrastructure required for enterprise AI processing at all levels. While not exhaustive, it offers a high-level overview of the new networking technologies and approaches that are shaping today’s emerging AI infrastructure.

One thing to note: We distinguish in this report between networking for AI and AI for networking. The former pertains to the elements of a network geared to AI processing. The latter, which is not the subject of this report, refers to using AI to control and augment networking itself, sometimes referred to as AIOps; it will be covered in a later report.

Download the report now!

AI Infra Trends: What Is in This Report?

In this report, we cover the new architectures of networking for AI, along with the incumbent vendors, startups, and challengers building them. The report profiles a wide range of companies in the AI infrastructure ecosystem, including content delivery networks (CDNs), cloud providers, and network equipment vendors.

The backdrop to the industry is a set of fast-moving architectures designed to support the needs of AI clusters, which connect large numbers of GPUs. This new market has become a focal point for many technologists. Currently, InfiniBand, a technology NVIDIA acquired with its purchase of Mellanox in 2020, holds an estimated 80% to 90% share of AI cluster interconnects, thanks to NVIDIA’s dominant market position and InfiniBand’s performance advantages over Ethernet.

Still, a growing roster of proponents is campaigning to make Ethernet a viable alternative to InfiniBand. The logic is that InfiniBand is more expensive than Ethernet and relies on NVIDIA hardware and software, locking customers into the market leader. InfiniBand is also less widely understood and supported than Ethernet.

This report includes:

  • Why Do We Need Networking Infra for AI?
  • Networking for AI: Defining the Context
  • Basic Building Blocks of Networking for AI
  • Market Trends in Networking for AI
  • Networking for AI at the Edge
  • Companies to Watch profiles (22)

Companies included in this report: Akamai, AMD, Arista, Arrcus, Astera Labs, Aviatrix, Aviz, AWS, Broadcom, Ciena, Cisco, Cloudflare, CoreWeave, DriveNets, Enfabrica, F5, Google Cloud, Graphiant, Hedgehog, Infinera, Juniper Networks, Lambda Labs, Meta, Microsoft, NVIDIA, Prosimo, Vapor IO, ZEDEDA, Zentera.

Download the report now!