Cloud Tracker Pro

UALink Offers Fresh Options for AI Networking


By: Mary Jander


A major push toward alternatives to NVIDIA networking for AI emerged on April 8, when the Ultra Accelerator Link (UALink) Consortium published its first specification. The consortium, led by an impressive list of members—including AMD, Apple, Astera Labs, AWS, Broadcom, Cisco, Enfabrica, Google, HPE, Intel, Juniper Networks, Meta, Microsoft, and Synopsys, to name just a few—is offering the first open-standard alternative to NVIDIA’s ubiquitous NVLink for connecting GPUs and CPUs in AI clusters.

The UALink 200G 1.0 Specification defines a low-latency interconnect for accelerators in back-end networks. It supports a bidirectional data rate of 200 Gb/s per lane, in 1-, 2-, or 4-lane configurations, connecting up to 1,024 accelerators in a pod. Hence the maximum bidirectional bandwidth per accelerator is 800 Gb/s.

How does this compare with NVLink? That depends on the NVLink generation, but the fifth-generation, Blackwell-compatible NVLink supports up to 18 links at 100 GB/s apiece, for a total bandwidth of 1.8 TB/s per GPU. (Note that NVIDIA quotes bandwidth in bytes per second, while the UALink figures above are in bits per second.) Adding an NVLink Switch extends “all-to-all” GPU-to-GPU data rates of up to 1.8 TB/s within and between racks. Deployed with a GB300 NVL72 system, the NVIDIA NVLink Switch enables 130 TB/s of aggregate GPU bandwidth for up to 576 GPUs.
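The unit mismatch makes the headline numbers easy to misread, so a quick back-of-the-envelope comparison helps. The sketch below converts both vendors' figures to gigabits per second; the lane and link counts come from the spec figures cited above, while the variable names are ours.

```python
# Back-of-the-envelope comparison of headline per-accelerator bandwidth.
# Assumption: UALink quotes bits per second, NVIDIA quotes bytes per second.

UALINK_LANE_GBPS = 200        # UALink 200G: 200 Gb/s per lane
UALINK_MAX_LANES = 4          # spec allows 1-, 2-, or 4-lane links

ualink_max_gbps = UALINK_LANE_GBPS * UALINK_MAX_LANES  # 800 Gb/s

NVLINK5_LINKS = 18            # fifth-gen NVLink links per Blackwell GPU
NVLINK5_LINK_GBYTES = 100     # 100 GB/s per link

nvlink_total_gbps = NVLINK5_LINKS * NVLINK5_LINK_GBYTES * 8  # bytes -> bits

print(f"UALink max per accelerator: {ualink_max_gbps} Gb/s")
print(f"NVLink 5 per GPU: {nvlink_total_gbps} Gb/s "
      f"({nvlink_total_gbps / 1000} Tb/s)")
print(f"Ratio: {nvlink_total_gbps / ualink_max_gbps:.0f}x")
```

On these numbers, NVLink 5 offers roughly 18 times the per-device bandwidth of a maxed-out 4-lane UALink connection, which is why UALink's pitch rests on openness, scale, and power rather than raw speed.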

UALink Claims to Fame

While NVLink compares favorably against UALink on raw speed, proponents of UALink cite several reasons it could eventually threaten NVIDIA’s dominance in “scale-up” networking. For one thing, the consortium claims high performance and reliability with low power consumption.

To access the rest of this article, you need a Futuriom CLOUD TRACKER PRO subscription — see below.

