Arrcus Lands $30 Million with Help from NVIDIA


By: Mary Jander


In a move that highlights strong momentum in the AI networking space, hyperscale multicloud networking (MCN) software vendor Arrcus has announced $30 million in new funding from a group of investors that now includes NVIDIA.

The round, which brings Arrcus’ total raised to over $150 million, also included investment from Prosperity7 Ventures, Lightspeed, Hitachi Ventures, Liberty Global, Clear Ventures, and General Catalyst. The money will be used to grow the Arrcus Connected Edge (ACE) platform, management said, including fueling ongoing collaboration with NVIDIA.

“Modern networks are evolving to address customer needs in the era of AI,” stated Kevin Deierling, SVP of networking at NVIDIA, in the press release. “We’re collaborating with Arrcus to provide high-performance, secure and cost-effective data center networking for a variety of accelerated computing applications.”

Notably, the arrangement with NVIDIA doesn't preclude the other vendor partnerships Arrcus has cultivated since its founding in 2016, which include relationships with CoreSite, SoftBank, Broadcom, and Intel.

“ACEing” AI Workloads

As one of the fastest-growing startups in the MCN space, Arrcus is well positioned to tap growing demand from enterprises looking to connect increasingly distributed IT infrastructure for AI workloads. Its Arrcus Connected Edge for AI (ACE-AI) is a flexible multicloud networking solution designed to let large enterprises, colocation providers, and telcos extend their on-prem networks into multicloud environments and to deliver multicloud connectivity to enterprises as a managed service.

Key to ACE-AI is its ability to provide a high-speed IP CLOS leaf/spine fabric supporting RDMA over Converged Ethernet (RoCE) v2. The platform’s distributed, cloud-native network operating system, ArcOS, also works with NVIDIA BlueField DPUs. ACE-AI supports AI/ML workloads in hardware-agnostic fashion, running on white-box switches and routers with 400-Gb/s and 800-Gb/s data rates, including systems based on Broadcom chips.
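For readers unfamiliar with leaf/spine design, the sketch below is a generic, back-of-the-envelope illustration of how a two-tier CLOS fabric’s bandwidth and oversubscription are typically sized. The 400-Gb/s and 800-Gb/s figures mirror the data rates mentioned above, but the leaf, spine, and port counts are hypothetical examples and are not specific to ACE-AI or any particular Arrcus-supported platform.

```python
# Back-of-the-envelope sizing for a generic two-tier CLOS (leaf/spine) fabric.
# All port and switch counts are hypothetical illustrations, not ACE-AI specifics.

def clos_fabric_summary(leaf_count: int,
                        spine_count: int,
                        server_ports_per_leaf: int,
                        server_port_gbps: int,
                        uplink_port_gbps: int) -> dict:
    """In a two-tier CLOS, each leaf typically connects one uplink to every
    spine (a full mesh), so uplinks per leaf equals the spine count."""
    downlink_bw = server_ports_per_leaf * server_port_gbps  # toward servers/GPUs
    uplink_bw = spine_count * uplink_port_gbps               # toward the spines
    return {
        "leaf_downlink_gbps": downlink_bw,
        "leaf_uplink_gbps": uplink_bw,
        "oversubscription_ratio": round(downlink_bw / uplink_bw, 2),
        "total_server_ports": leaf_count * server_ports_per_leaf,
    }

if __name__ == "__main__":
    # Example: 16 leaves, 4 spines, 32 x 400G server-facing ports per leaf,
    # and 800G leaf-to-spine uplinks.
    print(clos_fabric_summary(leaf_count=16, spine_count=4,
                              server_ports_per_leaf=32,
                              server_port_gbps=400,
                              uplink_port_gbps=800))
```

AI training fabrics are commonly built non-oversubscribed (a 1:1 ratio, i.e., adding spines or uplink capacity until uplink bandwidth matches downlink bandwidth) so that RoCE traffic between accelerators isn’t bottlenecked at the leaf uplinks.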

Notably, the combination of Ethernet, Broadcom chips, and NVIDIA DPUs isn’t unique to Arrcus but has become a common architecture for AI networking. ArcOS is differentiated in part by its disaggregated nature, which allows the NOS to help build distributed AI clusters on the fly, optimized for AI workloads wherever they reside: private datacenters, public clouds, or the edge.

Networking for AI: The Race Is On

Fresh funding for Arrcus points to ongoing momentum in networking for AI, where an expanding field of companies large and small is jockeying for position. In addition to incumbents such as Arista, Cisco, and Juniper, newer firms such as Aviatrix, Alkira, and Prosimo provide MCN platforms that compete with Arrcus. Other players, such as Hedgehog and Aviz, are combining SONiC with BlueField SmartNICs and Ethernet for a different approach to supporting AI at the network edge.

Still, the significance of NVIDIA’s support for Arrcus is clear. It sends a message from the market leader in AI networking about the quality and ongoing potential of Arrcus’ approach. In the race to lead the networking-for-AI space, that’s a strong endorsement.