Networking companies are racing to build chips that can handle AI and machine learning workloads. Cisco's Silicon One G200 and G202 ASICs, for instance, challenge offerings from Broadcom, NVIDIA, and Marvell. Demand for AI technology is growing rapidly, with global spending on AI predicted to reach $154 billion in 2023 and at least $300 billion by 2026. Additionally, by 2027 nearly one in five Ethernet switch ports purchased by data centers will be for AI/ML and accelerated computing, according to a report from 650 Group.
Cisco's Silicon One G200 and G202 ASICs deliver 51.2Tbps and can run AI/ML networks with 40% fewer switches. They enable an AI/ML cluster of 32K 400G GPUs on a two-layer network with 50% fewer optics and 33% fewer networking layers. The chips unify routing and switching, providing a converged architecture for routed, switched, and AI/ML networks, and their advanced load balancing and ultra-low latency make them well suited to AI/ML workloads. Enhanced Ethernet capabilities further improve performance, cutting job completion time by a factor of 1.57, and Cisco says the G200 and G202 also incorporate load balancing, better fault isolation, and a fully shared buffer to support optimal performance for AI/ML workloads.
According to Chopra, networking vendors are introducing chips with greater bandwidth and higher radix, letting them handle AI tasks by connecting to a larger number of devices. They are also enabling more seamless communication between GPUs, eliminating bottlenecks and improving the performance of AI/ML workloads.
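The link between radix and cluster size can be sketched with standard Clos-fabric arithmetic: a switch's total bandwidth divided by the per-port speed gives its radix, and in a non-blocking two-layer (folded Clos) network the endpoint count grows with the square of that radix. The sketch below is illustrative only; the function name and the assumption that each leaf splits its ports evenly between endpoints and spine uplinks are mine, not Cisco's published design.

```python
def clos2_endpoints(switch_tbps: float, port_gbps: int) -> int:
    """Maximum endpoint ports of a non-blocking two-layer folded Clos.

    Assumption (not from Cisco): each leaf switch dedicates half its
    ports to endpoints and half to spine uplinks, so total capacity
    is radix**2 / 2 endpoint ports.
    """
    radix = int(switch_tbps * 1000) // port_gbps  # ports per switch
    return radix * radix // 2

# A 51.2 Tbps switch carved into 100G ports has radix 512:
print(clos2_endpoints(51.2, 100))  # 131072 endpoint ports at 100G

# The same switch carved into 400G ports has radix 128:
print(clos2_endpoints(51.2, 400))  # 8192 endpoint ports at 400G
```

Because capacity scales with the square of the radix, doubling a chip's radix quadruples what a two-layer fabric can reach, which is why higher-radix silicon lets a given cluster be built with fewer switches and fewer network layers.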