SAN FRANCISCO: Cisco Systems has entered the AI chip wars with the Silicon One G300, a 102.4-terabit-per-second switching processor built to handle the data-movement demands of massive AI clusters. Unveiled this week at Cisco Live EMEA in Amsterdam, the chip takes direct aim at Nvidia and Broadcom as competition for the $600 billion AI infrastructure market intensifies.
Manufactured on TSMC’s 3-nanometer process, the G300 introduces what Cisco calls Intelligent Collective Networking, a combination of shared packet buffering, path-based load balancing, and real-time telemetry designed to prevent the traffic jams that stall AI training runs. The company says the chip completes AI computing jobs 28% faster by automatically rerouting data around congested links within microseconds, and improves overall network utilization by 33%.
The G300 powers two new switching platforms, the Nexus N9000 and Cisco 8000, both available in fully liquid-cooled configurations that Cisco claims improve energy efficiency by nearly 70%. The chip also supports up to 512 ports and 1.6-terabit Ethernet connections, giving hyperscalers and enterprise operators room to grow without replacing hardware as workloads evolve.
Cisco plans to put the G300 on sale in the second half of 2026. The move signals a strategic pivot toward the AI networking layer just as Nvidia embeds its own networking silicon deeper into its rack-scale systems and Broadcom expands its Tomahawk series. For enterprises and cloud providers building out AI infrastructure, a credible third competitor in switching silicon could reshape both pricing and procurement strategies.
