
Cisco Debuts 102.4 Tbps AI Switch Silicon


Cisco has introduced a 102.4 Tbps switching chip, the Silicon One G300, to meet growing demand for AI-scale networks in data centers. The company said the chip will anchor new Cisco N9000 and Cisco 8000 systems featuring liquid cooling and high-density optics, alongside updated software aimed at simplifying operations for enterprise AI deployments on-premises and in the cloud.

The launch arrives as data centers race to link thousands of GPUs with lower latency and higher throughput. Cisco positioned the new silicon and systems as a way to increase efficiency and protect large infrastructure investments.

Why This Matters Now

Training and inference clusters have surged in size and power draw. Networking has become a bottleneck when scaling across racks and sites. Vendors are responding with 800G optics, improved congestion control, and denser switch silicon. A 102.4 Tbps-class device represents the current top tier of Ethernet switching bandwidth, doubling the capacity of the prior 51.2 Tbps generation.
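To put the doubling in concrete terms, here is a minimal sketch of the usual port math for merchant switch silicon (the 400G/800G port configurations below are common industry conventions, not Cisco-published G300 specs):

```python
# Port math for a non-blocking switch ASIC: total capacity divided
# by per-port speed gives the number of full-rate front-panel ports.
def port_count(capacity_tbps: float, port_speed_gbps: int) -> int:
    return round(capacity_tbps * 1000 / port_speed_gbps)

for capacity_tbps in (51.2, 102.4):
    n800 = port_count(capacity_tbps, 800)
    n400 = port_count(capacity_tbps, 400)
    print(f"{capacity_tbps} Tbps -> {n800} x 800G or {n400} x 400G ports")
# 51.2 Tbps  -> 64 x 800G or 128 x 400G ports
# 102.4 Tbps -> 128 x 800G or 256 x 400G ports
```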

Cisco aims to tie hardware and software into one stack for AI fabrics. The company also updated its Nexus One software to reduce operational steps and help teams run distributed AI workloads with consistent policies across environments.

What Cisco Is Shipping

The Silicon One G300 targets large-scale AI clusters that demand high radix and non-blocking networks. It will power new N9000 and 8000 platforms, which include liquid cooling options and high-density optical modules to cut energy use per bit and shrink rack footprints.

  • Switching capacity: 102.4 Tbps
  • Systems: New Cisco N9000 and Cisco 8000
  • Cooling: Liquid cooling support
  • Optics: High-density modules for AI fabrics
  • Operations: Enhanced Nexus One for on-prem and cloud

Jeetu Patel, Cisco’s president and chief product officer, framed the move as part of a full-stack strategy spanning chips, systems, and software.

“We are spearheading performance, manageability, and security in AI networking by innovating across the full stack – from silicon to systems and software,” Patel said.

Competition and Market Context

Cisco’s push arrives amid growing pressure from both Ethernet and proprietary interconnects. Broadcom and others have announced 51.2 Tbps- and 102.4 Tbps-class merchant silicon. Nvidia is promoting Spectrum-based Ethernet for AI, while also advancing NVLink for GPU-to-GPU connectivity. Many buyers are weighing Ethernet’s openness and cost against custom fabrics that promise tight coupling with accelerators.

Analysts expect rapid uptake of 800G optics in 2024 and 2025 as clusters expand. Liquid cooling adoption is also rising due to higher rack densities and stricter energy targets. By integrating liquid cooling options and dense optics, Cisco is aiming for lower total cost of ownership in large-scale builds.

Operational Hurdles Remain

Performance alone will not solve AI deployment pain points. Operators face challenges with cabling, optical supply, telemetry, and scheduling across mixed vendors. Any single-vendor stack can raise questions about lock-in and interoperability. Buyers will look for open standards, clear visibility, and flexible routing and congestion control.

Cisco said the Nexus One enhancements are designed to reduce complexity for enterprises that run hybrid or multicloud AI pipelines. Details on automation, fabric validation, and failure handling will be key for operators planning multi-thousand-GPU clusters.

What It Means for Data Centers

The shift to 102.4 Tbps switches could cut the number of tiers in a spine-leaf network and reduce oversubscription. That can lower latency and improve training throughput. Combined with dense optics, operators may pack more bandwidth per rack while managing power with liquid cooling.
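A rough sense of why radix matters, as a sketch built on the standard two-tier folded-Clos formula (the radix values are assumed 800G port counts for 51.2 and 102.4 Tbps chips, not published system specs):

```python
# In a non-blocking two-tier leaf-spine fabric, each leaf splits its
# R ports evenly between hosts and spine uplinks, so R leaves and
# R/2 spines together serve R * (R/2) = R^2 / 2 endpoints at full rate.
def two_tier_endpoints(radix: int) -> int:
    return radix * radix // 2

for radix in (64, 128):  # ~51.2 Tbps vs ~102.4 Tbps at 800G per port
    print(f"radix {radix}: up to {two_tier_endpoints(radix)} endpoints in two tiers")
# radix 64:  up to 2048 endpoints
# radix 128: up to 8192 endpoints
```

Doubling the radix quadruples what two tiers can reach, which is why higher-capacity silicon can eliminate an entire switching layer for mid-sized clusters.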


For enterprises, the main question is whether these gains translate to faster time to train and lower cost per model. Success will depend on software maturity, traffic engineering, and how well networks handle bursty, collective-heavy AI workloads.
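For a feel of what “collective-heavy” means for the fabric, here is a minimal sketch using the standard ring all-reduce cost model (the 10 GB gradient buffer and GPU count are illustrative assumptions):

```python
# Ring all-reduce moves roughly 2*(N-1)/N of the buffer over each
# GPU's network link, so per-step time is that volume / link speed.
def allreduce_seconds(buffer_gb: float, n_gpus: int, link_gbps: float) -> float:
    volume_gbit = buffer_gb * 8 * 2 * (n_gpus - 1) / n_gpus
    return volume_gbit / link_gbps

for link_gbps in (400, 800):  # per-GPU link speed, assumed
    t = allreduce_seconds(buffer_gb=10, n_gpus=1024, link_gbps=link_gbps)
    print(f"{link_gbps}G links: ~{t:.2f} s of network time per all-reduce")
# 400G: ~0.40 s; 800G: ~0.20 s per step, before congestion effects
```

Because these transfers fire in synchronized bursts across every GPU at once, sustained link speed and congestion control matter as much as headline switch capacity.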

What to Watch Next

Independent benchmarks comparing end-to-end training times across 51.2 Tbps and 102.4 Tbps fabrics will matter. So will evidence that liquid cooling and high-density optics meet reliability targets at scale. Interoperability with third-party optics and tools will remain under scrutiny.

Procurement teams will evaluate total cost, including optics, power, cooling, and operational staffing. If Cisco can prove lower cost per bit and smoother operations, it could gain share in AI buildouts against merchant-silicon rivals and accelerator-led fabrics.

The announcement signals another step in the rapid build of AI-ready networks. The next phase will test whether higher bandwidth and tighter integration deliver measurable gains in model performance and budget efficiency for large clusters.

Deanna Ritchie
Managing Editor at DevX

Deanna Ritchie is a managing editor at DevX. She has a degree in English Literature, has written 2,000+ articles on getting out of debt and mastering your finances, and has edited more than 60,000 articles over her career. She has a passion for helping writers inspire others through their words. Deanna has also been an editor at Entrepreneur Magazine and ReadWrite.
