
Spear Founder Maps AI Networking Chain


In a recent television appearance, Spear founder Ivana Delevska outlined how money and technology flow through the AI networking value chain, offering investors a clearer view of who builds what and where bottlenecks may form. Her discussion came on Fox Business’ “Making Money,” as Wall Street tracks heavy spending on data centers and advanced chips.

Delevska, an investor focused on industrial and technology names, spoke as AI training clusters grow and as companies race to connect thousands of processors inside new facilities. The timing matters. The market is trying to size the gains in networking, optics, and power systems that sit behind headline chips.


Why Networking Sits at the Center of AI

AI models need fast links to move data between chips. Graphics processors handle the math, but network links carry the results across servers. If the network lags, the entire training job slows.

That is pushing demand for high-speed interconnects, advanced switches, and the optical parts that carry signals over fiber. It is also changing how data centers are designed, with more power, more cooling, and denser racks.
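A rough back-of-envelope sketch shows why the network can bound a training step. All numbers below are illustrative assumptions for this article, not figures from Delevska's segment:

```python
# Back-of-envelope: how link speed can bound a training step.
# All values are illustrative assumptions, not figures from the broadcast.

params = 70e9            # model parameters (e.g., a 70B-parameter model)
bytes_per_param = 2      # FP16 gradients
link_gbps = 400          # assumed per-GPU network bandwidth, gigabits/second

# A ring all-reduce moves roughly 2x the gradient volume per step.
payload_bytes = 2 * params * bytes_per_param
seconds = payload_bytes * 8 / (link_gbps * 1e9)

print(f"Gradient sync payload: {payload_bytes / 1e9:.0f} GB")
print(f"Time on a {link_gbps} Gb/s link: {seconds:.2f} s per step")
```

Under these assumptions, each training step spends several seconds just synchronizing gradients, which is why faster links, better optics, and smarter traffic scheduling translate directly into cluster throughput.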

Delevska’s framework highlights a chain that runs from silicon to systems:

  • Compute chips and accelerators that run training and inference.
  • High-bandwidth memory placed near the chips.
  • Networking chips for switching and routing inside the data center.
  • Optical modules and cables that carry signals over longer distances.
  • Power and cooling gear sized for dense AI racks.
  • Software stacks that schedule jobs and manage traffic.

Winners and Pressure Points

Much investor attention falls on compute chips. But networking has become a key limit on cluster scale. That helps makers of switches, network interface cards, and optical transceivers.

Two standards shape this fight. InfiniBand has long served high-performance computing. Ethernet is now catching up with new speeds and features. A wider move to Ethernet could shift share toward vendors with scale in that market. If InfiniBand maintains its latency edge, it could hold its position in the largest training runs. Buyers may mix both, depending on workload and price.

Optics also sit in the spotlight. Higher speeds mean more demand for 800G and 1.6T modules. Supply must match fast upgrade cycles. Lead times and yields can swing results for suppliers.

Power is another risk. AI racks draw more electricity than standard servers. That raises costs for new builds and can slow deployments where grid capacity is tight. Cooling upgrades, such as liquid systems, add to spend and complexity.

What Spending Signals Say

Cloud giants have guided to higher capital spending tied to AI. That lifts orders across the value chain, not just for chips. Networking orders often lag compute by a quarter as clusters move from plan to build.

Enterprise demand is more uneven. Some firms move quickly with pilot projects and small inference clusters. Others wait for costs to fall and tools to mature. This split favors suppliers that sell to both hyperscale and enterprise buyers.

How Investors Can Frame the Cycle

Delevska’s breakdown suggests a simple lens for tracking the cycle:

  • Compute ramps first, pulling in memory and boards.
  • Networking and optics follow as clusters scale.
  • Power and cooling cap the pace of deployment.
  • Software upgrades extend useful life and improve utilization.

Watch for mix shifts. Training clusters need top-speed links. Inference can run on lower-cost gear once models stabilize. That mix shapes margins across suppliers.

Standard shifts also matter. A move to faster Ethernet could broaden the field. If specialized links hold share, fewer vendors may benefit in the near term.

Outlook: Scale, Standards, and Supply

Three issues will define the next phase. First, scale. Model sizes and data sets are growing, which requires larger clusters with tighter networks. Second, standards. The pull between Ethernet and InfiniBand will steer contract wins. Third, supply. Optics, substrates, and high-bandwidth memory must keep up with orders.

Delevska’s focus on the AI networking value chain taps into a practical question for investors: where do dollars land after the headline chip order? For now, the answer points to high-speed links, optical modules, and the power gear that holds it together.

The takeaway is direct. Networking is no longer a footnote in AI. It is a core driver of performance and spend. The next checkpoints will be cloud capex updates, delivery timelines for 800G and 1.6T optics, and signs of easing power constraints at new data centers. Those signals will tell whether the current build wave extends or pauses as costs and supply shift.

About Our Editorial Process

At DevX, we’re dedicated to tech entrepreneurship. Our team closely follows industry shifts, new products, AI breakthroughs, technology trends, and funding announcements. Articles undergo thorough editing to ensure accuracy and clarity, reflecting DevX’s style and supporting entrepreneurs in the tech sphere.

See our full editorial policy.