Intel unveils groundbreaking Lunar Lake CPU


At Computex 2024, Intel revealed list prices for its Gaudi 2 and Gaudi 3 AI accelerators. A Gaudi 2 kit with eight accelerators and a universal baseboard (UBB) costs $65,000, while the more advanced Gaudi 3 kit, also with eight accelerators and a UBB, costs $125,000. The pricing positions Intel’s Gaudi architecture as a cheaper option than Nvidia’s premium-priced hardware.

Intel CEO Pat Gelsinger said, “AI is driving one of the most consequential eras of innovation the industry has ever seen. Intel is one of the only companies in the world innovating across the full spectrum of the AI market opportunity – from semiconductor manufacturing to PC, network, edge, and data center systems.”

Nvidia is the market leader and charges a premium price for its H100 GPU-based hardware. However, Intel wants to compete in terms of both cost and performance.

Intel says its Gaudi 3 accelerators deliver big performance improvements for training and inference tasks on leading GenAI models. An 8,192-accelerator cluster of Gaudi 3 is said to be up to 40% faster at training compared to an equivalent Nvidia H100 GPU cluster. Gaudi 3 accelerators are also up to 15% faster at training for a 64-accelerator cluster on specific models.

Gaudi 3 is also claimed to be up to 2x faster at inference on popular LLMs. Major system vendors such as Supermicro already support Intel’s Gaudi 3 accelerators, and Intel plans to broaden its market presence with six more partners: Asus, Foxconn, Gigabyte, Inventec, Quanta, and Wistron.

As the AI market grows, Intel is trying to compete with Nvidia by offering hardware with a better performance-to-cost ratio. Even so, it will be challenging to win over buyers who are loyal to Nvidia’s products and willing to pay more for what they see as superior technology.

Intel’s release of the Xeon 6700E “Sierra Forest” series is another significant milestone for the company. It marks a shift toward better performance and efficiency in the Xeon lineup and redefines what to expect from Xeon processors: the Xeon 6 family, and the 6700E series in particular, introduces efficiency-focused E-cores.


The E-core design removes Hyper-Threading, in line with a growing preference among hyperscalers against SMT (Simultaneous Multi-Threading). Because the cores remain x86, legacy workloads stay compatible without major changes, a significant advantage for businesses.
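Operators who care about SMT state can verify it per host before consolidating workloads. A minimal sketch, assuming a Linux kernel recent enough to expose the sysfs SMT control file (the function returns None elsewhere):

```python
from pathlib import Path

def smt_active(path="/sys/devices/system/cpu/smt/active"):
    """Return True if SMT is on, False if off, and None when the
    kernel does not expose the control (e.g. non-Linux hosts)."""
    p = Path(path)
    if not p.exists():
        return None
    return p.read_text().strip() == "1"
```

On an SMT-less part like Sierra Forest this would report False, since every logical CPU maps to exactly one physical E-core.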

The 6700E series offers very high core counts. The top-end 144-core Intel Xeon 6780E pairs those cores with 108MB of L3 cache, and SKUs in the series carry TDPs of 330W or 250W depending on the model, balancing raw throughput with efficiency.

This efficiency could translate into substantial energy savings: estimates suggest up to 2.5 times power savings at the server level compared with older parts like the Intel Xeon Gold 5218. The Sierra Forest lineup also removes the need for a Platform Controller Hub (PCH), which streamlines the architecture considerably. Going PCH-less brings Intel in line with AMD’s EPYC series, which has shipped without a PCH since 2017, and the change reduces latency and simplifies system design.

The 6700E series also ships with embedded SST (Speed Select Technology) profiles, which let operators tailor frequency and power trade-offs to specific workloads. Tests of the 6700E series on QCT and Supermicro platforms showed strong performance and, notably, consistent core-to-core latency: the 144-core Xeon 6780E exhibited latency patterns that match the architecture’s design, in which clusters of cores share 4MB L2 caches. These results confirm the efficiency and design improvements in the Sierra Forest architecture.

In dual-socket setups, the 6700E series remained competitive in performance, though latency was, as expected, higher than in single-socket configurations, reflecting the trade-off between performance and density.

Intel has priced the 6700E series to match its advanced capabilities and higher core counts. For example, the 64-core Xeon 6710E is notable for those consolidating older, less efficient servers. It offers a cost-effective solution with big performance gains.

This supports Intel’s plan to capture the market segment moving from older 16-core and smaller servers.
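As a back-of-envelope illustration of that consolidation pitch, here is the core-count arithmetic with hypothetical wall-power figures (the wattages below are illustrative assumptions, not Intel’s numbers):

```python
# Hypothetical consolidation: replace aging 16-core servers with one
# 144-core Xeon 6780E-class node. All wattages are illustrative assumptions.
old_cores, new_cores = 16, 144
old_server_watts = 400      # assumed wall draw of an aging 16-core box
new_server_watts = 1000     # assumed wall draw of a dense 144-core node

servers_replaced = new_cores // old_cores             # 9 old boxes -> 1 new one
old_fleet_watts = servers_replaced * old_server_watts
power_ratio = old_fleet_watts / new_server_watts

print(f"{servers_replaced} servers consolidated, {power_ratio:.1f}x power saved")
```

The exact ratio depends entirely on the assumed wattages; the point is that core-count consolidation multiplies whatever per-server efficiency gain exists.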


Intel’s shift in Xeon architecture

The Intel Xeon 6700E series is a big step for Intel’s Xeon product line.

It emphasizes efficiency and compatibility while delivering strong performance. As businesses look to modernize their data centers, the Sierra Forest processors provide a compelling option to enhance computing capabilities without needing big architecture changes. This release sets a new standard for Intel Xeon processors.

It promises both immediate and long-term benefits for a wide range of data center uses.

Intel has also officially unveiled its upcoming Lunar Lake SoC, the next generation of Core Ultra mobile processors. The announcement came during Intel’s Tech Tour event in Taipei on June 4-7, just before the start of Computex 2024.

Lunar Lake represents a significant evolution in Intel’s mobile SoC lineup. It focuses on enhancing power efficiency and optimizing performance. The new SoC dynamically allocates tasks to efficient E-cores or performance P-cores based on workload demands.

It uses advanced scheduling mechanisms to ensure optimal power usage and performance. Intel’s Thread Director, working with Windows 11, plays a key role here: it guides the OS scheduler to make real-time adjustments that balance efficiency with computational power.
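Thread Director’s placement is automatic, but an application can still express a placement preference by pinning itself to specific logical CPUs. A minimal Linux-only sketch; note that which CPU IDs map to E-cores versus P-cores is machine-specific and must be discovered separately (the IDs in the comment are hypothetical):

```python
import os

def pin_to_cpus(cpu_ids):
    """Restrict the calling process to the given logical CPU IDs
    (Linux-only; on a hybrid part these might be the E-core IDs)."""
    os.sched_setaffinity(0, set(cpu_ids))   # 0 = the current process
    return os.sched_getaffinity(0)

# Example: keep a background task on (hypothetical) E-core IDs 4-7.
# pin_to_cpus(range(4, 8))
```

Manual pinning trades the scheduler’s dynamic judgment for predictability, so it is usually reserved for latency-sensitive or strictly background work.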

The new P-core design, codenamed Lion Cove, delivers a generational improvement in IPC (instructions per cycle). The E-cores, known as Skymont, replace the previous Low Power Island Crestmont E-cores from Meteor Lake.

These E-cores combine efficiency gains from the TSMC N3B node with improved IPC, providing significant efficiency and performance improvements. The architecture integrates a new Neural Processing Unit (NPU) called NPU 4.

It is capable of delivering up to 48 TOPS (Tera Operations Per Second) of AI performance, which aligns with the requirements for Microsoft’s Copilot+ AI PCs and positions Lunar Lake prominently in the AI-capable SoC market.

The new GPU, Arc Xe2-LPG, boosts performance with second-generation Xe core graphics, adding to the chip’s overall computational power and AI capabilities. Lunar Lake’s tiles are not manufactured in Intel’s own foundry.


Instead, Intel has outsourced this to TSMC, utilizing TSMC’s N3B and N6 processes. This decision shows Intel’s strategy of employing the best foundry available, whether internal or external, to enhance its product offerings. The compute tile, comprising P and E-cores, is built on TSMC’s N3B node.

The SoC tile uses the N6 node. The Lunar Lake platform also includes up to 32 GB of LPDDR5X memory on the chip package. It is arranged as a pair of 64-bit memory chips.
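Two 64-bit devices give a 128-bit bus, and the theoretical peak bandwidth follows directly from the transfer rate. A quick sketch, treating the LPDDR5X-8533 speed grade as an assumption for illustration:

```python
# Theoretical peak bandwidth of a 2 x 64-bit on-package LPDDR5X interface.
# The 8533 MT/s speed grade is an assumption for illustration.
bus_width_bits = 2 * 64                    # pair of 64-bit memory chips
transfers_per_sec = 8533e6                 # LPDDR5X-8533 (assumed)
peak_bytes_per_sec = transfers_per_sec * bus_width_bits / 8

print(f"{peak_bytes_per_sec / 1e9:.1f} GB/s theoretical peak")  # ~136.5 GB/s
```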

This on-package memory configuration results in a 128-bit memory interface optimized for power and performance. However, it limits users’ flexibility to upgrade DRAM at will. Lunar Lake introduces several AI and power management improvements:

The NPU 4 and the integrated Arc Xe2-LPG graphics contribute to an impressive 120 TOPS of total platform performance.
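The 120-TOPS platform figure is the sum of the three compute engines. The NPU number comes from the announcement; the GPU and CPU splits below are assumptions consistent with that total:

```python
# Rough split of Lunar Lake's quoted 120 platform TOPS.
npu_tops = 48    # NPU 4, per Intel's announcement
gpu_tops = 67    # Arc Xe2-LPG contribution (assumed)
cpu_tops = 5     # CPU contribution (assumed)

platform_tops = npu_tops + gpu_tops + cpu_tops
print(platform_tops)  # 120
```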

Intel has significantly enhanced Lunar Lake’s power management. The Thread Director uses a heterogeneous scheduling policy. It optimally assigns tasks to specific cores to improve efficiency and balance power usage with performance needs.

Integration with power management systems and Power Management Controllers (PMC) allows the SoC to make context-aware adjustments. This ensures minimal power wastage and optimal performance. The new architecture keeps power-sensitive applications within the efficiency core cluster.

This reduces power consumption by up to 35% in some use cases, such as video conferencing. Set to launch in Q3 2024, Intel’s Lunar Lake is poised to redefine the landscape of mobile SoCs. It focuses on power efficiency, AI performance, and collaborative manufacturing.

By updating its Thread Director and power management systems and utilizing advanced packaging technologies, Intel aims to deliver a compelling solution for the holiday 2024 market. Intel continues to innovate, adapting its strategies and technology to meet the evolving demands of modern computing and staying competitive with rivals like Apple’s M-series and Qualcomm’s Snapdragon X chips.

