Intel Xeon Scalable
Server Platforms
Performance made flexible, next-gen
Intel server CPUs
Intel Xeon Scalable Platforms
Since its initial release in 2017, Intel has continued to develop new versions of its Xeon Scalable server CPUs. Currently there are two main versions of Xeon Scalable available: the high-performance 4th gen, based on the Sapphire Rapids architecture, and the value-orientated 3rd gen, based on the Ice Lake architecture. Click on the tabs below to find out more.
Intel 4th Gen Xeon Scalable Platform
Optimised for specific workloads, Intel 4th gen Xeon Scalable processors powered by the Sapphire Rapids architecture deliver reliable IT infrastructure. Performance and efficiency are the key metrics for success in IT. Servers powered by Xeon Scalable CPUs deliver rapid results, helping provide more and smarter insights for decision making, driving better business outcomes. Intel 4th gen Xeon Scalable CPUs – accelerated performance.
Intel Sapphire Rapids Architecture
Cores
Up to 60 cores per socket
I/O
80 lanes of PCIe 5.0 for GPUs, SmartNICs, DPUs & NVMe SSDs
Memory
Support for up to 8TB of 8-channel DDR5 and Intel Optane Memory
Intel 4th gen Xeon Scalable Line Up
Intel Xeon Platinum 8400
High-end dual-socket servers for the most demanding workflows
Cores: up to 56
Level 3 Cache: 60-105MB
Max memory speed: 4,800MHz
Intel Optane Memory support
TDP: 300-350W
Architecture: Sapphire Rapids
Intel Xeon Gold 6400
Mid-range dual-socket servers for everyday workflows
Cores: up to 32
Level 3 Cache: 22.5-60MB
Max memory speed: 4,800MHz
Intel Optane Memory support
TDP: 195-270W
Architecture: Sapphire Rapids
Intel Xeon Gold 5400
Mid-range dual-socket servers for everyday workflows
Cores: up to 28
Level 3 Cache: 22.5-52.5MB
Max memory speed: 4,400MHz
Intel Optane Memory support
TDP: 185-205W
Architecture: Sapphire Rapids
Intel Xeon Silver 4400
Entry-level dual-socket servers for everyday workflows
Cores: up to 20
Level 3 Cache: 30-37.5MB
Max memory speed: 4,000MHz
TDP: 150-165W
Architecture: Sapphire Rapids
Intel Xeon Gold 6400U, Gold 5400U, Bronze 3400U
Designed for single-socket servers for everyday workloads
Cores: up to 32
Level 3 Cache: 22.5-60MB
Max memory speed: 4,800MHz
TDP: 125-250W
Architecture: Sapphire Rapids
Intel Xeon Platinum 8400H, Gold 6400H
High-end eight-socket and four-socket servers for the most demanding workflows
Cores: up to 60
Level 3 Cache: 22.5-112.5MB
Max memory speed: 4,800MHz
Intel Optane Memory support
TDP: 165-350W
Architecture: Sapphire Rapids
Intel Xeon Platinum 8400 Max, Gold 6400 Max
High-end dual-socket servers for HPC workloads
Cores: up to 56
Level 3 Cache: 60-112.5MB
Max memory speed: 4,800MHz
64GB HBM2e memory
TDP: 330-350W
Architecture: Sapphire Rapids
Intel 4th gen Xeon Scalable Features and Benefits

The dawn of the accelerated era
Intel 4th gen Xeon CPUs mark a new chapter in the Xeon story, for the first time adding fixed-function accelerators which speed up specific tasks, freeing up the main CPU cores for other workloads. These four accelerators, Intel IAA, DSA, DLB and QAT, are explored in more detail below. In addition to these new dedicated accelerators, the Sapphire Rapids architecture adds support for eight channels of DDR5 memory, up to 8TB, and the I/O has been upgraded to 80 lanes of PCIe 5.0, providing double the bandwidth of PCIe 4.0. Finally, switching to the new Intel 7 manufacturing process enables the core count to be increased from 40 to 60 cores.

AI acceleration
Intel AMX, Advanced Matrix Extensions, accelerates deep learning and AI model training. AMX works by speeding up matrix-multiply tasks by up to 8x versus older VNNI instructions, boosting performance in tasks such as recommendation engines, natural language processing, image recognition, object detection and media analytics.
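Most applications use AMX indirectly through frameworks built on libraries such as oneDNN, but it is straightforward to confirm that a CPU exposes the instructions. Below is a minimal sketch, assuming a GCC or Clang toolchain on x86-64; the feature bits queried (CPUID leaf 7, sub-leaf 0, EDX bits 22, 24 and 25) correspond to AMX-BF16, AMX-TILE and AMX-INT8.

// Minimal sketch: detect AMX support via CPUID leaf 7, sub-leaf 0.
// EDX bit 22 = AMX-BF16, bit 24 = AMX-TILE, bit 25 = AMX-INT8.
// Builds with g++/clang++ on x86-64 only.
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        std::printf("CPUID leaf 7 not supported\n");
        return 1;
    }
    const bool amx_bf16 = edx & (1u << 22);
    const bool amx_tile = edx & (1u << 24);
    const bool amx_int8 = edx & (1u << 25);
    std::printf("AMX-TILE: %d  AMX-BF16: %d  AMX-INT8: %d\n",
                amx_tile, amx_bf16, amx_int8);
    return 0;
}

Note that on Linux a process must also request permission to use the AMX tile state (via arch_prctl) before executing AMX instructions; frameworks with AMX support normally handle this step themselves.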

Database acceleration
Intel IAA, In-Memory Analytics Accelerator, is a purpose-built accelerator that offloads from the CPU cores the compression and decompression work of very high-throughput in-memory databases. For example, in RocksDB, IAA delivers 2x the operations per second versus traditional CPU cores. Not all CPUs include Intel IAA, so please contact us to discuss your requirements.
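Whether IAA is usable on a given server depends on the CPU SKU and BIOS settings. On Linux, IAA devices are managed by the same idxd driver as DSA and are enumerated in sysfs; the short sketch below simply lists them, assuming the /sys/bus/dsa/devices path and the "iax" device-name prefix used by that driver.

// Minimal sketch: list IAA devices enumerated by the Linux idxd driver.
// The /sys/bus/dsa/devices path and the "iax" prefix are idxd driver
// conventions and are assumptions here; they may differ on other kernels.
#include <filesystem>
#include <iostream>
#include <string>

int main() {
    const std::filesystem::path sysfs{"/sys/bus/dsa/devices"};
    if (!std::filesystem::exists(sysfs)) {
        std::cout << "idxd driver not loaded or no DSA/IAA devices present\n";
        return 0;
    }
    for (const auto& entry : std::filesystem::directory_iterator(sysfs)) {
        const std::string name = entry.path().filename().string();
        if (name.rfind("iax", 0) == 0)      // IAA engines
            std::cout << "IAA device: " << name << '\n';
    }
    return 0;
}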

Accelerated data transfers
Intel DSA, Data Streaming Accelerator, is a purpose-built accelerator that offloads from the CPU cores common data movement tasks between core components such as the CPU and system memory, as well as storage and network devices. For storage applications, DSA can improve I/O performance by up to 1.7x and decrease latency by up to 45%. Not all CPUs include Intel DSA, so please contact us to discuss your requirements.
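Applications typically reach DSA through a library rather than programming the hardware directly. The sketch below uses Intel's open-source Data Mover Library (DML); the names shown (dml::execute, dml::mem_move, dml::make_view, the dml::automatic execution path) are assumptions based on that library's high-level C++ API, so check the documentation for the version you install.

// Minimal sketch using Intel DML, which routes copy/move operations to DSA
// hardware when available. API names are assumptions taken from the DML
// high-level C++ interface; verify against the library documentation.
#include <dml/dml.hpp>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    std::vector<std::uint8_t> src(4096, 0xAB);
    std::vector<std::uint8_t> dst(4096, 0x00);

    // dml::automatic targets a DSA engine if present, otherwise falls back
    // to a software copy path.
    auto result = dml::execute<dml::automatic>(dml::mem_move,
                                               dml::make_view(src),
                                               dml::make_view(dst));

    if (result.status == dml::status_code::ok)
        std::cout << "copy completed (offloaded or software fallback)\n";
    else
        std::cout << "DML operation failed\n";
    return 0;
}

With the automatic execution path the same code runs unchanged on servers without DSA, simply falling back to a software copy.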

Network acceleration
Intel DLB, Dynamic Load Balancer, is a purpose-built accelerator that offloads from the CPU cores the job of distributing network processing, dynamically balancing these tasks across cores as the load varies. DLB accelerates network tasks such as creating vRANs, secure gateways, packet processing and content delivery networks. Not all CPUs include Intel DLB, so please contact us to discuss your requirements.

Crypto acceleration
Intel QAT, QuickAssist Technology, is a purpose-built accelerator that offloads from the CPU cores cryptography and compression tasks such as encryption, decryption, key protection and data compression. This leaves the CPU cores free to serve a larger number of clients. Not all CPUs include Intel QAT, so please contact us to discuss your requirements.

Accelerated HPC
Select 4th gen Xeon processors, known as Xeon Max CPUs, feature an additional 64GB of HBM2e memory. This can be configured either as a large cache in front of the DDR5 DRAM or as a flat memory pool alongside it, accelerating memory-bound HPC workloads such as modelling and simulation, manufacturing, AI and deep learning, and energy and earth systems.
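When the HBM2e runs in flat mode alongside DDR5, it typically appears to the operating system as additional NUMA nodes, so standard NUMA tooling can place hot data in it. Below is a minimal sketch using libnuma; the node number used is purely an assumption and should be checked with numactl --hardware on the actual system.

// Minimal sketch: place a buffer on a specific NUMA node with libnuma.
// In HBM flat mode the HBM2e capacity is typically exposed as extra NUMA
// nodes; node 2 below is an assumption - inspect `numactl --hardware` first.
// Link with -lnuma.
#include <numa.h>
#include <cstddef>
#include <cstdio>

int main() {
    if (numa_available() < 0) {
        std::printf("NUMA not available on this system\n");
        return 1;
    }
    const std::size_t size = 1 << 20;   // 1 MiB working buffer
    const int hbm_node = 2;             // assumed HBM node id
    void* buf = numa_alloc_onnode(size, hbm_node);
    if (!buf) {
        std::printf("allocation on node %d failed\n", hbm_node);
        return 1;
    }
    // ... use buf for a bandwidth-bound working set ...
    numa_free(buf, size);
    return 0;
}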

Expandable Memory
In addition to the 8TB of DDR5 DRAM and Optane Memory, Intel 4th gen Xeon CPUs also support memory and accelerator modules using the CXL 1.1 protocol. CXL (Compute Express Link) is an open standard for CPU-to-device and CPU-to-memory connections that uses the same physical slots as PCIe. CXL provides a unified, coherent memory space between CPUs and accelerators.

Persistent Memory
In addition to traditional DRAM, Intel 4th gen Xeon Scalable CPUs from the Platinum and Gold ranges support PMEM in the form of Intel Optane Memory 300-series DIMMs. Sitting on the memory bus gives PMEM DRAM-like access to data, meaning it offers nearly the same speed and latency as DRAM but with the non-volatility of NAND flash. PMEM offers numerous advantages such as lower access latencies than flash storage, support for larger datasets and hardware encryption. Compared to DRAM, PMEM modules come in much larger capacities and are less expensive per gigabyte. Servers can use PMEM modules as ultra-fast, persistent storage to accelerate tasks such as fraud detection, cyber threat analysis and web experience personalisation.
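In App Direct mode, Optane PMEM is normally exposed to applications as a DAX-capable filesystem that can be memory-mapped for direct load/store access. Below is a minimal sketch using PMDK's libpmem; the /mnt/pmem0 mount point is an assumption and depends on how the namespace was provisioned.

// Minimal sketch using PMDK's libpmem to write persistently to an Optane
// PMEM module exposed in App Direct mode as a DAX filesystem. The mount
// point /mnt/pmem0 is an assumption. Link with -lpmem.
#include <libpmem.h>
#include <cstdio>
#include <cstring>

int main() {
    size_t mapped_len = 0;
    int is_pmem = 0;
    char* addr = static_cast<char*>(pmem_map_file("/mnt/pmem0/example.dat",
                                                  4096, PMEM_FILE_CREATE,
                                                  0666, &mapped_len, &is_pmem));
    if (!addr) {
        std::perror("pmem_map_file");
        return 1;
    }
    const char msg[] = "persisted across power loss";
    std::memcpy(addr, msg, sizeof(msg));

    // Flush CPU caches so the data is durable; fall back to msync when the
    // mapping is not genuine persistent memory.
    if (is_pmem)
        pmem_persist(addr, sizeof(msg));
    else
        pmem_msync(addr, sizeof(msg));

    pmem_unmap(addr, mapped_len);
    return 0;
}
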
3XS Systems Server Solutions
As an Intel Platinum partner, Scan has developed a range of server solutions powered by the Intel Xeon Scalable platform. Built by our award-winning in-house 3XS Systems division, these solutions are optimised for a wide variety of workflows.