The Processor

Built from the ground up for data-intensive workloads, the IBM POWER9 processor is the only processor with state-of-the-art I/O subsystem technology, including next-generation NVIDIA NVLink, PCIe Gen4 and OpenCAPI. It can be combined with up to six class-leading NVIDIA GPU accelerator cards to deliver an agile AI experience, and it is integrated into the Scan AI ecosystem to offer comprehensive data scientist, engineering and configuration support.

Emerging analytics, AI and cognitive workloads

  • New core for stronger thread performance
  • Delivers 2x compute resource per socket
  • Built for acceleration – OpenPOWER solution enablement
  • Extreme processor performance with accelerator bandwidth and reduced latency

Highest bandwidth GPU attach

  • Advanced GPU/CPU interaction and memory sharing
  • High bandwidth direct attach memory
  • Coherent memory and virtual addressing capability for all accelerators
  • Open and flexible solutions built for a broad range of deployments and platforms

Power / packaging / cost optimizations for a range of platforms

  • Superior virtualization features: security, power management, QoS, interrupt
  • Industry-leading I/O subsystem with PCIe Gen4 to deliver 2x bandwidth to network and storage devices
  • The ideal platform for hardware acceleration
  • Ideal platform for developers, enabling open innovation

Large, flat, scale-up systems up to 16 sockets

  • Buffered memory for maximum capacity (up to 4TB / socket)
  • Leading RAS
  • Over 120MB of last level cache (>3x Skylake LLC size)
  • State of the art I/O and acceleration attachment signalling

The Solution

The IBM Power System Accelerated Compute Server (AC922) delivers unprecedented performance for modern HPC, analytics and artificial intelligence. Enterprises can now deploy data-intensive workloads, like deep learning frameworks and accelerated databases, with confidence. The AC922 enables the cutting-edge AI innovation data scientists desire, with the dependability IT requires. This is IT infrastructure redesigned for enterprise AI.

Faster I/O - up to 5.6x more I/O bandwidth than x86 servers

The AC922 includes a variety of next-generation I/O architectures, including PCIe Gen4, CAPI 2.0, OpenCAPI and NVLink. These interconnects provide 2 to 5.6 times the bandwidth of the PCIe Gen3 bus found in x86 servers for today's data-intensive workloads.
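As a rough illustration of where multipliers like these come from, the sketch below compares approximate peak per-direction link bandwidths. The figures and the three-bricks-per-GPU layout are our assumptions for a 4-GPU configuration, not IBM's published measurement methodology.

    # Approximate peak per-direction bandwidths in GB/s (assumed figures).
    PCIE_GEN3_X16 = 16           # 8 GT/s x 16 lanes with 128b/130b encoding
    PCIE_GEN4_X16 = 32           # PCIe Gen4 doubles the per-lane rate
    NVLINK2_BRICK = 25           # one NVLink 2.0 link ("brick")
    BRICKS_PER_GPU = 3           # assumed 4-GPU AC922 layout: 3 bricks per CPU:GPU pair

    nvlink_cpu_gpu = NVLINK2_BRICK * BRICKS_PER_GPU          # 75 GB/s per direction

    print(f"PCIe Gen4 vs Gen3:       {PCIE_GEN4_X16 / PCIE_GEN3_X16:.1f}x")   # ~2x
    print(f"NVLink 2.0 vs PCIe Gen3: {nvlink_cpu_gpu / PCIE_GEN3_X16:.1f}x")  # ~4.7x on peak numbers

Headline figures such as 5.6x presumably reflect aggregate NVLink bandwidth compared against real-world rather than peak PCIe Gen3 throughput, so the exact multiplier depends on how the comparison is measured.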

The Best GPUs

2-6 NVIDIA Tesla V100 GPUs with NVLink - the AC922 pairs POWER9 CPUs with NVIDIA Tesla V100 GPUs connected by NVLink, delivering up to 5.6x the bandwidth for each CPU:GPU pairing. It is the only server capable of delivering this I/O performance between CPUs and GPUs, providing massive throughput for HPC, deep learning and AI workloads.

Extraordinary CPUs

2x POWER9 CPUs, designed for AI - while blazingly fast on their own, POWER9 CPUs truly excel in their ability to maximize the performance of everything around them. Built for the AI era, POWER9 supports up to 5.6x more I/O and, with four hardware threads per core (SMT4), 2x more threads than its x86 contemporaries. In the AC922, POWER9 is available in configurations from 16 to 44 cores across the two sockets.

Enterprise-Ready

PowerAI DL frameworks with IBM support - the PowerAI deep learning frameworks simplify deep-learning deployment and performance tuning, and unlock a simpler end-to-end toolchain for AI users. Proven AI performance and scalability let you start with one node and then scale to a rack, or to thousands of nodes, with near-linear scaling efficiency.

Next Gen PCIe

PCIe Gen4 2x faster vs PCIe Gen3 in x86 - the AC922 is the industry's first server to feature the next generation of the industry-standard PCIe interconnect. PCIe Gen4 delivers approximately 2x the data bandwidth of the PCIe Gen3 interconnect found in x86 servers.

An Acceleration Superhighway

The AC922 is designed to fully exploit the capabilities of its GPU accelerators by eliminating I/O bottlenecks and sharing memory across GPUs and CPUs.

Designed for the AI Era

The AC922 was designed to accommodate the data-intensive characteristics of modern analytics, HPC and AI workloads.

Enterprise Grade

The AC922 combines the cutting-edge AI innovation data scientists desire with the dependability IT requires.

POWER9 Performance

Built for the AI era, the POWER9 CPU supports up to 5.6x more I/O and 2x more threads than its x86 competitors.

AC922 Specification

Feature               AC922
MTM                   8335-GTH | 8335-GTX
System Packaging      2U
Processor Sockets     2
Number of Cores       Up to 44
Number of GPUs        4 or 6 NVIDIA Tesla V100 GPUs (NVLink 2.0 attached)
Memory DIMM Slots     16
Memory (max)          1TB
HDD / SSD             Two SFF (2.5") SATA drives; max 4TB (HDD) or 7.68TB (SSD)
PCIe Gen4 Slots       4

The PowerAI Platform

PowerAI makes deep learning, machine learning, and AI more accessible and more performant. By combining this software platform for deep learning with IBM Power Systems, enterprises can rapidly deploy a fully optimized and supported platform for AI with blazing performance.

PowerAI Software Distribution

  • Deep Learning Frameworks & Enhancements: TensorFlow, Caffe, IBM Caffe, Watson APIs, IBM Research Distributed Deep Learning, Power Systems Large Model Support, AI Vision tools
  • Supporting Capabilities & Libraries: distributed frameworks, IBM Research AI Vision runtime, IBM Spectrum Conductor, NVIDIA DIGITS, OpenBLAS, Bazel, NVIDIA NCCL
  • IBM Services & Support: entire-stack support from IBM, pioneering AI research from IBM Research, education and certification, Power Systems optimization and testing
  • IBM Power accelerated servers, ideal for PowerAI: IBM Power AC922 - acceleration superhighway, designed for the AI era, enterprise grade, POWER9 performance

Distributed as easy to install binaries

A binary distribution ensures simple installation and distribution of the latest updates via a repository for Red Hat Enterprise Linux.

Includes most popular deep learning frameworks

PowerAI is built with leading deep learning frameworks that have been optimized for IBM Power Systems and are ready for enterprise deployment.

Includes all dependencies and libraries

PowerAI is built with the hundreds of dependencies and the optimal parallel libraries needed for deep learning performance.

Validated deep learning platform with each release

Seamless update experience - each release is tested on supported hardware and software configurations for functionality & performance.

Built for superior interfaces to unlock larger DL datasets

PowerAI takes advantage of the CPU:GPU NVLink interconnect to support and load larger deep learning models than ever before, and to train models on datasets that could never be used before.
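As a conceptual illustration only (this is not the PowerAI Large Model Support API), the sketch below keeps an oversized weight tensor in host memory and streams it to the GPU for compute, the kind of pattern the CPU:GPU NVLink connection makes practical; the tensor shapes and names are hypothetical.

    import tensorflow as tf

    # Hypothetical sketch: keep a large tensor in system (CPU) memory and use it
    # in GPU compute, relying on the CPU:GPU interconnect for the transfer.
    # This is NOT the PowerAI LMS API, only an illustration of the idea behind it.
    with tf.device('/CPU:0'):
        big_weights = tf.Variable(tf.random.normal([100_000, 4096]))  # lives in system RAM

    @tf.function
    def project(batch):
        with tf.device('/GPU:0'):
            # The CPU-resident weights are transferred to the GPU for this matmul;
            # on the AC922 that transfer travels over NVLink rather than PCIe.
            return tf.matmul(batch, big_weights, transpose_b=True)

    batch = tf.random.normal([256, 4096])
    print(project(batch).shape)  # (256, 100000)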

PowerAI Vision

PowerAI Vision includes an intuitive toolset that empowers subject matter experts to label, train, and deploy deep learning vision models, without coding or deep learning expertise. It includes the most popular deep learning frameworks and their dependencies, and it is built for easy and rapid deployment and increased team productivity.

• Streamline processes to label, train, monitor and deploy

• Train models to classify images and detect objects

• Introducing auto labeling with deep learning models

• Video analytics made easy for training and inference

• Extend AI solutions with custom models

• Deploy models on-premises, in the cloud, and on edge devices

PowerAI Enterprise

In addition to the portfolio of frameworks and vision tools available in PowerAI, PowerAI Enterprise offers features that allow almost unlimited scaling of models and clusters without compromising performance. For instance, Large Model Support (LMS) facilitates the use of system memory with little to no performance impact, yielding significantly larger and more accurate deep learning models. This benefits models where image sizes may be very large, such as in healthcare, where image stitching would otherwise be required.

Another feature is Distributed Deep Learning (DDL), which offers the ability to manage multiple GPU servers as a single entity and scale a model with almost no loss of performance.

This provides the ability to confidently scale a deep learning model as you add server nodes to your infrastructure as demand dictates. To find out more about PowerAI Enterprise please contact our AI team.
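To give a feel for what scaling a training job across GPUs and nodes looks like in code, the sketch below uses TensorFlow's stock MultiWorkerMirroredStrategy as a stand-in for DDL; it is not the DDL API itself, and the dataset and model here are placeholders.

    import tensorflow as tf

    # Stand-in for DDL: stock TensorFlow multi-worker data parallelism.
    # Workers are configured via the TF_CONFIG environment variable; PowerAI DDL
    # provides its own launcher and framework integration for the same kind of job.
    strategy = tf.distribute.MultiWorkerMirroredStrategy()

    def make_dataset():
        # Placeholder random data; a real job would use an ImageNet-style input pipeline.
        x = tf.random.normal([256, 224, 224, 3])
        y = tf.random.uniform([256], maxval=1000, dtype=tf.int32)
        return tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

    with strategy.scope():
        # Placeholder model; the scaling result cited below used ResNet-50 in Caffe.
        model = tf.keras.applications.ResNet50(weights=None, classes=1000)
        model.compile(optimizer='sgd',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])

    model.fit(make_dataset(), epochs=1)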

Scaling results using Caffe with PowerAI DDL to train a ResNet-50 model on the ImageNet-1K data set across 64 POWER8 servers with a total of 256 NVIDIA P100 GPUs (Source: IBM)

Find out more