DGX-1 Deep Learning Supercomputer

The NVIDIA DGX-1 deep learning system combines hardware and software to deliver faster, more accurate training of neural networks. Purpose-built for deep learning and AI analytics, the DGX-1 delivers performance equivalent to 250 conventional CPU-only servers.


DGX-1 Performance

First-generation DGX-1 with NVIDIA Pascal

The first-generation DGX-1 is powered by eight NVIDIA Tesla P100 GPU accelerators, each with 3584 CUDA cores and 16GB of RAM. Unlike conventional GPU servers, which use the PCI-E bus for communication between the host system and the Tesla cards, the DGX-1 uses NVLink, which is 5 to 12 times faster than PCI-E.
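As a rough illustration of that interconnect difference, the minimal sketch below times a copy between two of the GPUs. It assumes a Python environment with PyTorch installed (an assumption, not part of the stack described here) and is illustrative only.

    # Minimal sketch (assumes PyTorch): time a GPU-to-GPU copy to estimate
    # inter-GPU bandwidth; on NVLink-connected GPUs this is far higher than
    # the same copy routed over the PCI-E bus.
    import time
    import torch

    def gpu_copy_bandwidth_gb_s(src=0, dst=1, size_mb=256, iters=20):
        buf = torch.empty(size_mb * 1024 * 1024, dtype=torch.uint8,
                          device=f"cuda:{src}")
        torch.cuda.synchronize(src)
        start = time.time()
        for _ in range(iters):
            buf.to(f"cuda:{dst}")
        torch.cuda.synchronize(src)
        torch.cuda.synchronize(dst)
        return (size_mb / 1024.0) * iters / (time.time() - start)

    if torch.cuda.device_count() >= 2:
        print(f"GPU0 -> GPU1: {gpu_copy_bandwidth_gb_s():.1f} GB/s")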

Second-generation DGX-1 with NVIDIA Volta

The second-generation DGX-1 is powered by eight NVIDIA Tesla V100 GPU accelerators based on the new Volta architecture. These cutting-edge GPUs each combine 5120 CUDA cores, 640 of the latest Tensor Cores and 16GB of RAM, and are specifically designed for deep learning, delivering a massive 5x speed-up compared to the first-generation DGX-1. The second-generation DGX-1 is available to pre-order now: customers receive a first-generation DGX-1 immediately, with a free upgrade to V100 GPUs later this year.


NVLink unlocks the full performance of the eight Tesla V100 cards, so the DGX-1 delivers up to 960 teraFLOPS at half precision (FP16), the most common format used in deep learning calculations. This is a dramatic speed-up compared to deep learning on CPUs, saving you significant time and money.
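For context, that headline figure corresponds to the eight V100 GPUs combined, each contributing roughly 120 teraFLOPS of Tensor Core throughput. The sketch below is a rough, hypothetical way to estimate single-GPU FP16 matrix-multiply throughput; it assumes a Python environment with PyTorch installed and is not a benchmark supplied with the system.

    # Rough, hypothetical estimate of single-GPU FP16 matrix-multiply throughput
    # (assumes PyTorch); the DGX-1 figure above is the combined peak of all eight GPUs.
    import time
    import torch

    n, iters = 8192, 50
    a = torch.randn(n, n, device="cuda", dtype=torch.float16)
    b = torch.randn(n, n, device="cuda", dtype=torch.float16)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    tflops = 2 * n ** 3 * iters / (time.time() - start) / 1e12
    print(f"~{tflops:.0f} TFLOPS FP16 on one GPU")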

Time to train ResNet50, 90 epochs to solution:
DGX-1 with V100: 7.4 hours
8x GPU server: 18 hours
Dual CPU server: 711 hours

Half-precision (FP16) performance:
DGX-1 with V100: 960 TFLOPS
DGX-1 with P100: 170 TFLOPS
Dual CPU server: 3 TFLOPS

DGX-1 Architecture

1. NVIDIA Tesla V100
The first GPU architecture to incorporate Tensor Core technology designed for AI

2. Next Generation NVIDIA NVLink
High-speed interconnect delivering 300GB/s per GPU, around 10x faster than a PCI-E Gen3 x16 connection (see the sketch after this list).

3. Two Intel Xeon CPUs
For boot, storage management, and deep learning framework coordination.

4. Quad EDR IB
High bandwidth and low latency, with a total of 800Gb/s of communication bandwidth.

5. 3U Rackmount Case
Fits in a compact rack space, with a maximum system power draw of 3,200 Watts.
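The following minimal sketch, assuming a Python environment with PyTorch installed (an assumption, not part of the stack above), shows how components 1 and 2 appear from software: it lists the installed GPUs and checks which pairs can address each other directly.

    # Minimal sketch (assumes PyTorch): list the installed GPUs and check which
    # pairs support direct peer-to-peer access, the mechanism NVLink accelerates.
    import torch

    count = torch.cuda.device_count()
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB, "
              f"{props.multi_processor_count} SMs")
    for i in range(count):
        peers = [j for j in range(count)
                 if j != i and torch.cuda.can_device_access_peer(i, j)]
        print(f"GPU {i} has direct peer access to: {peers}")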


DGX-1 Software

The DGX-1 is much more than a GPU server; it is a deep learning solution comprising a finely tuned combination of hardware and software. Running a GPU-optimised version of Ubuntu Server Linux, the software stack comprises drivers, the NVDocker container tool, the deep learning SDK, the NVIDIA Cloud Management Service and NVIDIA DIGITS, which is used to run deep learning frameworks such as Caffe, Torch, TensorFlow and many more.

The operating system (Ubuntu Server Linux) is optimised to take advantage of the hardware and software features of the system and CUDA 8, especially with respect to memory management and hardware communication.

The deep learning frameworks provided with the system are optimised, among other enhancements, to take advantage of the NVLink links for fast multi-GPU communication within the system.
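As an illustration of what multi-GPU training looks like from the framework side, the minimal sketch below spreads a forward and backward pass across all visible GPUs. It assumes a Python environment with PyTorch installed and stands in for the framework-specific multi-GPU APIs, which differ between Caffe, Torch and TensorFlow.

    # Minimal, hypothetical data-parallel sketch (assumes PyTorch). Each GPU
    # receives a slice of the batch; the per-GPU results and gradients are then
    # combined, and on the DGX-1 that inter-GPU traffic can travel over NVLink.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))
    model = nn.DataParallel(model.cuda())       # replicate across all visible GPUs

    x = torch.randn(512, 1024, device="cuda")   # batch is split across the GPUs
    loss = model(x).sum()                        # dummy loss for illustration
    loss.backward()                              # gradients gathered for the update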

Deep Learning Software

Deep Learning Frameworks
Deep Learning User Software: NVIDIA DIGITS
Accelerated Solutions
Management: NVIDIA Cloud Management Service
Deep Learning Libraries: NVIDIA cuDNN and NCCL
Containerisation Tool: NVDocker
GPU Driver: NVIDIA GPU Compute Driver Software
System: GPU-Optimised Linux Server OS

DGX-1 Specification

Generation: First-generation DGX-1 with NVIDIA Pascal, or second-generation DGX-1 with NVIDIA Volta
GPUs: 8x NVIDIA Tesla P100 (Pascal) or 8x NVIDIA Tesla V100 (Volta)
CUDA cores: 28,672 (Pascal) or 40,960 (Volta)
Tensor cores: 0 (Pascal) or 5,120 (Volta)
GPU RAM: 16GB next-gen HBM2 memory per GPU
CPUs: 2x Intel Xeon E5-2698 v4
CPU cores: 40 physical cores (80 threads with Hyper-Threading)
System RAM: 512GB ECC Registered DDR4 2133MHz
Storage: 1x 480GB Intel S3610 SSD for OS, 4x 1.92TB SSDs in RAID 0 for data
Network: 2x Intel 10 Gigabit LAN, 4x Mellanox MCX455A-ECAT InfiniBand EDR, 1x Gigabit management LAN
Operating System: Ubuntu Linux Server
Power Supply: Redundant 3200W
Form Factor: 3U rackmount
Dimensions: 444 x 866 x 131mm (W x D x H)
Weight: 61kg
Operating Ambient Temperature: 10 – 30°C

DGX-1 Support

The DGX-1 is provided with comprehensive support, which includes access to NVIDIA’s Cloud Management Service for remote access to the system. Support facilities include:

Direct access to NVIDIA technical engineers
24/7 phone support
Software upgrades and updates
Hardware support, including firmware upgrades, remote and onsite support
Next-day shipment of replacement parts
Online knowledge base

Flexible Deployment

The DGX-1 is available for a free proof-of-concept trial before purchase, or as a hosted cloud solution paid for by monthly subscription.

Contact Us

Scan is an NVIDIA Elite Solution Provider for Deep Learning and offers a range of reliable, high-performance server solutions, fully customisable to your needs.

01204 474747