DGX-1 Deep Learning Supercomputer

The NVIDIA DGX-1 deep learning system combines hardware and software to deliver faster and more accurate training of neural networks. Purpose-built for deep learning and AI analytics, the DGX-1 delivers performance equivalent to 250 conventional CPU-only servers.

DGX-1 Performance

The DGX-1 is powered by eight NVIDIA Tesla P100 GPU accelerators, each with 3,584 CUDA cores and 16GB of HBM2 memory. Unlike conventional GPU servers, which use the PCIe bus for communication between the host system and the Tesla cards, the DGX-1 uses NVLink, which is 5 to 12 times faster than PCIe.
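The bandwidth claim can be sanity-checked with some back-of-envelope arithmetic. The figures used below (20 GB/s per direction per NVLink 1.0 link, four links per Tesla P100, and roughly 16 GB/s per direction for PCIe 3.0 x16) are assumptions not stated in this datasheet:

```python
# Back-of-envelope NVLink vs PCIe bandwidth comparison.
# Assumed figures (not stated in the datasheet above):
#   - NVLink 1.0: 20 GB/s per direction per link, 4 links per Tesla P100
#   - PCIe 3.0 x16: roughly 16 GB/s per direction
NVLINK_GBPS_PER_LINK = 20
LINKS_PER_GPU = 4
PCIE_X16_GBPS = 16

nvlink_total = NVLINK_GBPS_PER_LINK * LINKS_PER_GPU
speedup = nvlink_total / PCIE_X16_GBPS

print(f"NVLink aggregate: {nvlink_total} GB/s per direction")  # 80 GB/s
print(f"Speed-up over PCIe 3.0 x16: {speedup:.0f}x")           # 5x
```

This lands at the low end of the quoted 5-to-12x range; where a workload falls within that range depends on configuration and traffic pattern.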

NVIDIA Tesla P100

NVLink unlocks the full performance of the eight Tesla P100 cards, so the DGX-1 delivers up to 170 teraFLOPS at half precision (FP16), the most common format used in deep learning calculations. This is a dramatic speed-up compared with deep learning on CPUs, saving you significant time and money.
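The headline figure can be reproduced from the per-GPU peak. The 21.2 teraFLOPS FP16 peak commonly quoted for the SXM2 Tesla P100 is an assumption here, not a figure given in the datasheet:

```python
# Aggregate FP16 throughput across the DGX-1's eight Tesla P100s.
# Assumes the commonly quoted SXM2 P100 FP16 peak of 21.2 teraFLOPS
# per GPU (not stated in the datasheet itself).
FP16_TFLOPS_PER_P100 = 21.2
NUM_GPUS = 8

total_tflops = FP16_TFLOPS_PER_P100 * NUM_GPUS
print(f"Aggregate FP16 peak: {total_tflops:.1f} teraFLOPS")  # 169.6, i.e. "up to 170"
```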



Chart: half-precision (FP16) performance. Time to train on 1.28M images using Caffe with 90 epochs: 2 hours on the DGX-1 versus 150 hours on a CPU-only server.

DGX-1 Software

The DGX-1 is much more than a GPU server; it is a deep learning solution comprising a finely tuned combination of hardware and software. Running a GPU-optimised version of Ubuntu Server Linux, the software stack comprises drivers, the NVDocker container tool, the deep learning SDK, the NVIDIA Cloud Management Service, and NVIDIA DIGITS, which is used to run deep learning frameworks such as Caffe, Torch, TensorFlow and many more.

The operating system (Ubuntu Server Linux) is optimised to take advantage of the hardware and software features of the system and of CUDA 8, especially with respect to memory management and hardware communications.

The deep learning frameworks provided with the system are specially optimised to take advantage of the NVLink interconnect, among other enhancements, in order to accelerate multi-GPU communication within the system.
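In practice, frameworks delegate this multi-GPU communication to NCCL, whose collective operations are built on ring algorithms. A minimal pure-Python sketch of a ring all-reduce, the collective used to sum gradients across GPUs during training, follows; it simulates ranks with plain lists and is illustrative only, not NCCL's implementation:

```python
# Illustrative ring all-reduce (sum) over n simulated "GPUs".
# Two phases of n-1 steps each: reduce-scatter, then all-gather.
def ring_allreduce(data):
    n = len(data)                      # number of ranks (simulated GPUs)
    size = len(data[0])
    assert size % n == 0, "buffer must split evenly into n chunks"
    c = size // n                      # elements per chunk
    buf = [list(rank) for rank in data]  # working copies

    def chunk(r, k):
        return buf[r][k * c:(k + 1) * c]

    def set_chunk(r, k, values):
        buf[r][k * c:(k + 1) * c] = values

    # Reduce-scatter: in step s, rank r sends chunk (r - s) % n to the
    # next rank, which accumulates it. Sends are snapshotted first to
    # model all ranks transmitting simultaneously.
    for s in range(n - 1):
        sends = [(r, (r - s) % n, chunk(r, (r - s) % n)) for r in range(n)]
        for r, k, vals in sends:
            dst = (r + 1) % n
            set_chunk(dst, k, [a + b for a, b in zip(chunk(dst, k), vals)])

    # All-gather: rank r now holds the fully reduced chunk (r + 1) % n;
    # circulate the reduced chunks once around the ring.
    for s in range(n - 1):
        sends = [(r, (r + 1 - s) % n, chunk(r, (r + 1 - s) % n)) for r in range(n)]
        for r, k, vals in sends:
            set_chunk((r + 1) % n, k, vals)

    return buf

# Example: two ranks, four elements each; both end with elementwise sums.
print(ring_allreduce([[1, 2, 3, 4], [5, 6, 7, 8]]))
# [[6, 8, 10, 12], [6, 8, 10, 12]]
```

Each rank transmits roughly 2(n-1)/n times its buffer size in total, independent of ring length, which is why ring collectives scale well across the eight GPUs.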

Deep Learning Software

The DGX-1 software stack, from top to bottom:

Deep Learning Frameworks
Deep Learning User Software: NVIDIA DIGITS
Accelerated Solutions
Management: NVIDIA Cloud Management Service
Deep Learning Libraries: NVIDIA cuDNN and NCCL
Containerization Tool: NVDocker
GPU Driver: NVIDIA GPU Compute Driver Software
System: GPU-Optimized Linux Server OS

DGX-1 Specification

GPUs 8x NVIDIA Tesla P100
CUDA cores 28,672
GPU RAM 16GB next-gen HBM2 memory per GPU
CPUs 2x Intel Xeon E5-2698 v4
CPU Cores 40 physical, 80 with Hyper-Threading
System RAM 512GB ECC Registered DDR4 2133MHz
Storage 1x 480GB Intel S3610 SSD for OS, 4x 1.92TB SSDs in RAID 0 for data
Network 2x Intel 10 Gigabit LAN, 4x Mellanox MCX455A-ECAT InfiniBand EDR, 1x Gigabit management LAN
Operating System Ubuntu Linux Server
Power Supply Redundant 3200W
Form Factor 3U rackmount
Dimensions 444 x 866 x 131mm (W x D x H)
Weight 61kg
Operating Ambient Temperature 10 – 30°C

DGX-1 Support

The DGX-1 is provided with comprehensive support, which includes access to NVIDIA’s Cloud Management Service for remote access to the system. Support facilities include:

Direct access to NVIDIA technical engineers
24/7 phone support
Software upgrades and updates
Hardware support, including firmware upgrades, remote and onsite support
Next-day shipment of replacement parts
Online knowledge base

Flexible Deployment

The DGX-1 is available for a free proof-of-concept trial before purchase, or as a hosted solution in the cloud for a monthly subscription.