NVIDIA DGX-2 deep learning supercomputer
The world's most powerful AI system
NVIDIA DGX-2 - innovate even faster
Experience new levels of AI speed and scale with NVIDIA DGX-2, the first 2 petaFLOPS system, combining 16 fully interconnected GPUs for 10x the deep learning performance of its predecessor, the DGX-1. It’s powered by NVIDIA DGX software and a scalable architecture built on NVIDIA NVSwitch, so you can take on the world’s most complex AI challenges. The DGX-2 is only available from NVIDIA Elite Solution providers.
Deep Learning Performance
The DGX-2 is powered by sixteen NVIDIA Tesla V100 GPU accelerators based on the Volta architecture. Each of these cutting-edge GPUs combines 5,120 CUDA cores and 640 of the latest Tensor Cores with 32 GB of HBM2 memory, and is designed specifically for deep learning, delivering up to a 10x speed-up compared to the DGX-1.
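The per-GPU figures above can be multiplied out to give the system-wide totals quoted in the specification table further down. A minimal sketch of that arithmetic (the counts come from the text; nothing here queries real hardware):

```python
# DGX-2 system totals, derived from the per-GPU figures in the text.
NUM_GPUS = 16
CUDA_CORES_PER_GPU = 5_120
TENSOR_CORES_PER_GPU = 640
HBM2_GB_PER_GPU = 32

total_cuda_cores = NUM_GPUS * CUDA_CORES_PER_GPU      # 81,920 CUDA cores
total_tensor_cores = NUM_GPUS * TENSOR_CORES_PER_GPU  # 10,240 Tensor Cores
total_hbm2_gb = NUM_GPUS * HBM2_GB_PER_GPU            # 512 GB HBM2

print(total_cuda_cores, total_tensor_cores, total_hbm2_gb)
```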
Leading edge research demands the freedom to leverage model parallelism and requires never-before-seen levels of inter-GPU bandwidth. NVIDIA has created NVSwitch to address this need. Like the evolution from dial-up to ultra-high speed broadband, NVSwitch delivers a networking fabric for the future, today.
Unbeatable Compute Power
Increasingly complex AI demands unprecedented levels of compute. NVIDIA DGX-2 is the world’s first 2 petaFLOPS system, packing the power of 16 of the world’s most advanced GPUs and accelerating new deep learning model types that were previously untrainable. With groundbreaking GPU scale, you can train models 4x bigger on a single node, with 10x the performance of an 8-GPU system. For comparison, matching the DGX-2’s ResNet-50 training performance on legacy x86 architectures would require the equivalent of 300 servers with dual Intel Xeon Gold CPUs, costing over $2.7 million.
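The 2 petaFLOPS headline figure follows directly from the GPU count, assuming NVIDIA’s quoted peak of roughly 125 TFLOPS of mixed-precision Tensor Core throughput per Tesla V100 (SXM). A back-of-envelope check:

```python
# Back-of-envelope check of the 2 petaFLOPS headline figure.
# Assumes NVIDIA's quoted peak of ~125 TFLOPS mixed-precision
# Tensor Core throughput per Tesla V100 (SXM).
TFLOPS_PER_V100 = 125
NUM_GPUS = 16

system_tflops = NUM_GPUS * TFLOPS_PER_V100
system_pflops = system_tflops / 1_000

print(f"{system_tflops} TFLOPS = {system_pflops} PFLOPS")  # 2000 TFLOPS = 2.0 PFLOPS
```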
NVIDIA DGX-2 Delivers 10 Times Faster Deep Learning Training
NVIDIA Tesla V100 32GB, SXM3
16 GPUs in total, across two GPU boards
12 NVSwitches for high-speed GPU interconnect
8x EDR InfiniBand/100 GbE network ports
PCIe switch complex
Two Intel Xeon Platinum CPUs
1.5 TB system memory
Dual 10/25 GbE Ethernet
30 TB of internal NVMe SSD storage
The DGX-2 is much more than a GPU server: it is a deep learning solution comprising a finely tuned combination of hardware and software. Running a GPU-optimised version of Ubuntu Server Linux, the software stack comprises drivers, the nvidia-docker container tool, the deep learning SDK, the NVIDIA Cloud Management Service, and NVIDIA DIGITS, which is used to run deep learning frameworks such as Caffe, Torch, TensorFlow and many more.
The operating system (Ubuntu Server Linux) is optimised to take advantage of the hardware and software features of the system and of CUDA 9, especially with respect to memory management and hardware communications.
The deep learning frameworks provided with the system are optimised, among other enhancements, to take advantage of the NVLink communication links in order to maximise multi-GPU communication performance within the system.
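In practice, these frameworks ship as pre-built, GPU-optimised containers from NVIDIA’s container registry and are launched through the container tool mentioned above. A hedged sketch of a typical workflow (the `nvcr.io` registry path follows NVIDIA’s convention, but the image tag and dataset path are placeholders you would replace with your own values):

```shell
# Pull a framework container from NVIDIA's container registry
# (replace <tag> with a current release tag).
docker pull nvcr.io/nvidia/tensorflow:<tag>

# Launch it with GPU access via the nvidia-docker runtime,
# mounting a local dataset directory into the container.
nvidia-docker run --rm -it \
    -v /path/to/datasets:/datasets \
    nvcr.io/nvidia/tensorflow:<tag>
```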
| | DGX-2 with NVIDIA Volta |
|---|---|
| GPUs | 16x NVIDIA Tesla V100 (8 per board) |
| GPU Memory | 512 GB total HBM2 (32 GB per GPU) |
| NVIDIA CUDA Cores | 81,920 |
| NVIDIA Tensor Cores | 10,240 |
| Maximum Power Usage | 10 kW |
| CPU | Dual Intel Xeon Platinum 8168, 2.7 GHz, 24 cores |
| Network | 8x 100 Gb/s InfiniBand/100 GbE; dual 10/25 GbE Ethernet |
| Storage | OS: 2x 960 GB NVMe SSDs; internal storage: 30 TB (8x 3.84 TB) NVMe SSDs |
| Software | Ubuntu Linux OS (see software stack for details) |
| System Weight | 340 lbs (154.2 kg) |
| System Dimensions | Height: 17.3 in (440.0 mm); width: 19.0 in (482.3 mm); length: 31.3 in (795.4 mm) without front bezel, 32.8 in (834.0 mm) with front bezel |
| Operating Temperature Range | 5°C to 35°C (41°F to 95°F) |
Try DGX solutions in the cloud
We want you to be sure that DGX is right for you, so we provide the ability to run a proof of concept with your own data on one of our deep learning servers. Hosted in a secure datacentre, we will provide you with remote access to a DGX solution so you can evaluate and benchmark it. Enquire now