DGX Station Desktop Workstation

The DGX Station is a complementary product to the DGX-1 server, enabling you to develop deep neural networks in a low-latency environment before scaling them out to a data centre. Unlike earlier deskside deep learning workstations, the DGX Station is designed specifically for the office, with low-noise components and a convenient tower form factor.

Deep learning hardware

Like the DGX-1 server, the DGX Station is based on the latest NVIDIA Tesla V100 GPUs. These are based on the innovative Volta architecture that combines the performance of thousands of traditional CUDA cores with hundreds of the new Tensor cores that are specifically optimised for deep learning.

With four Tesla V100 cards, the DGX Station has 20,480 CUDA cores and 2,560 Tensor cores, providing a dramatic performance increase over previous generation deep learning workstations.
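As a rough sanity check on those totals, here is a minimal sketch assuming the published per-GPU figures for the Tesla V100 (5,120 CUDA cores and 640 Tensor cores per card):

```python
# Per-GPU core counts for the Tesla V100 (published Volta specifications).
CUDA_CORES_PER_GPU = 5120
TENSOR_CORES_PER_GPU = 640
NUM_GPUS = 4  # the DGX Station carries four V100 cards

total_cuda = NUM_GPUS * CUDA_CORES_PER_GPU      # 20,480
total_tensor = NUM_GPUS * TENSOR_CORES_PER_GPU  # 2,560

print(f"CUDA cores:   {total_cuda:,}")
print(f"Tensor cores: {total_tensor:,}")
```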

Half-precision (FP16) performance:
DGX Station: 480 TFLOPS
4x GPU Workstation: 170 TFLOPS
Dual CPU Server: 3 TFLOPS

Time to train ResNet-50 (90 epochs to solution):
DGX Station: 15 hours
4x GPU Workstation: 18 hours
Dual CPU Server: 711 hours
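The time-to-train figures above translate into speedup factors as follows (a simple sketch using the quoted hours, with the dual-CPU server as the baseline):

```python
# Quoted time-to-train figures (ResNet-50, 90 epochs to solution).
hours = {"DGX Station": 15, "4x GPU Workstation": 18, "Dual CPU Server": 711}

baseline = hours["Dual CPU Server"]
for system, h in hours.items():
    print(f"{system}: {baseline / h:.1f}x faster than the dual-CPU server")
```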

DGX Station Architecture

Packing 480 TFLOPS of FP16 performance, the DGX Station is designed for deep learning and data analytics from the ground up. Its four Tesla V100 GPUs are linked together by the NVLink bus, which communicates at 200GB/sec, more than five times faster than PCI-E at 32GB/sec. The diagram below highlights some of the key components in the DGX Station.
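To illustrate what that interconnect difference means in practice, here is a minimal sketch comparing transfer times at the quoted bandwidths (the 1GB buffer size is an arbitrary assumption for illustration):

```python
NVLINK_GBPS = 200  # NVLink bandwidth quoted above, in GB/s
PCIE_GBPS = 32     # PCI-E bandwidth quoted above, in GB/s
BUFFER_GB = 1.0    # hypothetical buffer exchanged between GPUs

t_nvlink_ms = BUFFER_GB / NVLINK_GBPS * 1000
t_pcie_ms = BUFFER_GB / PCIE_GBPS * 1000

print(f"NVLink: {t_nvlink_ms:.1f} ms, PCI-E: {t_pcie_ms:.2f} ms, "
      f"ratio: {NVLINK_GBPS / PCIE_GBPS:.2f}x")
```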

Key components shown in the diagram:
NVIDIA Tesla V100 GPUs
Intel Xeon CPU
Next-generation NVIDIA NVLink
Three DisplayPorts
Water cooling
1/20 the power consumption


The DGX Station is much more than a GPU workstation; it is a deep learning solution comprising a finely tuned combination of hardware and software. Running a GPU-optimised version of Ubuntu Server Linux, the software stack comprises drivers, the NVIDIA Docker container tool, the deep learning SDK, the NVIDIA Cloud Management Service and NVIDIA DIGITS, which is used to run deep learning frameworks such as Caffe, Torch, TensorFlow and many more.

The operating system (Ubuntu Server Linux) is optimised to take advantage of the hardware and software features of the system and CUDA 9, especially with respect to memory management and hardware communications.

The deep learning frameworks provided with the system are especially optimised, among other enhancements, to take advantage of the NVLink communication links and so maximise multi-GPU communication performance.


DGX Station
GPUs 4 x NVIDIA Tesla V100
CUDA Cores 20,480
Tensor Cores 2,560
GPU RAM 32GB next-gen HBM2 memory per GPU
CPUs Intel Xeon E5 2698 v4
CPU Cores 20 physical, 40 logical with Hyper-Threading
System RAM 256GB ECC Registered DDR4 2133MHz
Storage 1.92TB SSD for OS, 3 x 1.92TB SSD (RAID 0) for data
Display Outputs 3x DisplayPort
Network Dual 10Gb LAN
Operating System Ubuntu Linux Server
Form Factor Tower
Dimensions 256 x 518 x 639 mm (WxDxH)
Weight 40 kg
Operating Ambient Temperature 10 - 30°C
Noise Up to 35 dB
