The DGX Station is a complementary product to the DGX-1 server, enabling you to develop deep neural networks in a low-latency environment before scaling them out to a data centre. Unlike earlier deskside deep learning workstations, the DGX Station is specifically designed for the office, with special low-noise components and a convenient tower form factor.
Like the DGX-1 server, the DGX Station is based on the latest NVIDIA Tesla V100 GPUs. These are based on the innovative Volta architecture that combines the performance of thousands of traditional CUDA cores with hundreds of the new Tensor cores that are specifically optimised for deep learning. With four Tesla V100 cards, the DGX Station has 20,480 CUDA cores and 2,560 Tensor cores, providing a dramatic performance increase over previous generation deep learning workstations.
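The totals above follow directly from the per-GPU counts of the Tesla V100, which has 5,120 CUDA cores and 640 Tensor cores per card:

```python
# Per-GPU core counts for the Tesla V100 (Volta architecture).
cuda_cores_per_gpu = 5120
tensor_cores_per_gpu = 640
num_gpus = 4  # the DGX Station carries four V100 cards

total_cuda = num_gpus * cuda_cores_per_gpu      # 20,480 CUDA cores
total_tensor = num_gpus * tensor_cores_per_gpu  # 2,560 Tensor cores
print(f"{total_cuda} CUDA cores, {total_tensor} Tensor cores")
```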
[Benchmark charts: time to train ResNet50 (90 epochs to solution) and half-precision (FP16) performance, comparing a 4x GPU workstation against a dual-CPU server.]
Designed from the ground up for deep learning and data analytics, the DGX Station packs in 480 TFLOPS of FP16 performance. Its four Tesla V100 GPUs are linked together by NVIDIA's NVLink interconnect, which communicates at 200GB/sec, more than five times faster than PCI-E, which operates at 32GB/sec. The diagram below highlights some of the key components in the DGX Station.
| # | Component | Description |
|---|---|---|
| 1 | NVIDIA Tesla V100 | The most advanced AI GPU ever built, optimised for AI with the Tensor Core architecture. |
| 2 | Next-Generation NVIDIA NVLink | High-speed interconnect at 200GB/s per GPU, 5x faster than 16x PCI-E 3.0 interconnections. |
| 3 | Water Cooling | Designed to be whisper quiet, making it a perfect addition to your desk while still being optimised for performance. |
| 4 | Intel Xeon CPU | Optimised for boot, storage management and deep learning framework coordination. |
| 5 | Three DisplayPorts | All fully supporting 4K monitor resolution. |
| 6 | 1/20th the Power Consumption | One DGX Station has the compute capacity of four racks of servers while requiring far less power. |
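To put the interconnect figures in context, here is a quick back-of-the-envelope sketch (the 1GB buffer size is a hypothetical example, not a spec) of how long a single transfer takes at the 200GB/s NVLink rate versus 32GB/s PCI-E:

```python
# Approximate time to move a 1 GB buffer (hypothetical size) over each bus.
buffer_gb = 1.0
nvlink_gbps = 200.0   # NVLink bandwidth per GPU, from the table above
pcie_gbps = 32.0      # 16x PCI-E 3.0 bandwidth

nvlink_ms = buffer_gb / nvlink_gbps * 1000  # 5.00 ms
pcie_ms = buffer_gb / pcie_gbps * 1000      # 31.25 ms
print(f"NVLink: {nvlink_ms:.2f} ms, PCI-E: {pcie_ms:.2f} ms "
      f"({pcie_ms / nvlink_ms:.2f}x slower)")
```

In practice achievable bandwidth is below the theoretical peak, but the ratio between the two buses is what matters for multi-GPU training.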
The DGX Station is much more than a GPU workstation: it is a deep learning solution comprising a finely tuned combination of hardware and software. Running a GPU-optimised version of Ubuntu Server Linux, the software stack comprises drivers, the NVDocker container tool, the deep learning SDK, NVIDIA Cloud Management Service, plus NVIDIA DIGITS, which is used to run deep learning frameworks such as Caffe, Torch, TensorFlow and many more.
The Operating System (Ubuntu Server Linux) is optimised to take advantage of the hardware and software features of the system and CUDA 8, especially with respect to memory management and hardware communications.
The deep learning frameworks provided with the system are specially tuned to take advantage of the NVLink interconnect, among other enhancements, in order to optimise multi-GPU communication within the system.
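Multi-GPU training typically synchronises gradients with a ring all-reduce, the collective implemented by NCCL in the software stack. As a sketch of why interconnect bandwidth matters, each of the N GPUs sends and receives roughly 2(N-1)/N times the gradient buffer size per synchronisation:

```python
# Bytes each GPU transfers in a ring all-reduce of a buffer_bytes gradient
# buffer: 2 * (N - 1) / N * buffer_bytes (reduce-scatter + all-gather phases).
def ring_allreduce_bytes_per_gpu(buffer_bytes: float, num_gpus: int) -> float:
    return 2 * (num_gpus - 1) / num_gpus * buffer_bytes

# Hypothetical 1 GB of FP32 gradients across the station's four GPUs.
per_gpu = ring_allreduce_bytes_per_gpu(1e9, 4)
print(f"{per_gpu / 1e9:.2f} GB moved per GPU")  # 1.50 GB per GPU
```

At NVLink speeds that 1.5GB of traffic per step is hidden far more easily than over PCI-E, which is why the frameworks are tuned for the faster link.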
| Layer | Component |
|---|---|
| Deep Learning Frameworks | Caffe, Torch, TensorFlow and more |
| Deep Learning User Software | NVIDIA DIGITS |
| Management | NVIDIA Cloud Management Service |
| Deep Learning Libraries | NVIDIA cuDNN and NCCL |
| GPU Driver | NVIDIA GPU Compute Driver Software |
| System | GPU-Optimised Linux Server OS |
| Specification | Detail |
|---|---|
| GPUs | 4x NVIDIA Tesla V100 |
| GPU RAM | 16GB next-gen HBM2 memory per GPU |
| CPU | Intel Xeon E5-2698 v4 |
| CPU Cores | 20 physical + 20 HyperThreading |
| System RAM | 256GB ECC Registered DDR4 2133MHz |
| Storage | 1.92TB SSD for OS, 3x 1.92TB SSD (RAID 0) for data |
| Display Outputs | 3x DisplayPort |
| Network | Dual 10Gb LAN |
| Operating System | Ubuntu Linux Server |
| Dimensions | 256 x 518 x 639mm (W x D x H) |
| Operating Ambient Temperature | 10 – 30°C |
| Noise | Up to 35 dB |
With the DGX Station you get access to Scan’s in-house system engineers and data scientists plus enterprise-grade support with deep learning expertise from NVIDIA. Support facilities include:
- Direct access to NVIDIA technical engineers
- 24/7 phone support
- Software upgrades and updates
- Hardware support, including firmware upgrades, remote and onsite support
- Next-day shipment of replacement parts
- Online knowledge base
Scan 3XS offers a range of reliable, high performance server solutions, fully customisable to your needs.