DGX in the Cloud

Deep learning solutions such as the NVIDIA DGX-1 provide unparalleled processing power, with up to 170 teraFLOPs of half-precision (FP16) compute performance available from a single server.

However, this level of performance comes with very specific power and cooling requirements, so the DGX-1 should only be installed in a qualified server facility. In addition, the high unit cost of the DGX-1 makes it a significant one-off capital investment.


Why Rent?

Renting a DGX-1 gives you the ultimate flexibility, allowing you to increase processing power when you have a project that requires a fast turn-around.

Flexible Rental

DGX-1 deep learning servers are available for rental to business and education customers on a flexible basis, with agreements starting from a minimum of one day depending on the size of your data and project. At the end of the rental period you have the option of buying the server at a discount or upgrading to a new server to take advantage of the latest technology.

How it Works

Scan 3XS Enterprise takes the risk for you, providing remote access to a DGX-1 server hosted in a secure datacentre. Depending on the size of your data, you can upload it to the server over the network, or it can be securely transported to our datacentre and loaded directly onto the server.

Features and Benefits

Flexible rental agreements from 1 to 12 months - only rent a server when you need the processing time
Invoiced monthly rather than requiring an up-front capital investment - defer payment while awaiting purchase approval
Remotely hosted server - no need to invest in your own server facility

The DGX-1 is a turnkey solution built specifically for deep learning; if you are seeing unexpected results or having technical problems, our engineers are available 24/7 to help you.

DGX-1 Specification

Introducing the world's first deep learning supercomputer, the DGX-1. Powered by eight NVIDIA Tesla P100 GPU accelerators built specifically for deep learning and machine learning, the DGX-1 lets you run experiments as quickly as possible, so you can explore multiple network architectures and accelerate your deep learning work.

Performance: 170 teraFLOPs of half-precision (FP16) peak performance
GPU Architecture: 8x NVIDIA Tesla P100 GPU accelerators based on the next-gen Pascal architecture
GPU RAM: 16GB next-gen HBM2 memory per GPU
CUDA Cores: 28672
CPUs: 2x Intel Xeon E5-2698 v4, 20 cores, 2.2GHz
System RAM: 512GB 2133MHz DDR4 ECC Registered
Storage: 4x 1.92TB SSDs in RAID 0
Network: Dual 10GbE
Operating System: Ubuntu Server
Chassis: 3U 19in rackmount chassis with 3200W PSU
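
As a quick sanity check on the headline figures, assuming NVIDIA's published per-GPU numbers for the Tesla P100 SXM2 module (roughly 21.2 teraFLOPs of FP16 peak performance and 3584 CUDA cores per GPU), the system totals work out as:

8 GPUs x 21.2 teraFLOPs FP16 per GPU = 169.6 teraFLOPs, rounded to the quoted 170 teraFLOPs
8 GPUs x 3584 CUDA cores per GPU = 28672 CUDA cores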