Deep Learning & AI - Inferencing

Deep Learning Inferencing Servers

These custom-built deep learning servers are optimised for inferencing using the latest NVIDIA Tesla GPU accelerators. The following systems are fully configurable to meet your requirements; if you can't see the specification you would like, please call 01204 474747 or email [email protected].


3XS Deep Learning GI2000-AR1
  • Up to 2x NVIDIA Tesla T4 inferencing GPUs
  • 2x Intel Xeon Scalable CPUs with up to 26 cores
  • Up to 1TB 6-channel DDR4 2666 ECC Registered memory
  • 2x M.2 SSD, 8x 3.5in HDDs/SSDs
  • Dual 800W PSUs
  • 2U 19in rack server
  • 3 Year Premium Warranty

This compact 2U server supports up to two NVIDIA Tesla T4 GPUs. These GPUs combine CUDA and Tensor cores and are specifically designed to accelerate inferencing workloads, offering up to a 36x speed-up compared with inferencing on a CPU. The host server supports a pair of Intel Xeon CPUs plus multiple SSDs and hard disks.
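
To illustrate the kind of workload these GPUs target, the sketch below times a forward pass of an image-classification network on the CPU and then in half precision on the GPU. This is a minimal illustration, not vendor code: it assumes a recent PyTorch/torchvision install and a visible CUDA GPU such as the T4, and the ResNet-50 model, batch size and run count are arbitrary choices; the measured ratio will vary with model, batch size and hardware.

    # Minimal sketch: compare CPU FP32 inference with FP16 GPU inference
    # (e.g. on a Tesla T4). Assumes PyTorch and torchvision are installed
    # and a CUDA-capable GPU is visible to the driver.
    import time
    import torch
    import torchvision.models as models

    model = models.resnet50(weights=None).eval()   # example network, random weights
    batch = torch.randn(16, 3, 224, 224)           # dummy image batch

    def mean_latency(net, x, runs=10):
        # Average forward-pass time over several runs, after one warm-up pass.
        with torch.no_grad():
            net(x)
            if x.is_cuda:
                torch.cuda.synchronize()
            start = time.perf_counter()
            for _ in range(runs):
                net(x)
            if x.is_cuda:
                torch.cuda.synchronize()
        return (time.perf_counter() - start) / runs

    cpu_s = mean_latency(model, batch)
    print(f"CPU FP32: {cpu_s * 1000:.1f} ms per batch")

    if torch.cuda.is_available():
        # Half precision routes the matrix maths through the GPU's Tensor cores.
        gpu_model = model.half().cuda()
        gpu_batch = batch.half().cuda()
        gpu_s = mean_latency(gpu_model, gpu_batch)
        print(f"GPU FP16: {gpu_s * 1000:.1f} ms per batch "
              f"({cpu_s / gpu_s:.1f}x faster)")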

Award Winning · Next Day Delivery · Priority Build Option · Professional Build

System Powered By
Intel Xeon and NVIDIA Tesla
Order now for delivery between Thursday 22 Nov and Friday 23 Nov
Price from £4,842.83 ex VAT (£5,811.40 inc VAT)