

High performance training solutions
Using a custom training system for deep learning and AI workloads gives you ultimate control: you can choose the ideal specification for your projects, and you can build in flexibility as required. A system can be configured so that no resources are under-utilised, or a larger chassis can be partially populated at purchase, leaving space to scale at a later date. The choice is yours.
Every 3XS custom training system is almost infinitely configurable, from accelerator cards to CPUs, memory to storage, right through to connectivity, power, cooling and software - all from the market-leading component brands listed below.

NVIDIA GPU Accelerators
The NVIDIA Ampere family of GPU accelerator cards represents the cutting edge in performance for all AI workloads, offering unprecedented compute density, performance, and flexibility to deliver up to 5 petaFLOPS AI performance in a single system. The high-end NVIDIA A100 accelerator is available in either standard PCIe or high-density SXM4 formats featuring HBM2 memory, with the mid-range A30 accelerator available as a PCIe card. All of these passively cooled Ampere GPUs offer the flexibility to be installed in a wide variety of both air- and liquid-cooled server chassis.
NVIDIA HGX Server
8x A100 SXM4
4x A100 SXM4
NVIDIA EGX Server
A100 PCIe Gen 4
A30 PCIe Gen 4
Alternative Accelerators
In addition to enterprise-class GPUs, there are numerous other acceleration devices that can aid deep learning and AI training workloads. These cards may target specific tasks, offer programmability or meet a tighter budget.
RTX GPUs
Alveo PCIe Gen4
FPGAs
Host CPUs
Either AMD EPYC or Intel Xeon Scalable processors can be chosen when designing your server. Both, now in their 3rd generation, offer expansive ranges of models delivering performance for every budget, all supporting PCIe 4.0 with 64 lanes. Additionally, EPYC P-series processors allow for single-socket configurations where GPU acceleration will be the primary server use, making a server as cost-effective as possible.


System Memory
Depending on the type of workload, a large amount of system memory may be more or less relevant than GPU memory, but with a custom training server the memory capacity can be tailored to your needs. Additionally, a bespoke server allows for simple future memory expansion if required. NVIDIA recommends at least double the amount of system RAM as GPU RAM, so high-end systems may scale into the TBs. Additionally, Intel Xeon-based servers can make use of a combination of traditional DIMMs and Intel Optane Persistent Memory DIMMs, allowing a flexible solution addressing performance, fast caching and extra storage capacity.
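The double-the-GPU-RAM rule of thumb is easy to apply when sizing a configuration. The sketch below is a minimal illustration of that arithmetic; the function name and the example GPU counts and capacities are illustrative, not part of any vendor tool.

```python
def recommended_system_ram_gb(gpu_count: int, gpu_ram_gb: int, multiplier: int = 2) -> int:
    """Apply NVIDIA's rule of thumb: system RAM of at least 2x total GPU memory."""
    return gpu_count * gpu_ram_gb * multiplier

# Example: a server with 8x A100 80GB SXM4 accelerators
print(recommended_system_ram_gb(8, 80))  # 1280 GB, i.e. well over 1 TB of system RAM
```

As the example shows, a fully populated HGX-style system quickly pushes the memory requirement into terabyte territory, which is why high-capacity DIMM options matter.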

Internal Storage
Storage within a training server is also a very personal choice - it may be that a few TB of SSD capacity is enough, for example for financial organisations where even a large volume of files occupies relatively little space. Alternatively, image-based datasets may be vast, so there is never any real option of using internal storage and a separate fast flash storage array is the way to go. If this is the case, internal SSD cost can be minimised and the remaining budget used elsewhere. Flexibility and performance can also be gained by choosing M.2 formats, NVMe connectivity or Optane options as required.
Networking
Depending on whether connectivity is needed to a wider network or to an external flash storage array, networking interfaces and speeds can be customised to suit. Ethernet and InfiniBand options are available at speeds of up to 400Gb/s, both providing powerful CPU offloading to maximise performance and minimise latency.
Additionally, advanced NVIDIA BlueField Data Processing Unit (DPU) NICs can be specified where the highest performance is required, as these cards not only include networking functionality but also accelerate software management, security and storage services by offloading these tasks from the CPU.
Chassis
From 2U compact servers up to 4U expandable systems, chassis choice is key, depending on whether space saving is the main factor or scalability is required. As a custom server can be partially populated, a larger chassis can be chosen with a view to future expansion. Additionally, both air-cooled and liquid-cooled server systems are available.
GPU Virtualisation
NVIDIA vCS

NVIDIA Virtual Compute Server (vCS) enables the benefits of hypervisor-based server virtualisation for GPU-accelerated servers. Datacentre admins are now able to power any compute-intensive workload with GPUs in a virtual machine (VM). vCS software virtualises NVIDIA GPUs to accelerate large workloads, including more than 600 GPU-accelerated applications for AI and deep learning.
With GPU sharing, multiple VMs can be powered by a single GPU, maximising utilisation and affordability, or a single VM can be powered by multiple virtual GPUs, making even the most intensive workloads possible.
Run:AI

It may be that over time, rather than a single bespoke training server, you end up with several systems as technologies advance and workloads increase. Although servers with different CPUs, GPUs and storage will communicate effectively when using common networking interfaces, you may not be getting the maximum utilisation from the various GPUs you have. In this case, Run:AI GPU virtualisation software may be able to help.
Run:AI works by pooling different GPU resources into a single virtual pool and allowing workloads to be scheduled by user or project across the available resources, ensuring that neither hardware nor data scientists dip in productivity.
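The pooled-scheduling idea can be pictured with a toy sketch: GPUs from several servers form one pool, and queued jobs start as soon as enough GPUs free up. This is purely illustrative - the class, names and FIFO policy below are assumptions for the example, not Run:AI's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class GpuPool:
    free: list = field(default_factory=list)     # available GPU ids across all servers
    queue: list = field(default_factory=list)    # waiting jobs as (project, gpus_needed)
    running: dict = field(default_factory=dict)  # project -> list of allocated GPU ids

    def submit(self, project: str, gpus_needed: int) -> None:
        self.queue.append((project, gpus_needed))
        self._schedule()

    def release(self, project: str) -> None:
        # A finished job returns its GPUs to the shared pool.
        self.free.extend(self.running.pop(project))
        self._schedule()

    def _schedule(self) -> None:
        # Simple FIFO: start queued jobs whenever enough GPUs are free.
        while self.queue and len(self.free) >= self.queue[0][1]:
            project, n = self.queue.pop(0)
            self.running[project] = [self.free.pop() for _ in range(n)]

# GPUs from two servers pooled together
pool = GpuPool(free=["srv1:gpu0", "srv1:gpu1", "srv2:gpu0", "srv2:gpu1"])
pool.submit("vision", 3)   # runs immediately, 1 GPU left free
pool.submit("nlp", 2)      # queued: only 1 GPU free
pool.release("vision")     # frees 3 GPUs, so "nlp" starts automatically
```

The point of the sketch is that no GPU sits idle while a job is waiting, regardless of which physical server it lives in - the scheduling layer, not the user, decides where work runs.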
Our intuitive online configurators provide complete peace of mind when building your training server; alternatively, speak directly to one of our friendly system architects.