
NVIDIA Virtual Compute Server (vCS)

NVIDIA Virtual Compute Server (vCS) enables the benefits of hypervisor-based server virtualisation for GPU-accelerated servers. Datacentre admins are now able to power any compute-intensive workload with GPUs in a virtual machine (VM). vCS software virtualises NVIDIA GPUs to accelerate large workloads, including more than 600 GPU-accelerated applications for AI, deep learning, and HPC.

With GPU sharing, multiple VMs can be powered by a single GPU, maximising utilisation and affordability, or a single VM can be powered by multiple virtual GPUs, making even the most intensive workloads possible.

  • Maximise utilisation - increase utilisation and productivity with both GPU sharing and aggregation of multiple GPUs
  • Live migration - live migrate GPU-accelerated VMs without disruption, easing maintenance and upgrades
  • Security - extend the benefits of server virtualisation to GPU workloads
  • Multi-tenant - isolate workloads and securely support multiple users
  • Rapid deployment - leverage GPU-optimised NGC containers for AI, data science, and HPC
  • Reliability - protect against data corruption with error-correcting code (ECC) and dynamic page retirement
  • Enterprise software support - get support with NVIDIA Enterprise and NVIDIA NGC Support Services

GPU Sharing

Fractional GPU sharing is possible with NVIDIA vGPU technology. It enables multiple VMs to share a GPU, maximising utilisation for lighter workloads that require GPU acceleration. Up to 32 users can share a single GPU.
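As a minimal illustration (the pynvml / nvidia-ml-py bindings used here are a general-purpose NVML wrapper, not part of vCS itself), the sketch below shows what a guest VM sees when it has been assigned a fractional vGPU: NVML reports the vGPU profile name and only the framebuffer carved out for that VM, not the whole physical GPU.

```python
import pynvml  # nvidia-ml-py bindings for NVML

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    # In a vGPU guest, the name is the vGPU profile and mem.total is the
    # framebuffer assigned to this VM rather than the full physical GPU.
    print(f"GPU seen by this VM: {name}")
    print(f"Framebuffer available: {mem.total / 1024**3:.1f} GiB")
finally:
    pynvml.nvmlShutdown()
```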

 

GPU Aggregation

With GPU aggregation, a VM can access more than one GPU, which is often required for compute-intensive workloads. vCS supports both multi-vGPU and peer-to-peer computing, using NVLink for higher bandwidth.
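As an illustrative sketch only (PyTorch is used here as one example framework, not something vCS requires), the following code enumerates the GPUs visible inside a VM and spreads a simple workload across all of them:

```python
import torch

# Enumerate the GPUs visible inside the VM.
num_gpus = torch.cuda.device_count()
for i in range(num_gpus):
    print(f"cuda:{i} -> {torch.cuda.get_device_name(i)}")

# Spread a toy workload across all visible GPUs.
model = torch.nn.Linear(1024, 1024)
if num_gpus > 1:
    model = torch.nn.DataParallel(model)  # splits each batch across the GPUs
model = model.to("cuda")

batch = torch.randn(256, 1024, device="cuda")
print(model(batch).shape)
```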

 

Management and Monitoring

vCS provides support for app-, guest-, and host-level monitoring. Proactive management features provide the ability to do live migration, suspend and resume, and create thresholds that expose consumption trends impacting user experiences.
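For guest-level monitoring, the hedged sketch below uses NVML via the pynvml bindings, the same library that backs nvidia-smi; the 90% utilisation threshold is an illustrative value for this example, not a vCS default.

```python
import pynvml  # nvidia-ml-py bindings for NVML

UTIL_THRESHOLD = 90  # percent; an illustrative alerting threshold, not a vCS default

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {util.gpu}% utilisation, "
              f"{mem.used / mem.total:.0%} memory in use")
        if util.gpu > UTIL_THRESHOLD:
            print(f"  GPU {i} exceeds the {UTIL_THRESHOLD}% utilisation threshold")
finally:
    pynvml.nvmlShutdown()
```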

 

NGC

NVIDIA GPU Cloud (NGC) is a hub for GPU-optimised software that simplifies workflows for deep learning, machine learning, and HPC, and now supports virtualised environments with NVIDIA vCS.
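As a rough sketch of launching an NGC container from a guest VM (assuming Docker and the NVIDIA container toolkit are installed; the image tag below is a placeholder to be replaced with a current release from the NGC catalogue):

```python
import subprocess

# Placeholder image tag: replace xx.xx with a current release from the NGC catalogue.
image = "nvcr.io/nvidia/pytorch:xx.xx-py3"

# Launch the container with all GPUs visible and run nvidia-smi as a sanity check.
subprocess.run(
    ["docker", "run", "--rm", "--gpus", "all", image, "nvidia-smi"],
    check=True,
)
```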

 

Peer-to-Peer Computing

NVLink is a high-speed, direct GPU-to-GPU interconnect that provides higher bandwidth, more links, and improved scalability for multi-GPU system configurations—now supported virtually with NVIDIA virtual GPU (vGPU) technology.
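The sketch below, using PyTorch purely for illustration, checks which GPU pairs inside a VM can access each other's memory directly and performs a device-to-device copy:

```python
import torch

n = torch.cuda.device_count()

# Report which GPU pairs can access each other's memory directly
# (over NVLink or PCIe) instead of staging through host memory.
for src in range(n):
    for dst in range(n):
        if src != dst:
            ok = torch.cuda.can_device_access_peer(src, dst)
            print(f"cuda:{src} -> cuda:{dst}: peer access {'yes' if ok else 'no'}")

# A direct device-to-device copy; CUDA uses peer-to-peer transfers where
# the topology allows it.
if n >= 2:
    a = torch.randn(1024, 1024, device="cuda:0")
    b = a.to("cuda:1")
    print(b.device)
```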

 

ECC & Page Retirement

Error correction code (ECC) and page retirement provide higher reliability for compute applications that are sensitive to data corruption. They’re especially important in large-scale cluster-computing environments where GPUs process very large datasets and/or run applications for extended periods.
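As an illustration, the sketch below queries ECC error counters and retired pages through NVML via the pynvml bindings; on GPUs without ECC enabled, these calls raise an NVML error.

```python
import pynvml  # nvidia-ml-py bindings for NVML

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    try:
        corrected = pynvml.nvmlDeviceGetTotalEccErrors(
            handle, pynvml.NVML_MEMORY_ERROR_TYPE_CORRECTED, pynvml.NVML_AGGREGATE_ECC)
        uncorrected = pynvml.nvmlDeviceGetTotalEccErrors(
            handle, pynvml.NVML_MEMORY_ERROR_TYPE_UNCORRECTED, pynvml.NVML_AGGREGATE_ECC)
        retired = pynvml.nvmlDeviceGetRetiredPages(
            handle, pynvml.NVML_PAGE_RETIREMENT_CAUSE_DOUBLE_BIT_ECC_ERROR)
        print(f"ECC errors - corrected: {corrected}, uncorrected: {uncorrected}")
        print(f"Pages retired after double-bit errors: {len(retired)}")
    except pynvml.NVMLError as err:
        print(f"ECC reporting not available on this GPU: {err}")
finally:
    pynvml.nvmlShutdown()
```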

 

Multi-Instance GPU

Multi-Instance GPU (MIG) extends the capabilities of the datacentre by enabling each NVIDIA A100 GPU to be partitioned into up to seven instances, fully isolated and secured at the hardware level with their own high-bandwidth memory, cache, and compute cores.
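As a hedged sketch, the code below uses the pynvml bindings to check whether MIG is enabled on a GPU and how many MIG instances it can host; on GPUs without MIG support, the call raises an NVML error.

```python
import pynvml  # nvidia-ml-py bindings for NVML

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    try:
        current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        if current == pynvml.NVML_DEVICE_MIG_ENABLE:
            count = pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)
            print(f"MIG enabled; up to {count} instances on this GPU")
        else:
            print("MIG is disabled on this GPU")
    except pynvml.NVMLError as err:
        print(f"MIG not supported on this GPU: {err}")
finally:
    pynvml.nvmlShutdown()
```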

 

GPUDirect

GPUDirect uses remote direct memory access (RDMA) technology to enable network devices to directly access GPU memory, bypassing CPU host memory, decreasing GPU-to-GPU communication latency, and completely offloading the CPU.

FEATURE LIST

Feature | NVIDIA vCS
Windows OS Support | -
Linux OS Support | Yes
NVIDIA Compute Driver | Yes
ECC reporting and handling | Yes
Management and monitoring | Yes
Multi-GPU Support | Yes
Maximum Hardware Rendering | 1x 4K
Maximum resolution | 4096 x 2160
SPECIFICATIONS

Specification | A100 | V100S | A40 | RTX 8000 | RTX 6000 | T4
Memory | 40GB HBM2 | 32GB HBM2 | 48GB GDDR6 | 48GB GDDR6 | 24GB GDDR6 | 16GB GDDR6
Peak FP32 | 19.5 TFLOPS | 16.4 TFLOPS | 38.1 TFLOPS | 14.9 TFLOPS | 14.9 TFLOPS | 8.1 TFLOPS
Peak FP64 | 9.7 TFLOPS | 8.2 TFLOPS | - | - | - | -
NVLink (number of GPUs per VM) | Up to 4 | Up to 8 | 2 | 2 | 2 | -
Multi-GPU per VM | Up to 16 | Up to 16 | Up to 16 | Up to 16 | Up to 16 | Up to 16

NVIDIA vCS Solutions

There are a number of supported GPUs for vCS deployment, depending on the performance required and how many GPUs are to be shared per virtual machine. vCS also supports NVIDIA NGC GPU-optimised software for deep learning, machine learning, and HPC. NGC software includes containers for the top AI and data science software, tuned, tested, and optimised by NVIDIA, as well as fully tested containers for HPC applications and data analytics.

Regarding product licensing, unlike NVIDIA vPC and NVIDIA vDWS, vCS is not tied to a user with a display. It is licensed per GPU as a 1-year subscription with NVIDIA enterprise support included. This allows a number of compute workloads in multiple VMs to be run on a single GPU, maximising utilisation of resources and your investment. With these hardware and licensing considerations in mind, there are three main routes to creating a vCS solution.

Use your existing hardware

If your servers are already equipped with sufficient GPUs for your chosen performance requirement, you just need to buy NVIDIA vCS licenses. Please contact us to discuss your NVIDIA vCS licensing requirements.
