NVIDIA DGX B300 AI Appliance

The ultimate building block for AI factories powering AI reasoning

The Ultimate AI Reasoning Appliance by NVIDIA

This sixth-generation DGX datacentre AI appliance is built around the Blackwell Ultra architecture and the flagship B300 accelerator, delivering an 11x increase in inference performance and a 4x increase in training performance over the previous generation.

The DGX B300 includes eight 800Gb/s NVIDIA ConnectX-8 SuperNICs and two NVIDIA BlueField-3 DPUs for connecting to external storage. It is supported by the NVIDIA Base Command management suite and the NVIDIA AI Enterprise software stack, and backed by specialist technical advice from NVIDIA DGXperts.

Enquire Now

Accelerated AI with NVIDIA B300 GPUs

The latest DGX B300 appliance features eight SXM6 B300 Blackwell Ultra Tensor Core GPUs with a total of 2.3TB of HBM3e memory, delivering unmatched performance and energy efficiency. The GPUs are connected by fifth-generation NVLink technology, creating the fastest AI platform available today.

NVIDIA DGX B300
GPUs: 8x NVIDIA B300
FP4 TENSOR CORE: TBC
FP8/FP6 TENSOR CORE: TBC
INT8 TENSOR CORE: TBC
FP16/BF16 TENSOR CORE: TBC
TF32 TENSOR CORE: TBC
FP32: TBC
FP64/FP64 TENSOR CORE: TBC
ARCHITECTURE: Blackwell Ultra
CUDA CORES: TBC
TENSOR CORES: TBC
TOTAL GPU MEMORY: 2.3TB HBM3e
MEMORY CONTROLLER: TBC
NVLINK: 5th generation
NVSWITCH: NVLink 5 Switch
NVSWITCH GPU-TO-GPU BANDWIDTH: 1.8TB/s
NVLINK BANDWIDTH: 14.4TB/s
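The headline memory and bandwidth figures in the table follow directly from the per-GPU numbers: eight B300 GPUs at 288GB of HBM3e each, and 1.8TB/s of GPU-to-GPU NVLink bandwidth per GPU. A quick sanity check (using decimal TB, as is conventional in marketing specs):

```python
# Sanity-check the headline figures in the spec table above.
GPUS = 8
HBM_PER_GPU_GB = 288            # HBM3e per B300 GPU
NVLINK_PER_GPU_TBS = 1.8        # GPU-to-GPU bandwidth via NVLink 5 Switch

total_memory_tb = GPUS * HBM_PER_GPU_GB / 1000   # decimal TB
aggregate_nvlink_tbs = GPUS * NVLINK_PER_GPU_TBS

print(f"Total GPU memory: {total_memory_tb:.1f} TB")                   # 2.3 TB
print(f"Aggregate NVLink bandwidth: {aggregate_nvlink_tbs:.1f} TB/s")  # 14.4 TB/s
```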

AI-Ready Software Stack

NVIDIA Base Command powers the DGX systems, enabling organisations to leverage the best of NVIDIA software innovation. Enterprises can unleash the full potential of their DGX infrastructure with a proven platform that includes enterprise-grade orchestration and cluster management, libraries that accelerate compute, storage and network infrastructure, and an operating system optimised for AI workloads. This is further enhanced by NVIDIA AI Enterprise.


NVIDIA AI Enterprise

NVIDIA AI Enterprise unlocks access to a wide range of frameworks that accelerate the development and deployment of AI projects. Leveraging pre-configured frameworks removes many of the manual tasks and complexity associated with software development, enabling you to deploy your AI models faster as each framework is tried, tested and optimised for NVIDIA GPUs. The less time spent developing, the greater the ROI on your AI hardware and data science investments.

Rather than leaving you to assemble thousands of interdependent libraries and APIs from different authors when building your own AI applications, NVIDIA AI Enterprise removes this pain point by providing the full AI software stack, spanning use cases such as healthcare, computer vision, speech and generative AI.

Enterprise-grade support is provided on a 9x5 basis with a four-hour SLA and direct access to NVIDIA’s AI experts, minimising risk and downtime while maximising system efficiency and productivity. A three-year NVIDIA AI Enterprise licence is included as standard with 3XS AI workstations with A800 GPUs. One-, three- and five-year licences can also be purchased with other GPUs.

Find out more

Workload Management

Run:ai software enables intelligent resource management and consumption, so that users can easily access GPU fractions, multiple GPUs or clusters of servers for workloads of every size and at every stage of the AI lifecycle. This ensures that all available compute is utilised and GPUs never sit idle. Run:ai’s scheduler is a simple plug-in for Kubernetes clusters, adding high-performance orchestration to your containerised AI workloads.
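The idea behind fractional GPU access can be illustrated with a toy scheduler. The sketch below is purely illustrative — the class, function and job names are invented, and Run:ai’s real scheduler is a Kubernetes plug-in with far more sophisticated placement logic — but it shows how several small jobs can share one GPU while larger jobs claim whole devices:

```python
# Toy illustration of fractional GPU allocation (not Run:ai's actual API or
# algorithm): first-fit packing of fractional requests onto whole GPUs.
from dataclasses import dataclass, field

@dataclass
class GPU:
    index: int
    free: float = 1.0                      # fraction of the GPU still available
    jobs: list = field(default_factory=list)

def schedule(jobs: dict, gpu_count: int) -> dict:
    """Place each job (name -> requested GPU fraction) on the first GPU with room."""
    gpus = [GPU(i) for i in range(gpu_count)]
    placement = {}
    for name, fraction in jobs.items():
        for gpu in gpus:
            if gpu.free + 1e-9 >= fraction:
                gpu.free -= fraction
                gpu.jobs.append(name)
                placement[name] = gpu.index
                break
        else:
            raise RuntimeError(f"no capacity for {name}")
    return placement

# Three fractional jobs share GPU 0; the full-GPU training job lands on GPU 1.
print(schedule({"notebook-a": 0.25, "notebook-b": 0.25, "inference": 0.5,
                "training": 1.0}, gpu_count=8))
```

In practice the scheduler also handles preemption, queueing and fairness across teams; the point here is simply that fractional requests let otherwise-idle GPU capacity be consumed.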

FIND OUT MORE

AI Optimised Storage

AI-optimised storage appliances ensure that your NVIDIA DGX systems are utilised as fully as possible and always working at maximum efficiency. Scan AI offers software-defined storage appliances powered by PEAK:AIO, plus further options from leading brands such as Dell EMC, NetApp and DDN, to ensure we have an AI-optimised storage solution that is right for you.

FIND OUT MORE

Managed Hosting Solutions

AI projects scale rapidly and can consume huge amounts of GPU-accelerated resource, alongside significant storage and networking overheads. To address these challenges, the Scan AI Ecosystem includes managed hosting options. We’ve partnered with a number of secure datacentre providers to deliver tailor-made hardware hosting environments that combine high performance and scalability with security and peace of mind. Organisations maintain control over their own systems without the day-to-day administration or the complex racking, power and cooling concerns associated with on-premises infrastructure.

FIND OUT MORE

NVIDIA BasePOD & SuperPOD

DGX BasePOD and SuperPOD are NVIDIA reference architectures based around a specific infrastructure kit list that Scan can configure and deploy into your organisation to deliver AI at scale.

PODs start as small as 20 DGX nodes and scale all the way to 140 nodes, managed by a comprehensive software stack to form a complete cluster.

FIND OUT MORE

Start your DGX Journey

The NVIDIA DGX B300 AI appliance is available with three- or five-year support contracts, extendable at a later date. Comprehensive media retention packages are also available for more data-sensitive projects.

GPU: 8x NVIDIA B300 288GB – 2.3TB total
CPU: 2x Intel Xeon Platinum
RAM: TBC
System Drives: TBC
Storage Drives: TBC
Networking: 8x 800Gb/s NVIDIA ConnectX-8 SuperNICs (InfiniBand/Ethernet) and 2x 800Gb/s NVIDIA BlueField-3 DPUs (InfiniBand/Ethernet)
Power: TBC
Form Factor: 10U

Enquire now
Scan AI

Get in touch with our AI team.