PNY NVIDIA DGX 8x B200 1,440GB AI Server System
PNY NVIDIA DGX B200 Deep Learning AI System, 8x B200 GPUs, 1440GB HBM3e, 72 petaFLOPS FP8 Performance, 3yr std Support
PNY NVIDIA DGX B200 AI Infrastructure System
The Ultimate AI Training Appliance
The sixth-generation DGX datacentre AI appliance is built around the Blackwell architecture and the flagship B200 accelerator, providing unprecedented training and inference performance in a single system. The DGX B200 includes 400Gb/s NVIDIA ConnectX-7 SmartNICs and NVIDIA BlueField-3 DPUs for connecting to external storage, supported by the NVIDIA Base Command management suite and the NVIDIA AI Enterprise software stack, and backed by specialist technical advice from NVIDIA DGXperts.
Powering the Next Generation of AI
Your AI Centre of Excellence
Artificial intelligence is transforming almost every business by automating tasks, enhancing customer service, generating insights, and enabling innovation. It’s no longer a futuristic concept but a reality that’s fundamentally reshaping how businesses operate. However, as AI workloads continue to grow, they require significantly more compute capacity than most enterprises have available. To leverage AI, enterprises need high-performance computing, storage, and networking capabilities that are secure, reliable, and efficient.
Enter NVIDIA DGX™ B200, the latest addition to the NVIDIA DGX platform. This unified AI platform defines the next chapter of generative AI by taking full advantage of NVIDIA Blackwell GPUs and high-speed interconnects. Configured with eight Blackwell GPUs, DGX B200 delivers unparalleled generative AI performance with a massive 1.4 terabytes (TB) of GPU memory, 64 terabytes per second (TB/s) of aggregate HBM3e memory bandwidth, and 14.4 TB/s of all-to-all GPU bandwidth, making it uniquely suited to handle any enterprise AI workload.
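As a rough back-of-envelope check (an illustrative sketch only, not quoted per-GPU specifications), the system-level figures above and in the product title divide evenly across the eight GPUs:

# Back-of-envelope arithmetic for the DGX B200 headline figures (Python).
# Per-GPU values below are simply the system totals divided by eight;
# treat them as illustrative derivations rather than quoted specifications.
NUM_GPUS = 8
system_hbm3e_capacity_gb = 1440      # total GPU memory
system_hbm3e_bandwidth_tbs = 64      # aggregate HBM3e memory bandwidth
system_nvlink_bandwidth_tbs = 14.4   # all-to-all GPU bandwidth
system_fp8_petaflops = 72            # FP8 performance from the product title

print(f"HBM3e capacity per GPU:   {system_hbm3e_capacity_gb / NUM_GPUS:.0f} GB")
print(f"HBM3e bandwidth per GPU:  {system_hbm3e_bandwidth_tbs / NUM_GPUS:.0f} TB/s")
print(f"NVLink bandwidth per GPU: {system_nvlink_bandwidth_tbs / NUM_GPUS:.1f} TB/s")
print(f"FP8 per GPU:              {system_fp8_petaflops / NUM_GPUS:.0f} petaFLOPS")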
With NVIDIA DGX B200, enterprises can equip their data scientists and developers with a universal AI supercomputer to accelerate their time to insight and fully realise the benefits of AI for their businesses.
An Order-of-Magnitude Leap for Accelerated Computing
Proven Infrastructure Standard
NVIDIA DGX B200 is the world’s first system with the NVIDIA Blackwell GPU, delivering breakthrough performance for the world’s most complex AI problems, such as large language models and natural language processing. DGX B200 offers a fully optimised hardware and software platform that includes the complete NVIDIA AI software stack, a rich ecosystem of third-party support, and access to expert advice from NVIDIA professional services, allowing organisations to solve the biggest and most complex business problems with AI.
One Platform for Develop-to-Deploy
As AI workflows have become more sophisticated, so too has the need for enterprises to handle large datasets at all stages of the AI pipeline, from training to fine-tuning to inference. This requires massive amounts of compute power. With NVIDIA DGX B200, enterprises can arm their developers with a single, unified platform built to accelerate their workflows. Supercharged for the next generation of generative AI, DGX B200 lets businesses infuse AI into their daily operations and customer experiences.
Powerhouse of AI Performance
NVIDIA is dedicated to designing the world’s most powerful supercomputers, built to tackle the most complex AI problems that enterprises face. DGX B200 is the latest addition to the NVIDIA accelerated computing platform. Powered by the NVIDIA Blackwell architecture, DGX B200 delivers 3X the training performance and 15X the inference performance of DGX H100. As the foundation of NVIDIA DGX POD™ reference architectures, DGX B200 offers high-speed scalability for NVIDIA DGX BasePOD™ and NVIDIA DGX SuperPOD™, delivering top-of-the-line performance in a turnkey AI infrastructure solution.
NVIDIA Base Command
NVIDIA Base Command powers the DGX platform, enabling organisations to leverage the best of NVIDIA software innovation. Enterprises can unleash the full potential of their DGX infrastructure with a proven platform that includes enterprise-grade orchestration and cluster management, libraries that accelerate compute, storage, and network infrastructure, and an operating system optimised for AI workloads. Additionally, DGX infrastructure includes NVIDIA AI Enterprise, a suite of software optimised to streamline AI development and deployment.
AI Workflow Management and MLOps: Get more models from prototype to production.
Job Scheduling & Orchestration: Ensure hassle-free execution of every developer's jobs (a minimal single-node example follows this list).
Kubernetes
Slurm
Cluster Management: Effortlessly scale and manage from one node to thousands.
Provisioning
Monitoring
Clustering
Managing
Network/Storage Acceleration Libraries & Management: Accelerate end-to-end infrastructure performance.
Network IO
Storage IO
In-network Compute
IO Management
DGX OS Extensions for Linux Distributions: Maximise system uptime, security, and reliability.
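Below is a minimal sketch of the kind of single-node, eight-GPU data-parallel job that a Base Command scheduler (Slurm or Kubernetes) would place on one DGX B200 node. It assumes PyTorch with CUDA is installed and that the script is launched with torchrun --nproc_per_node=8 train_sketch.py; the model, data, and hyperparameters are toy placeholders, not a real workload.

# Sketch: single-node, eight-GPU data-parallel training job (placeholder workload).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each of the 8 processes.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = torch.device("cuda", local_rank)

    # Toy model wrapped in DDP so gradients are all-reduced across the eight
    # Blackwell GPUs over NVLink.
    model = torch.nn.Linear(4096, 4096).to(device)
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 4096, device=device)
        loss = ddp_model(x).square().mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()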
A New Era of Performance
GPU: 8x NVIDIA Blackwell B200 (1,440GB HBM3e total)
CPU: 2x Intel® Xeon® Platinum 8570 (112 cores total)
RAM: 2TB
System Drives: 2x 1.9TB NVMe SSDs
Storage Drives: 8x 3.84TB NVMe SSDs
Networking: 8x 400Gb/s NVIDIA ConnectX-7 VPI InfiniBand/Ethernet and 2x 400Gb/s NVIDIA BlueField-3 DPU InfiniBand/Ethernet
Power: 14.3kW
Form Factor: 10U
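As a quick sanity check against the specifications above (a sketch only, assuming PyTorch with CUDA is available on the DGX OS image), the GPU count and total GPU memory can be read back from the software stack:

# Sketch: confirm the eight GPUs and their total HBM3e capacity are visible.
import torch

assert torch.cuda.is_available(), "No CUDA devices visible"
count = torch.cuda.device_count()
print(f"GPUs visible: {count} (spec sheet: 8)")

total_gb = 0.0
for i in range(count):
    props = torch.cuda.get_device_properties(i)
    mem_gb = props.total_memory / 1e9
    total_gb += mem_gb
    print(f"  GPU {i}: {props.name}, {mem_gb:.0f} GB")

print(f"Total GPU memory: {total_gb:.0f} GB (spec sheet: 1,440 GB)")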
Find out more: NVIDIA DGX B200