PNY NVIDIA H200 NVL 141GB PCIE Hopper Data Centre Graphics Card
141GB NVIDIA H200 NVL Data Center Graphics Card, PCIe 5.0, Hopper, HBM3e, NVLink
Higher Performance with Larger, Faster Memory
The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities.
Based on the NVIDIA Hopper™ architecture, the NVIDIA H200 is the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s), nearly double the capacity of the NVIDIA H100 Tensor Core GPU, with 1.4X more memory bandwidth. The H200's larger, faster memory accelerates generative AI and large language models while advancing scientific computing for HPC workloads, with better energy efficiency and lower total cost of ownership.
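As a quick sanity check on those ratios, the claims line up with the commonly published H100 SXM figures (the H100 numbers below are an assumption drawn from public specs, not from this listing):

```python
# Back-of-envelope check of the H200 vs. H100 memory claims.
h200_capacity_gb = 141
h200_bandwidth_tbs = 4.8
h100_capacity_gb = 80      # assumed H100 SXM capacity (public spec)
h100_bandwidth_tbs = 3.35  # assumed H100 SXM bandwidth (public spec)

capacity_ratio = h200_capacity_gb / h100_capacity_gb       # "nearly double"
bandwidth_ratio = h200_bandwidth_tbs / h100_bandwidth_tbs  # the quoted "1.4X"

print(f"capacity: {capacity_ratio:.2f}x, bandwidth: {bandwidth_ratio:.2f}x")
```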
Unlock Insights with High-Performance LLM Inference
In the ever-evolving landscape of AI, businesses rely on large language models to address a diverse range of inference needs. An AI inference accelerator must deliver the highest throughput at the lowest TCO when deployed at scale for a massive user base.
The H200 doubles inference performance compared to H100 GPUs when handling large language models such as Llama2 70B.
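One reason memory bandwidth dominates large-model inference: single-stream decoding must stream every weight for each generated token, so bandwidth divided by model size gives a rough upper bound on tokens per second. This is a standard rule of thumb, not a figure from this listing:

```python
# Bandwidth-bound ceiling on single-stream decode speed:
# each generated token reads all model weights from memory once.
def max_tokens_per_s(bandwidth_tbs: float, model_gb: float) -> float:
    return bandwidth_tbs * 1000 / model_gb  # convert TB/s to GB/s

# Llama 2 70B in FP8 (~70 GB of weights) on an H200 (4.8 TB/s):
print(f"~{max_tokens_per_s(4.8, 70):.0f} tokens/s ceiling")
```

Real deployments land below this ceiling, and batching raises effective throughput, but the bound shows why the 1.4X bandwidth gain translates directly into faster LLM serving.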
Supercharge High-Performance Computing
Memory bandwidth is crucial for HPC applications, as it enables faster data transfer and reduces processing bottlenecks. For memory-intensive HPC applications such as simulations, scientific research, and artificial intelligence, the H200's higher memory bandwidth ensures data can be accessed and manipulated efficiently, delivering up to 110X faster time to results.
Reduce Energy and TCO
With the introduction of H200, energy efficiency and TCO reach new levels. This cutting-edge technology offers unparalleled performance, all within the same power profile as the H100 Tensor Core GPU. AI factories and supercomputing systems that are not only faster but also more eco-friendly deliver an economic edge that propels the AI and scientific communities forward.
Unleashing AI Acceleration for Mainstream Enterprise Servers with H200 NVL
The NVIDIA H200 NVL is the ideal choice for customers with space constraints within the data center, delivering acceleration for every AI and HPC workload regardless of size. With a 1.5X memory increase and a 1.2X bandwidth increase over the previous generation, customers can fine-tune LLMs within a few hours and experience LLM inference 1.8X faster.
Enterprise-Ready: AI Software Streamlines Development and Deployment
NVIDIA H200 NVL is bundled with a five-year NVIDIA AI Enterprise subscription and simplifies the way you build an enterprise AI-ready platform. H200 accelerates AI development and deployment for production-ready generative AI solutions, including computer vision, speech AI, retrieval-augmented generation (RAG), and more. NVIDIA AI Enterprise includes NVIDIA NIM™, a set of easy-to-use microservices designed to speed up enterprise generative AI deployment. Together, they provide enterprise-grade security, manageability, stability, and support, resulting in performance-optimized AI solutions that deliver faster business value and actionable insights.
• 141GB of HBM3e GPU memory.
• 4.8TB/s of memory bandwidth.
• 4 petaFLOPS of FP8 performance.
• 2X LLM inference performance compared to H100.
• Up to 110X faster time to results for HPC.
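The specs above also explain the fine-tuning and inference claims for 70B-class models: a rough weight-footprint calculation shows what fits in 141 GB at different precisions (the bytes-per-parameter values are standard rules of thumb, assumed rather than taken from this listing):

```python
# Rough GPU-memory footprint of model weights at different precisions.
# Bytes-per-parameter values are standard rules of thumb (assumption).
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 = GB

H200_MEMORY_GB = 141
model_b = 70  # e.g. Llama 2 70B

for name, bpp in [("FP16", 2.0), ("FP8", 1.0)]:
    gb = weights_gb(model_b, bpp)
    print(f"{name}: ~{gb:.0f} GB of weights -> fits in 141 GB? {gb < H200_MEMORY_GB}")
```

At FP16 the weights alone (~140 GB) barely squeeze in with no headroom for the KV cache or activations, while FP8 (~70 GB) leaves roughly half the card free for long-context serving; this is why a single H200 NVL can host 70B-class inference that previously required multiple GPUs.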