PNY NVIDIA H100 NVL 94GB HBM3 Hopper Data Centre Graphics Card
94GB NVIDIA H100 NVL Data Center Graphics Card, PCIe 5.0, Hopper, HBM3, NVLink
PNY NVIDIA H100 94GB NVL Data Center Graphics Card
The NVIDIA® H100 NVL Tensor Core GPU is the most optimised platform for LLM inference, with its high compute density, high memory bandwidth, high energy efficiency, and unique NVLink architecture. It also delivers unprecedented acceleration to power the world’s highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. NVIDIA H100 NVL Tensor Core technology supports a broad range of math precisions, providing a single accelerator for every compute workload. The NVIDIA H100 NVL supports double-precision (FP64), single-precision (FP32), half-precision (FP16), 8-bit floating point (FP8), and integer (INT8) compute tasks.
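As an illustration only (not part of the PNY/NVIDIA documentation), the sketch below shows one common way to select half-precision (FP16) inference from PyTorch on an H100-class GPU. The layer and tensor shapes are placeholders, and FP8 inference is typically accessed through NVIDIA Transformer Engine, which is not shown here.

```python
# Minimal sketch: FP16 inference via automatic mixed precision on a CUDA GPU.
# The Linear layer is a stand-in for a real model; shapes are arbitrary.
import torch

model = torch.nn.Linear(4096, 4096).cuda().eval()
x = torch.randn(8, 4096, device="cuda")

with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    y = model(x)

print(y.dtype)  # torch.float16
```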
The NVIDIA H100 NVL card is a dual-slot 10.5 inch PCI Express Gen5 card based on the NVIDIA Hopper™ architecture. It uses a passive heat sink for cooling, which requires system airflow to operate the card properly within its thermal limits. The NVIDIA H100 NVL operates unconstrained up to its maximum thermal design power (TDP) level of 400 W to accelerate applications that require the fastest computational speed and highest data throughput.
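For context, a rough Python sketch using the nvidia-ml-py (pynvml) bindings to read the card's power limit, current draw and temperature is shown below. It assumes the H100 NVL is visible as GPU index 0 with a recent NVIDIA driver installed; the printed values are illustrative only.

```python
# Minimal sketch: query power limit, power draw and temperature via NVML.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):
    name = name.decode()

limit_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts
usage_mw = pynvml.nvmlDeviceGetPowerUsage(handle)            # milliwatts
temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)

print(f"{name}: power limit {limit_mw / 1000:.0f} W, "
      f"current draw {usage_mw / 1000:.0f} W, temperature {temp_c} C")

pynvml.nvmlShutdown()
```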
The NVIDIA H100 NVL debuts the world’s highest PCIe card memory bandwidth of nearly 4,000 gigabytes per second (GB/s). This speeds time to solution for the largest models and most massive datasets. The NVIDIA H100 NVL card also features Multi-Instance GPU (MIG) capability, which can be used to partition the GPU into as many as seven hardware-isolated GPU instances. This provides a unified platform that enables elastic data centers to adjust dynamically to shifting workload demands and to allocate the right amount of resources for everything from the smallest jobs to the biggest multi-GPU workloads.
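As a hedged sketch (again using pynvml, and assuming GPU index 0 with a MIG-capable driver), the snippet below only inspects the current MIG configuration. Actually creating or destroying MIG instances is an administrative task normally performed with the nvidia-smi tool.

```python
# Minimal sketch: check whether MIG is enabled and list populated MIG instances.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

current_mode, pending_mode = pynvml.nvmlDeviceGetMigMode(handle)
print("MIG enabled:", current_mode == pynvml.NVML_DEVICE_MIG_ENABLE)

if current_mode == pynvml.NVML_DEVICE_MIG_ENABLE:
    count = pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)
    for i in range(count):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, i)
        except pynvml.NVMLError:
            continue  # this MIG slot is not populated
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(f"MIG instance {i}: {mem.total / 2**30:.0f} GiB")

pynvml.nvmlShutdown()
```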
NVIDIA H100 NVL versatility means that IT managers can maximise the utility of every graphics processing unit (GPU) in their data center. NVIDIA H100 NVL cards use three NVIDIA® NVLink® bridges, the same bridges used with NVIDIA H100 PCIe cards. These allow two NVIDIA H100 NVL cards to be connected to deliver 600 GB/s of bidirectional bandwidth, or 10x the bandwidth of PCIe Gen4, to maximise application performance for large workloads.
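The short pynvml sketch below, offered as an assumption-laden illustration rather than vendor documentation, probes which NVLink links are active on GPU 0 and which PCI peer each active link connects to. It does not assume a fixed link count; it simply probes until NVML reports no more links.

```python
# Minimal sketch: enumerate NVLink link states and remote PCI peers on GPU 0.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

link = 0
while True:
    try:
        state = pynvml.nvmlDeviceGetNvLinkState(handle, link)
    except pynvml.NVMLError:
        break  # no more links to query
    if state == pynvml.NVML_FEATURE_ENABLED:
        peer = pynvml.nvmlDeviceGetNvLinkRemotePciInfo(handle, link)
        bus_id = peer.busId.decode() if isinstance(peer.busId, bytes) else peer.busId
        print(f"NVLink {link}: active, peer PCI bus ID {bus_id}")
    else:
        print(f"NVLink {link}: inactive")
    link += 1

pynvml.nvmlShutdown()
```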
Features
Manufacturing Process: 4nm.
Memory: 94GB HBM3.
Memory Bandwidth: 3,938 GB/s.
Power Consumption: 400W.
Specifications
FP64: 68 teraFLOPS
FP64 Tensor Core: 134 teraFLOPS
FP32: 134 teraFLOPS
TF32 Tensor Core: 1,979 teraFLOPS (with Sparsity)
FP16 Tensor Core: 3,958 teraFLOPS (with Sparsity)
FP8 Tensor Core: 7,916 teraFLOPS (with Sparsity)
INT8 Tensor Core: 7,916 TOPS (with Sparsity)
GPU Memory: 188GB
GPU Memory Bandwidth: 7.8 TB/s (aggregate HBM bandwidth)
Please Note: The above specifications are based on two H100 NVL cards paired with NVLink bridges.