Scan code: LN118801 Manufacturer code: TCSA100M-80GB-PB

PNY NVIDIA A100 80GB HBM2 Ampere Graphics Card

PNY NVIDIA A100 80GB HBM2 Graphics Card, 6912 Cores, 19.5 TFLOPS SP, 9.7 TFLOPS DP

Delivery Options
  • By DPD to your specified address | £11.50

    Have your parcel delivered by DPD to your specified address and receive an SMS with a one-hour delivery window. Weekend, timed and European delivery options are available at checkout.

  • Collect in store from our Bolton store, BL6 6PE | Free

    Order online and collect from our Bolton store (25-28 Enterprise Park, Middlebrook, Horwich, Bolton, BL6 6PE).

  • 48HR REPLACEMENT: if you need to return this item, your replacement will be dispatched within 2 working days of your product arriving back at Scan.

PNY NVIDIA A100 80GB Tensor Core GPU with Ampere Architecture


Unprecedented Acceleration at Every Scale

The PNY NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and HPC, tackling the world's toughest computing challenges. As the engine of the NVIDIA data center platform, the A100 can efficiently scale up to thousands of GPUs or, using Multi-Instance GPU (MIG) technology, be partitioned into seven isolated GPU instances to accelerate workloads of all sizes. The A100's third-generation Tensor Core technology accelerates more levels of precision for diverse workloads, speeding time to insight as well as time to market. Designed for the age of elastic computing, the NVIDIA Ampere architecture delivers the next giant leap in acceleration, enabling researchers and engineers to do their life's work.

80GB HBM2 GPU Memory

PCI Express 4.0 x16 System Interface

6,912 CUDA Cores

1.935 TB/sec Memory Bandwidth

Groundbreaking Innovative Technology


NVIDIA AMPERE ARCHITECTURE

A100 accelerates workloads big and small. Whether using MIG to partition an A100 GPU into smaller instances or NVLink to connect multiple GPUs to accelerate large-scale workloads, A100 can readily handle different-sized acceleration needs, from the smallest job to the biggest multi-node workload. A100's versatility means IT managers can maximize the utility of every GPU in their data center around the clock.


THIRD-GENERATION TENSOR CORES

A100 delivers 312 teraFLOPS (TFLOPS) of deep learning performance. That's 20X the Tensor FLOPS for deep learning training and 20X the Tensor TOPS for deep learning inference compared to NVIDIA Volta™ GPUs.
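As a rough sanity check (illustrative arithmetic only), the quoted 312 TFLOPS dense Tensor Core figure works out to 16X the card's 19.5 TFLOPS FP32 peak from the spec table:

```python
# Back-of-envelope check of the quoted Tensor Core figure (illustrative only).
fp32_tflops = 19.5          # FP32 peak from the spec table
tensor_fp16_tflops = 312.0  # quoted dense FP16/BF16 Tensor Core peak

ratio = tensor_fp16_tflops / fp32_tflops
print(f"Tensor FP16 is {ratio:.0f}x the FP32 peak")  # 16x
```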


NEXT-GENERATION NVLINK

NVIDIA NVLink in A100 delivers 2X higher throughput compared to the previous generation. When combined with NVIDIA NVSwitch™, up to 16 A100 GPUs can be interconnected at up to 600 gigabytes per second (GB/sec) to unleash the highest application performance possible on a single server. NVLink is available in A100 SXM GPUs via HGX A100 server boards and in PCIe GPUs via an NVLink Bridge for up to 2 GPUs.
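The 600 GB/sec figure follows from NVIDIA's published third-generation NVLink layout (link count and per-link rate below are from NVIDIA's Ampere documentation, not this listing):

```python
# Arithmetic behind the 600 GB/s NVLink figure (link counts per NVIDIA's Ampere docs).
links = 12                 # NVLink 3.0 links per A100 SXM GPU
gb_per_link_per_dir = 25   # GB/s in each direction, per link
total_bidirectional = links * gb_per_link_per_dir * 2
print(total_bidirectional)  # 600
```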


MULTI-INSTANCE GPU (MIG)

An A100 GPU can be partitioned into as many as seven GPU instances, fully isolated at the hardware level with their own high-bandwidth memory, cache, and compute cores. MIG gives developers access to breakthrough acceleration for all their applications, while IT administrators can offer right-sized GPU acceleration for every job, optimizing utilization and expanding access to every user and application.
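A minimal sketch of how the partitioning budget works on an 80 GB card. The profile names below follow NVIDIA's MIG documentation for the A100 80GB; this is an illustration of the slice/memory accounting, not a driver API:

```python
# Illustrative MIG accounting for an 80 GB A100 (profile names per NVIDIA's MIG docs).
mig_profiles = {
    "1g.10gb": (1, 10),  # (compute slices, GB of memory)
    "2g.20gb": (2, 20),
    "3g.40gb": (3, 40),
    "4g.40gb": (4, 40),
    "7g.80gb": (7, 80),
}

# The smallest profile can be instantiated seven times on one card.
slices, mem = mig_profiles["1g.10gb"]
instances = 7
assert instances * slices <= 7 and instances * mem <= 80
print(f"{instances} x 1g.10gb -> {instances * mem} GB of the 80 GB used")
```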


HBM2E

With 80 gigabytes of HBM2e, A100 delivers the world's fastest GPU memory bandwidth of over 2 TB/s on the SXM variant (this PCIe card reaches 1.935 TB/s), as well as a dynamic random-access memory (DRAM) utilization efficiency of 95%. A100 delivers 1.7X higher memory bandwidth over the previous generation.
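The 1,935 GB/s figure in the spec table is the product of the 5,120-bit memory bus and the per-pin data rate. The per-pin rate below is inferred from those two published numbers, not stated in this listing:

```python
# Where the 1,935 GB/s figure comes from (per-pin data rate is an inferred assumption).
bus_width_bits = 5120    # from the spec table
data_rate_gbps = 3.024   # approx. effective HBM2e per-pin rate (inferred, not listed)
bandwidth_gb_s = bus_width_bits / 8 * data_rate_gbps
print(round(bandwidth_gb_s))  # ~1935
```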


STRUCTURAL SPARSITY

AI networks are big, with millions to billions of parameters. Not all of these parameters are needed for accurate predictions, and some can be converted to zeros to make the models "sparse" without compromising accuracy. Tensor Cores in A100 can provide up to 2X higher performance for sparse models. While the sparsity feature more readily benefits AI inference, it can also improve the performance of model training.
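The A100's sparsity feature uses a 2:4 structured pattern: in every group of four weights, two may be zero. A minimal sketch of that pruning rule in plain Python (illustrative only; NVIDIA's actual kernels operate on compressed tensors in hardware):

```python
# Minimal sketch of 2:4 structured sparsity: in every group of four weights,
# keep the two largest magnitudes and zero the rest (illustrative, not NVIDIA's kernel).
def prune_2_of_4(weights):
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]), reverse=True)[:2]
        pruned.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return pruned

print(prune_2_of_4([0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.4, 0.01]))
# [0.9, 0.0, 0.0, -0.7, 0.0, 0.3, -0.4, 0.0]
```

Because the zeroed positions follow a fixed pattern, the hardware can skip them entirely, which is what yields the up-to-2X throughput claim above.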

How Fast is the A100?

Up to 249X Higher AI Inference Performance over CPUs (DGX configuration)

The NVIDIA A100 Tensor Core GPU is the flagship product of the NVIDIA data center platform for deep learning, HPC, and data analytics. The platform accelerates over 700 HPC applications and every major deep learning framework. It's available everywhere, from desktops to servers to cloud services, delivering both dramatic performance gains and cost-saving opportunities.

Key Specifications
Graphics Chipset NVIDIA A100
Edition A100 80 GB
Manufacturing process  
Microarchitecture Ampere
GPU Name  
Cores & Clocks
NVIDIA CUDA Cores 6,912
Core Clock  
Boost Clock  
Video Memory (VRAM)
Memory Size 80 GB
Memory Type HBM2
Memory Clock  
Memory Bit Rate 5120 Bit
Memory Bandwidth 1935 GB/s
ECC Technology Yes
Cooling Passive
I/O & Connectivity
Interface PCIe 4.0 (x16)
Interface Bandwidth  
Graphics Output  
Multi GPU Support Supported
NVLink Support Yes (NVLink Bridge, up to 2 GPUs)
Total NVLink Bandwidth 600 GB/s
Microsoft DirectX Support  
HDCP Ready No
Multi Monitor Support  
Concurrent Users N/A
H.264 1080p30 Streams N/A
Maximum Digital Resolution  
Maximum VGA Resolution  
Supported Graphics APIs  
Compute Performance
Supported Compute APIs  
Single Precision (FP32) Processing Yes
Single Precision (FP32) Performance 19.5 teraFLOPS
Double Precision (FP64) Processing Yes
Double Precision (FP64) Performance 9.7 teraFLOPS
Tensor Processing Yes
Deep Learning (Tensor) Performance  
NVIDIA Tensor Cores  
Integer Operations (INT8)  
Rays Cast  
Power & Thermals
Graphics Card Power Connectors 1 x 8-pin PCIe
Board Power  
Minimum Recommended PSU 300 W
Maximum GPU Temperature  
Form Factor Dual Slot
Low Profile Compatible No
Low Profile Support  
Dimensions 111.15 (H) x 267.7 (L) mm
Package Type Retail
Additional Information
Scan Code LN118801
Model Number TCSA100M-80GB-PB
GTIN 3536403386689
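The FP32 figure in the table can be reproduced from the core count. The boost clock used below is NVIDIA's published 1,410 MHz for the A100, which this table leaves blank, so treat it as an assumption:

```python
# Sanity check of the 19.5 TFLOPS FP32 peak (assumes NVIDIA's published
# 1,410 MHz boost clock, which the spec table above leaves blank).
cuda_cores = 6912
flops_per_core_per_clock = 2   # one fused multiply-add = 2 FLOPs
boost_clock_ghz = 1.41
peak_tflops = cuda_cores * flops_per_core_per_clock * boost_clock_ghz / 1000
print(round(peak_tflops, 1))  # 19.5
```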

Please note your statutory rights are not affected.

For further information regarding Scan's warranty procedure please see our terms and conditions

Warranty: 36 months, return to base
DOA Period: 28 days
RTB Period: 24 months
Manufacturer Contact Details
0871 472 4747