Scan code:
LN89207
Manufacturer code:
TCSV100M-32GB-PB
NVIDIA TESLA Volta V100 32GB PCIe GPU Accelerator Card
32GB PNY NVIDIA Tesla V100, PCIe 3.0 (x16), Volta V100, 5120 CUDA + 640 Tensor Cores, ECC HBM2, 14 TF SP, 7 TF DP
End Of Life
This product is no longer available to purchase
Delivery Options
- Delivery by DPD to your specified address | £11.50. Receive an SMS with a one-hour delivery window. Weekend, timed and European delivery options are available at checkout.
- Collect in store from our Bolton store, BL6 6PE | Free
- 48HR Replacement: if you need to return this item, your replacement will be dispatched within 2 working days of your product arriving back at Scan.
Product Overview
The Most Advanced Data Center GPU Ever Built - Now with 32GB of HBM2 Memory for Even The Largest Workloads
NVIDIA® Tesla® V100 is the most advanced data center GPU ever built, accelerating AI, HPC, and graphics. Powered by NVIDIA Volta™, the latest GPU architecture, Tesla V100 offers the performance of up to 100 CPUs in a single GPU, enabling data scientists, researchers, and engineers to tackle challenges that were once thought impossible.
Tesla V100 is the flagship product of the Tesla data center computing platform for deep learning, HPC, and graphics. The Tesla platform accelerates over 450 HPC applications and every major deep learning framework, and is available everywhere from desktops to servers to cloud services, delivering both dramatic performance gains and cost-saving opportunities.
NVIDIA VOLTA: The New GPU Architecture, Designed to Bring AI to Every Industry
The Core of AI
AI is not defined by any one industry. It exists in fields of supercomputing, healthcare, financial services, big data analytics, and gaming. It is the future of every industry and market because every enterprise needs intelligence, and the engine of AI is the NVIDIA GPU computing platform.
NVIDIA Volta™ is the new driving force behind artificial intelligence. Volta will fuel breakthroughs in every industry. Humanity’s moonshots like eradicating cancer, intelligent customer experiences, and self-driving vehicles are within reach of this next era of AI.
Ground-Breaking Innovations
640 Tensor Cores: An Exponential Leap in Performance
Every industry needs AI, and with this massive leap forward in speed, AI can now be applied to every industry. Equipped with 640 Tensor Cores, Volta delivers over 100 teraflops (TFLOPS) of deep learning performance, more than a 5X increase over the prior-generation NVIDIA Pascal™ architecture.
New GPU Architecture: Engineered for the Modern Computer
Humanity’s greatest challenges will require the most powerful computing engine for both computational and data science. With over 21 billion transistors, Volta is the most powerful GPU architecture the world has ever seen. It pairs NVIDIA® CUDA® and Tensor Cores to deliver the performance of an AI supercomputer in a GPU.
Volta-Optimized Software: GPU-Accelerated Frameworks and Applications
Data scientists are often forced to make trade-offs between model accuracy and longer run-times. With Volta-optimized CUDA and NVIDIA Deep Learning SDK libraries like cuDNN, NCCL, and TensorRT, the industry’s top frameworks and applications can easily tap into the power of Volta. This propels data scientists and researchers towards discoveries faster than before.
Supports Every Deep Learning Framework:
• Caffe2
• Microsoft Cognitive Toolkit
• MXNet
• PyTorch
• TensorFlow
• Theano
Supports 450+ GPU-Accelerated Applications Including:
• AMBER
• ANSYS Fluent
• GAUSSIAN
• GROMACS
• LS-DYNA
• NAMD
• OpenFOAM
• Simulia Abaqus
• VASP
• WRF
Features
Volta Architecture
By pairing CUDA Cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and Deep Learning.
Tensor Core
Equipped with 640 Tensor Cores, Tesla V100 delivers 120 TeraFLOPS of deep learning performance. That is 12X the Tensor FLOPS for DL training and 6X the Tensor FLOPS for DL inference compared to NVIDIA Pascal™ GPUs.
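The headline Tensor Core number can be sanity-checked from the core count. A minimal sketch, assuming each Tensor Core performs 64 fused multiply-adds per clock (the 4x4x4 matrix operation) and a ~1380 MHz boost clock for the PCIe card sold here (the SXM2 part boosts higher, which is where the 120+ TFLOPS figure comes from):

```python
# Back-of-the-envelope check of the Tensor Core throughput figure.
# Assumed values: 64 FMAs per Tensor Core per clock, ~1380 MHz boost
# clock for the PCIe variant of Tesla V100.
tensor_cores = 640
fma_per_core_per_clock = 64        # 4x4x4 matrix multiply-accumulate
flops_per_fma = 2                  # one multiply + one add
boost_clock_hz = 1.38e9            # assumed PCIe boost clock

tflops = tensor_cores * fma_per_core_per_clock * flops_per_fma * boost_clock_hz / 1e12
print(f"{tflops:.0f} TFLOPS")      # ~113 TFLOPS for the PCIe card
```

Under these assumptions the PCIe card lands at roughly 112-113 Tensor TFLOPS, consistent with the "over 100 TFLOPS" claim.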
Maximum Efficiency Mode
The new maximum efficiency mode allows data centers to achieve up to 40% higher compute capacity per rack within the existing power budget. In this mode, Tesla V100 runs at peak processing efficiency, providing up to 80% of the performance at half the power consumption.
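The efficiency claim above follows directly from the two quoted ratios. A quick sketch (the 1.6X figure is an idealised upper bound implied by the marketing numbers; the quoted 40% rack-level gain is lower, presumably because other rack overheads do not scale down with GPU power):

```python
# Perf-per-watt implied by maximum efficiency mode:
# 80% of peak performance at 50% of the power budget.
relative_perf = 0.80
relative_power = 0.50

efficiency_gain = relative_perf / relative_power
print(efficiency_gain)  # 1.6x better performance per watt (idealised)
```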
HBM2
With a combination of improved raw bandwidth of 900 GB/s and higher DRAM utilization efficiency at 95%, Tesla V100 delivers 1.5X higher memory bandwidth over Pascal GPUs as measured on STREAM.
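The 900 GB/s raw figure can be reconstructed from the memory interface. A rough sketch, assuming V100's 4096-bit aggregate HBM2 interface (four stacks) at roughly 1.76 Gb/s per pin, values not stated in this listing:

```python
# Rough reconstruction of the quoted 900 GB/s raw HBM2 bandwidth.
# Assumed values: 4096-bit aggregate bus, ~1.76 Gb/s per pin.
bus_width_bits = 4096
data_rate_gbps_per_pin = 1.76       # assumed pin data rate

bandwidth_gbs = bus_width_bits * data_rate_gbps_per_pin / 8
print(f"{bandwidth_gbs:.0f} GB/s")  # ~901 GB/s raw

# Effective bandwidth at the quoted 95% DRAM utilisation efficiency:
effective_gbs = bandwidth_gbs * 0.95   # ~856 GB/s deliverable
```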
Programmability
Tesla V100 is architected from the ground up to simplify programmability. Its new independent thread scheduling enables finer-grain synchronization and improves GPU utilization by sharing resources among small jobs.