World’s most advanced data center accelerator for PCIe-based servers
HPC data centers need to support the ever-growing demands of scientists and researchers while staying within a tight budget. The old approach of deploying lots of commodity compute nodes requires huge interconnect overhead that substantially increases costs without proportionally increasing performance.
NVIDIA Tesla P100 GPU accelerators are the most advanced ever built, powered by the breakthrough NVIDIA Pascal™ architecture and designed to boost throughput and save money for HPC and hyperscale data centers. The newest addition to this family, Tesla P100 for PCIe, enables a single node to replace half a rack of commodity CPU nodes by delivering lightning-fast performance in a broad range of HPC applications.
A GIANT LEAP IN PERFORMANCE
Tesla P100 for PCIe is reimagined from silicon to software, crafted with innovation at every level. Each groundbreaking technology delivers a dramatic jump in performance to substantially boost data center throughput.
PAGE MIGRATION ENGINE
Simpler programming and computing performance tuning means that applications can now scale beyond the GPU’s physical memory size to virtually limitless levels.
COWOS HBM2
Compute and data are integrated on the same package using Chip-on-Wafer-on-Substrate with HBM2 technology for 3X memory performance over the previous-generation architecture.
Up to 18.7 TeraFLOPS of half-precision (FP16), 4.7 TeraFLOPS of double-precision, and 9.3 TeraFLOPS of single-precision performance powers new possibilities in deep learning and HPC workloads.
The Tesla P100 for PCIe is the latest addition to the NVIDIA Tesla Accelerated Computing Platform. Key features include:
• Unmatched application performance for mixed HPC workloads -- Delivering 4.7 teraflops and 9.3 teraflops of double-precision and single-precision peak performance, respectively, a single Pascal-based Tesla P100 node provides the equivalent performance of more than 32 commodity CPU-only servers.
• CoWoS with HBM2 for unprecedented efficiency -- The Tesla P100 unifies processor and data into a single package to deliver unprecedented compute efficiency. An innovative approach to memory design -- chip on wafer on substrate (CoWoS) with HBM2 -- provides a 3x boost in memory bandwidth performance, or 720GB/sec, compared to the NVIDIA Maxwell™ architecture.
• Page Migration Engine for simplified parallel programming -- Frees developers to focus on tuning for higher performance and less on managing data movement, and allows applications to scale beyond the GPU's physical memory size with support for virtual memory paging. Unified memory technology dramatically improves productivity by enabling developers to see a single memory space for the entire node.
• Unmatched application support -- With 410 GPU-accelerated applications, including nine of the top 10 HPC applications, the Tesla platform is the world's leading platform for HPC.
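The Page Migration Engine is exposed to programmers through CUDA Unified Memory: a single pointer is valid on both CPU and GPU, and pages migrate on demand between host and device, which is how an allocation can grow beyond the GPU's physical memory. Below is a minimal sketch, not an official sample; it requires the CUDA toolkit and a Pascal-or-later GPU, and the kernel and sizes are purely illustrative:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: scale a vector in place on the GPU.
__global__ void scale(float *x, size_t n, float a) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    size_t n = 1 << 28;  // ~1 GiB of floats; on Pascal this may even exceed GPU memory
    float *x;

    // One allocation, visible to both CPU and GPU; the Page Migration
    // Engine faults pages over to whichever processor touches them.
    cudaMallocManaged(&x, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) x[i] = 1.0f;   // pages resident on the CPU first

    scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);  // pages migrate to the GPU on fault
    cudaDeviceSynchronize();

    printf("x[0] = %f\n", x[0]);                  // pages migrate back on CPU read
    cudaFree(x);
    return 0;
}
```

The point of the example is what is absent: there are no cudaMalloc/cudaMemcpy pairs and no explicit staging of data, which is the "focus on tuning, not data movement" claim above in practice.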
Tesla P100 for PCIe Specifications:
• 4.7 teraflops double-precision performance, 9.3 teraflops single-precision performance and 18.7 teraflops half-precision performance with NVIDIA GPU BOOST™ technology
• Support for PCIe Gen 3 interconnect (32GB/sec bi-directional bandwidth)
• Enhanced programmability with Page Migration Engine and unified memory
• ECC protection for increased reliability
• Server-optimized for highest data center throughput and reliability
• Available in two configurations:
16GB of CoWoS HBM2 stacked memory, delivering 720GB/sec of memory bandwidth
12GB of CoWoS HBM2 stacked memory, delivering 540GB/sec of memory bandwidth